When chasing down performance problems on a website, you’ll often hit a warning about deferring offscreen images. This warning occurs when imagery “below the fold” (i.e., the area you must scroll to see) loads on your webpages before it’s needed. This problem is especially rampant in CMS systems, where you’re never quite sure what the content authoring team is assembling.
Unfortunately, images are critical. We can’t eliminate them. According to the State of Images report, at the time of writing, the average webpage loads 29 images totaling roughly 950kb. The report states that the average webpage could shave an additional 300kb off that payload if images were lazily loaded.
Luckily, there are a few tricks to solve this within Sitecore, although these principles extend to any website or CMS architecture.
First off, when I talk about foreground imagery, I’m talking about any image loaded directly via an inline <img> tag within your HTML. This is the simplest kind of image; it’s HTML 101.
Background imagery, on the other hand, is imagery loaded via the CSS background-image property. These are often large banner images used to build logical chunks of your webpages (think stripe components).
In both cases, what we need to do is craft the initial HTML response coming out of the server with the proper markup so that JavaScript can take over and perform the actual “loading” of images for us. This technique is nothing new, and you can read about it in depth on css-tricks.com.
Let’s start with the front-end pieces.
Traditionally, to serve an image you would have HTML output similar to this:
<img src="/path/to/my/image/file.png" />
The unfortunate part about this tag is that the browser will initiate a download of this image as soon as it sees this HTML snippet, regardless of where this tag is on the page. We can prevent that download by transforming the tag slightly:
<img data-src="/path/to/my/image/file.png" class="lazy" />
The data-src treatment will prevent the browser from automatically loading this image. Keep in mind that this will result in what appears to be a broken image if the user actually sees it (unless you style .lazy accordingly). If this is a problem, you can take a secondary but similar approach:
<img src="/path/to/a/really/small/placeholder.jpg#/path/to/my/real/image.png" class="lazy" />
In this second approach, we’re actually loading a real image (placeholder.jpg), but this “real” image can be as simple as a 1px by 1px white box. The overall method is still the same.
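If you go with this placeholder-and-hash variant, the JavaScript swap we’ll get to in a moment needs a small tweak, since the real image path rides along after the “#” in src rather than in data-src. A minimal sketch (the swapPlaceholder helper name is mine, and it assumes the placeholder URL itself contains no “#”):

function swapPlaceholder(img) {
  // e.g. "/path/to/a/really/small/placeholder.jpg#/path/to/my/real/image.png"
  var parts = img.getAttribute('src').split('#');

  if (parts.length === 2) {
    img.src = parts[1]; // load the real image
  }

  img.classList.remove('lazy');
}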
Essentially, whenever the image comes into view, we need to use a bit of JavaScript to replace “data-src” with “src”. Doing so triggers the browser to download and paint the image immediately.
var lazyloadThrottleTimeout; // throttle handle so rapid scroll events don't thrash the DOM

function lazyload() {
  if (lazyloadThrottleTimeout) {
    clearTimeout(lazyloadThrottleTimeout);
  }

  lazyloadThrottleTimeout = setTimeout(function () {
    var lazyElements = document.querySelectorAll(".lazy");
    var scrollTop = window.pageYOffset;

    Array.prototype.forEach.call(lazyElements, function (elem) {
      // within 100px of the bottom of the screen
      if (elem.offsetTop - 100 < (window.innerHeight + scrollTop)) {
        if (elem.dataset.src) {
          // find any data-src and switch it to src
          elem.src = elem.dataset.src;
        }
        elem.classList.remove('lazy');
      }
    });

    if (lazyElements.length == 0) {
      document.removeEventListener("scroll", lazyload);
      window.removeEventListener("resize", lazyload);
      window.removeEventListener("orientationchange", lazyload);
    }
  }, 20);
}

document.addEventListener("scroll", lazyload);
window.addEventListener("resize", lazyload);
window.addEventListener("orientationchange", lazyload); // note: the event name is all lowercase
lazyload(); // go ahead and invoke on page load
Again, this JavaScript is nothing new. You can see a similar example on css-tricks.com or within this codepen. We’re essentially looping through all elements with the class of “lazy” and processing the ones that are currently visible on the screen. Processing involves swapping data-src for src and removing the class of “lazy”.
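As a side note, newer browsers also offer IntersectionObserver, which can replace the scroll/resize listeners entirely. This isn’t part of the approach above, just a minimal sketch assuming the same .lazy / data-src markup convention:

if ('IntersectionObserver' in window) {
  var lazyObserver = new IntersectionObserver(function (entries, observer) {
    entries.forEach(function (entry) {
      if (!entry.isIntersecting) { return; }

      var elem = entry.target;
      if (elem.dataset.src) {
        elem.src = elem.dataset.src; // same data-src to src swap as above
      }
      elem.classList.remove('lazy'); // lets the background-image CSS trick shown later kick in too
      observer.unobserve(elem); // each element only needs processing once
    });
  }, { rootMargin: '100px' }); // start loading roughly 100px before the element is visible

  Array.prototype.forEach.call(document.querySelectorAll('.lazy'), function (elem) {
    lazyObserver.observe(elem);
  });
}
// otherwise, fall back to the scroll-based lazyload() above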
Up until this point, everything you’ve seen revolves around pure HTML and JS. Luckily, the Sitecore CMS is flexible enough to give us full control over how HTML is rendered, so we can absolutely accommodate our new front-end trick. Rather than attempt to render images by hand, what we want to do is leverage the existing @Html.Sitecore().Field(…) helper methods and extend them to always render images with the data-src attribute. To do this, we’ll patch into the renderField pipeline.
<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:set="http://www.sitecore.net/xmlconfig/set/">
  <sitecore>
    <pipelines>
      <renderField>
        <processor role:require="ContentDelivery or Standalone"
                   patch:before="processor[@type='Sitecore.Pipelines.RenderField.GetImageFieldValue, Sitecore.Kernel']"
                   type="MySite.Pipelines.RenderField.LazyImageRenderer, MySite.Custom" />
      </renderField>
    </pipelines>
  </sitecore>
</configuration>
Drop this into your website’s Pipelines.config file (or make one if you don’t have one). The renderField pipeline gets executed every time Sitecore attempts to render a field. Each processor within the pipeline checks the type of the field being rendered and generates the necessary HTML if it can. Once HTML is generated, the pipeline is exited. Traditionally, each processor is responsible for a particular field type. In our case, we’re patching a processor in place before the default Sitecore GetImageFieldValue processor (which is responsible for generating the <img> tag markup).
Our processor is quite simple. We can inherit from Sitecore’s GetImageFieldValue
processor to do most of the heavy lifting for us. From there, all we really need to do is alter the results being written to the HTTP response:
public class LazyImageRenderer : Sitecore.Pipelines.RenderField.GetImageFieldValue
{
    public LazyImageRenderer()
    {
    }

    public virtual void Process(RenderFieldArgs args)
    {
        Assert.ArgumentNotNull((object)args, nameof(args));

        if (!this.IsImage(args)) // base utility method
            return;

        if (this.ShouldExecute())
        {
            base.Process(args); // generate "default" result

            args.Result.FirstPart = this.FixImageTag(args.Result.FirstPart); // alter results

            args.AbortPipeline(); // exit out
        }
    }

    public bool ShouldExecute()
    {
        if (!Sitecore.Context.PageMode.IsNormal)
            return false;

        if (Sitecore.Context.Site == null)
            return false;

        if (RenderingContext.Current?.Rendering?.RenderingItem?.InnerItem == null)
            return false;

        // note: you have access to RenderingContext.Current.Rendering at this point in time
        // you could check this for the current placeholder or type of rendering being processed
        // this may be useful if you want to avoid lazy loading images within your header, or
        // prevent lazy loading images within your hero renderings.

        return true;
    }

    public string FixImageTag(string tag)
    {
        // swap src= for data-src= to trick the browser into ignoring this image tag
        tag = tag.Replace("src=", "data-src=");

        // important: inject a class of "lazy" to ensure JavaScript can lazily load this image
        if (tag.Contains("class=\""))
        {
            tag = tag.Replace("class=\"", "class=\"lazy ");
        }
        else
        {
            tag = tag.Replace("/>", "class=\"lazy\" />");
        }

        return tag;
    }
}
As you can see, we’re simply replacing “src” with “data-src” and injecting the CSS class of “lazy” in place. Easy enough, right?
One important note here: it may or may not be wise for you to defer all images on your website. If you can, try to detect and lazily load only the images which are below the fold. This may not be possible in all scenarios, but checking RenderingContext.Current.Rendering against a list of known Hero renderings is a good way to know whether this should or shouldn’t execute. Another option may be to create an “above the fold” placeholder for all pages. If all else fails, it may be wise to lazy load all imagery if your pages are long and this warning must be fixed, but if your pages are short and you only have a handful of below-the-fold images, lazy loading may cause more harm than good.
Also take note that JavaScript is required for this to work, so if supporting non-JavaScript experiences is a must, you shouldn’t implement this (although that, too, could become a check; perhaps the processor could inspect the current HttpContext for a flag set by JavaScript).
Up until this point, I’ve mostly talked about Foreground imagery. So what about background imagery?
Background imagery simply looks like this:
<div style="background-image: url('/path/to/some/banner/image.jpg')"> <!-- ... --> </div>
This is where the power of CSS comes in really handy. Remember our Javascript snippet earlier? It’s indiscriminately looking for all instances of the class “lazy” on the page, regardless of tag type, and is removing the class as they scroll into view. We can leverage this to our advantage with this CSS:
<style type="text/css"> .lazy { background-image: none !important; transition: background-image ease-out 0.1s; } </style>
Be sure to put this directly within the <head> of your HTML (NOT within a css file), and ensure that this renders before any other images on the page. Forcing background-image: none !important;
will override all other in-lined background-image styles on the website. Whenever the lazy class is removed, the inline style will take over and the image will load. We typically throw a bit of spice on top with the ease-out css as well. This means that our above example simply needs to be modified to:
<div class="lazy" style="background-image: url('/path/to/some/banner/image.jpg')"> <!-- ... --> </div>
Now, another quick tip is to apply a similar technique to your foreground images:
<style type="text/css"> img.lazy { opacity: 0; transition: opacity ease-out 0.1s; } </style>
Put this in the head of your document as well, and it will ensure that any of your lazy images won’t appear “broken” if the user happens to scroll one into view before their network can catch up and load the image.
Again, shout out to css-tricks.com for sharing their wonderful guide on lazy loading, and I hope this helps you within your Sitecore journey.
Following my previous blog post on how to add a new Dimension to a Data Sync task, this post looks at how to add a Fact and perform a lookup on dimensions while loading the target fact table in a data warehouse using Data Sync. To refer to the blog post on adding a Dimension, follow this link. The latest releases of Data Sync include a few important features, such as performing look-ups during an ETL job, so I intend to cover these best practices when adding new dimension and fact tasks. These instructions are based on Oracle BI Data Sync version 2.3.2 and may not apply to previous versions of Data Sync.
Recruiting real end users for usability testing can be costly and time consuming. For anyone but a user experience (UX) purist, the temptation to use employees for usability testing can be hard to resist. Employees are accessible and already paid for, so why not use them? After all, they are users outside of work. So, is there really a problem?
I worked at a software firm several years ago that had 3,500 employees on-site, and yes, we did use employees for some of our website testing. It can be an acceptable solution if what you want to find out is more general user behavior like:
That “if” is a big one, so let me explain how we handled it before you report me to the UX police. We always stayed away from usability testing with:
Why? People in those positions tend to know too much about website development or might be prone to overthink their reactions during testing because of their role in the company. We found that usability testing with employees from the following categories gave us face-value reactions:
Remember, we had a campus of 3,500 employees, so filtering our employees to find UX testing participants still gave us enough participants to have some confidence in the data we collected.
Sure! In an ideal world where enough time and money were always available for usability testing, yes. However, there are times when practical and ideal are at odds with each other. The worst thing you can do is not do any testing. All testing has some risks, such as insufficient sample size or the participants acting differently because they know they are being tested. But even with these risks, taking the “risk concern” to the extreme and not performing any testing ensures that you don’t get any insight. Remember to keep things in perspective. Testing website usability is very different from testing the failure rate of a heart valve.
As a researcher, I always paid attention to any behavior or comments from the employees that seemed like insider knowledge. When that happened, we were careful to call it out to stakeholders as behavior that might be suspect. As always, we were hypervigilant in keeping the identities of our testing participants anonymous. These are some of the judgment calls that you make as a researcher when it comes to qualitative data. They are part of the process of usability testing.
If you have filled out screeners in the past before being part of a focus group or some other type of testing, you know that what you do for a living, your age, education, or employment status can play into whether you’re accepted into the research. If you have a large enough pool of possible participants from your employees, or if the role they perform is far enough removed, then you might be able to take advantage of employee convenience.
Sometimes it’s possible to run with scissors, as long as you run slowly and let everyone around you know you’ve got scissors. And yes, it’s usually more ideal to have amazing research budgets and not use employees.
Okay, now you can call the UX police on me.
Search engine optimization (SEO) started as a discipline of testing and iteration. Most of what was learned came from continuous cycles of trial and error. As the industry grew and matured, the recommendations and advice shared on message boards, forums and later blogs and conferences, congealed with public statements from Google and Bing’s webmaster ambassadors to form a broadly accepted set of “best practices.”
The process of trial and error and iteration too often gives way to blind acceptance of theories and theoretical statements as almost universal truths. Unfortunately, many of these axioms that sound more like bumper stickers than advanced marketing advice fail to consider the incredible nuance and complexity of today’s advanced search engine optimization campaigns.
In theory, migrating your site from HTTP to HTTPS should be a simple and straightforward process. Once the SSL is installed, you simply redirect users and search engines to the new secure URL and wait.
In reality, though, the process is much more complex and introduces a considerable amount of risk to your site’s current search engine visibility and traffic. At Perficient Digital, we’ve been involved with hundreds of HTTPS migrations in recent years and have seen reality jump up and bite this theory in some pretty creative ways.
In theory, adding a site-wide redirect from your HTTP URLs to your new HTTPS URLs should be a safe and simple process. But if you already have a plethora of redirects in place, adding an extra layer may be the straw that breaks the camel’s (or search engine’s) back.
We recently encountered a situation where multiple site-wide redirects combined to create a multi-step process that some browsers refused to render. Rather than try to explain the intricate chain, I’ve diagramed it below with the previous redirects in red, and the new site-wide redirects in blue.
Theoretically, each of those redirects is a recommended best practice. In reality, a mistake in execution led to a convoluted mess, confusing users, breaking browsers, and befuddling the search engines.
Google has gone on record multiple times claiming that, in theory, 300 level redirects (301, 302, 307) are eventually all treated the same and it doesn’t matter which you use. When asked which type of redirect is best for SEO, Google’s Gary Illyes once answered “Don’t worry about it. Just use whatever you want, use whatever makes sense for you.”
While that might be true in theory or over an extended period of time, the reality is that it often matters quite a lot.
A month after a client’s platform migration, we noticed old URLs suddenly showing back up in the index causing significant ranking fluctuations. After investigating, we discovered a recent code deployment had changed all the 301 redirects put in place during the migration into 302 redirects. When the 301 redirects were restored, the old pages quickly dropped back out of the index.
While 30X level redirects may be treated the same over the long haul or in theory, the reality is 301 redirects remain the best option for timely search engine adoption of redirects.
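To make that concrete, here’s a minimal sketch of forcing a single, explicit 301 hop from HTTP to a canonical HTTPS host. It assumes a Node/Express server and a made-up www.example.com host; the same idea applies to whatever web server or CDN actually terminates your traffic:

var express = require('express');
var app = express();

// Hypothetical canonical host; adjust to your own.
var CANONICAL_HOST = 'www.example.com';

// Note: behind a load balancer you may need app.set('trust proxy', true)
// for req.secure to reflect the original protocol.
app.use(function (req, res, next) {
  var needsHttps = !req.secure;
  var needsHost = req.headers.host !== CANONICAL_HOST;

  if (needsHttps || needsHost) {
    // Build the final destination in one step so browsers and crawlers
    // see a single 301 rather than a chain of protocol/host redirects.
    return res.redirect(301, 'https://' + CANONICAL_HOST + req.originalUrl);
  }

  next();
});

app.listen(3000);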
Another popular axiom in the digital marketing space is to just “build for users.” This is a bit of a mantra for Google, showing up in their 10 Things We Know to be True and their SEO Starter Guide. In a 2016 chat, Google’s John Mueller responded to a question by saying “do what works best for the user, and search engines will generally figure it out from there too.”
Unfortunately, the world of digital marketing is rarely that simple.
Search engines do best with simple and straightforward site hierarchies and URL structures. Users, however, crave choices and control.
Shoppers browsing an eCommerce site, for example, need to filter, combine or customize the products they’re viewing. Faceted navigation or URL parameters can be a dream come true for users, but when applied to tens of thousands of products across hundreds of categories, it quickly becomes a tangled nightmare of conflicting signals for search engines to sort through and interpret.
Phrases like “compromise” or “best available solution” and answers like “it depends” don’t make great bumper stickers, but SEO is rarely as simple or as clear-cut as theoretical answers or best practices make it seem. Successful SEO strategies require flexibility and ingenuity to find the ideal solutions for each site’s unique set of challenges, technical debt, and circumstances.
If you’d like to learn about a few more instances where SEO theories don’t live up to reality, check out my recent Pubcon Austin presentation on the topic below.
Before we get started, I recommend reading about the revealing module pattern and closure, if you’re not already familiar with them.
When you are building components for use in a CMS, it’s important to understand that you have less control over the use of these components than you may initially think. Programming these blocks in such a way that they operate independently and discretely becomes more of an issue than if you were building a static or informational site, where you might be able to exhibit more control over the usage and structure of the components.
A good way to combat this is to make sure that as you’re writing the functionality of all of these blocks, you are mindful of “scope leak” and “clobbering.” By leveraging JavaScript scope and closure, and keeping best practices in mind, we can be sure that our blocks play nicely with others.
In JavaScript, “scope leak” is what happens when pieces of code that should be discrete get defined globally. Variables become defined globally. Functions are placed on the window object. If this is done with little to no regard for future-proofing, the global namespace can get quite messy and unwieldy.
Example
var componentOptions = {...};
console.log(window.componentOptions);
In this example, the object componentOptions
was defined globally off of the window, and therefore is accessible from anywhere else in the code base.
When you don’t pay attention to your scoping, and you commit the various crimes of “scope leak,” there’s a chance that the functionality of one component will completely or partially override the functionality of another component.
Example
// Component 1
var componentOptions = {...};
function doSomething() {...}

// Component 2
var componentOptions = {...};
function doSomething() {...}
In this example, both components’ code is defined globally off of window. Since both options objects and both doSomething
functions are named the same, whichever component is initialized last will overwrite the first component’s options and function.
In JavaScript, variables and functions are lexically scoped; when a variable or function is defined inside of a function, it is only available from within that function and any of its “children” functions. While this might be a strange concept for an entry-level JS programmer to grasp, the subtle nuances of the language shine brightly in the concept of closure.
Example
function init(){
  var localVar = true;
  console.log(localVar);
}

init();
console.log(localVar);
In this example, localVar is defined within the init function. When logged inside of init, localVar will be true. However, outside of init, localVar doesn’t exist at all; the final console.log will throw a ReferenceError (“localVar is not defined”). This phenomenon allows us to use the revealing module pattern to discourage poor practices.
When we create a component, if we create its discrete functionality within a function, its code will operate separately from other component code due to closure. In this way, we’re able to build blocks of code that are portable, reusable, and independent of the rest of the code base.
Example
var component = (function(){
  var localVar = true;

  return {
    init: init
  };

  function init(){
    console.log(localVar);
  }
}());

component.init();
In this example, we create a variable and set it equal to an IIFE (immediately invoked function expression) that returns an object with a single method. The object reveals the init function, which has access to all of the IIFE’s closure (in this case the localVar), and now both the init function and the localVar variable are protected against clobbering and are not leaking all over the global scope.
Here’s the fun part. If all your components are in the revealing module pattern, you have to scope all of your components somewhere. We recommend namespacing a main container object that in turn contains a utils and a components object (with an optional pages object). The issue with this is the global namespace object has to be defined off of window, and it has to be set up before you load any of your component code. Additionally, all your component code should also guard against null errors.
Example
Script in the <head> tag
var PD = {};
Base script (that executes before any component/util script)
(function(){
  PD = PD || {};
  PD.components = PD.components || {};
  PD.utils = PD.utils || {};
}());
Component script
PD = PD || {};
PD.components = PD.components || {};

PD.components.myComponent = (function(){...}());
In this multi-step example, we first define the namespaced object to make sure it’s there for future use. One of the first things we do in our external script files, before we even start defining components, is write code making sure that the sub-objects exist. We do the same before defining individual components. All of this is in an effort to avoid some kind of race condition where a component gets defined before the components object, causing a null reference error, or where your defined component gets accidentally overwritten with an empty object later. (Note the absence of the var keyword in the second and third parts of this example; in this case, we do want to define the objects globally off of window, e.g. window.PD.components.)
When you use the revealing module pattern to create CMS-ready components, you have to come up with a way to initialize them. Traditionally, when you have control over the template or page, you can initialize only the components actually used on the page, just before the closing <body> tag. When using a CMS, we don’t know ahead of time which components will be used, where, in what order, or in what configuration. Therefore, it is important that we make sure each component is given a chance to initialize.
Example
Component script
PD = PD || {};
PD.components = PD.components || {};

PD.components.myComponent = (function(){
  return {
    init: init
  };

  function init(){...}
}());
Base initialization script (after all components/utils have been defined)
(function(){
  var component;

  if (PD.components) {
    for (component in PD.components) {
      if (PD.components.hasOwnProperty(component) && PD.components[component] && PD.components[component].init) {
        PD.components[component].init();
      }
    }
  }
}());
In this example, the components are defined individually, and after all components are defined, the initialization code iterates through them and initializes them all at once. This works well provided two things are true: every component has a function named “init,” and every init function checks to see whether the component is on the page before attempting to initialize it. For instance, you would not want a gallery to be initialized on every page if there was no gallery on that page to init.
Example
PD = PD || {};
PD.components = PD.components || {};

PD.components.gallery = (function(){
  var $galleries = $();

  return {
    init: init
  };

  function init(){
    // note: jQuery's add() returns a new collection, so reassign it
    $galleries = $galleries.add($('.my-gallery-selector'));
    $galleries.each(function(){...});
  }
}());
In this example, the gallery’s initialization code will iterate through all the galleries that have been added to the jQuery collection of $galleries
and do something with each of them. If the jQuery collection is empty, nothing will happen.
It is also possible to initialize specific components instead of all components, by calling each component’s init function explicitly.
Example
Base initialization script (after all components/utils have been defined)
(function(){
  PD.components.nav.init();
  PD.components.videoModal.init();
  PD.components.capabilities.init();
  ...
}());
In this example, instead of iterating through every component, we choose specific components to initialize in a specific order. There is no immediate downside to this, other than the fact that it is more manual and that initialization calls will have to be added to this list in the future as more components are created. The mass-initializing method works more “automagically.”
TL;DR
Use the revealing module pattern to avoid “scope leak” and “clobbering.” Utilize “closure” to globally define a namespace “container” for your components. Write your components in a way that they can be initialized in any order and irrespective of other components on the page.
Modular component styles are becoming increasingly important in modern UI trends. Handling scalability while decreasing naming convention differences is a necessary step towards code cohesion.
Disclaimer: I use a slightly modified flavor of BEM syntax, feel free to use traditional BEM, or some other flavor of your own.
When developing UI components, specifically for use in CMS or other modular platforms, it becomes necessary to organize your components’ styles. Naming conventions of components will become more important, both as more developers begin touching the same codebase, and as the codebase’s size and complexity grows.
To combat issues that arise from multiple developers and a growing list of components, we have to begin standardizing our components. This is where a CSS methodology like BEM comes into play. With BEM syntax, we can all use a common naming convention in all of our components. This way, any one developer can modify and make changes to any other developer’s work.
BEM syntax is a naming convention for CSS classes. BEM has 3 parts to its naming structure:
I won’t get into the specifics of the BEM prescribed name rules, but I encourage you to familiarize yourself with them.
As mentioned above, I don’t use as strict a set of rules as BEM dictates. The major difference is that I prefer to denote more of a tree structure:
HTML
<div class="promo"> <div class="promo--background"></div> <div class="promo--content"> <div class="promo--content--header">...</div> <div class="promo--content--info">...</div> <div class="promo--content--cta">...</div> </div> </div>
This might make for longer and more verbose class names, and I don’t recommend you get too crazy and literally build a selector from every leaf. In this example, I’d still have my .promo--title
and .promo--subtitle
inside of .promo--content--header
and not name them .promo--content--header--title
(that’s just ridiculous). The key here is that I use these “layers” where they make sense structurally, where the “element” being named lies within a “section” of the master “block.” I prefer to separate these structural “layers” with double hyphens, instead of the BEM-prescribed double underscore. The reasoning behind this comes down to code legibility: two hyphens are visibly separated in a monospace font, whereas two underscores get ligatured together. This “double-dash” style makes a class’s layers more immediately parsable by human eyes.
I also prefer to just chain modifier classes:
HTML
<div class="promo square">...</div>
CSS
.promo.square {...}
This affords me the ability to style all elements similarly, while keeping the markup’s class-lists less cluttered.
As with any use of SCSS, it is important to keep the SCSS legible and not nest too deeply (so as not to create a specificity nightmare). However, when you componentize SCSS into modular pieces, it’s a good idea to keep your files organized. We have chosen to standardize on a “component-level” selector, and then nest concatenated selectors underneath it. The idea is that every style needed for a given component is then chunked beneath a single SCSS node.
SCSS
.promo {
  &--header {...}
  &--content {...}
  &--cta {...}
}
In this example, the chunk of SCSS that contains all the styles for the promo component is nested underneath the single promo selector. Using the concatenation combinator can be dangerous for readability; it’s easy to get lost in the SCSS and wonder which selector a rule is actually applying to. For this reason, we’ve established that it’s necessary to comment every selector that uses the concatenation combinator (to change the selector).
SCSS
.promo {
  &--header { // .promo--header
    &.closed {...}
  }
}
In this example, we commented the .promo--header
selector, but did not comment the &.closed
selector, since the closed selector is just a modifier and doesn’t change the base selector. An added benefit of using the concatenation combinator is that, theoretically, specificity stays relatively light all the way down the SCSS node tree; it becomes easier to manage specificity later, whether by overriding or when you go to use media queries, mixins, etc.
TL;DR
SCSS is great. BEM makes sense, mostly. Use structural “layer” selectors. Nest selectors to modularize component styles. Use concatenation combinator to avoid specificity hell. Comment concatenated selectors to avoid readability issues. Don’t get crazy with the naming or nesting.
Now that we have three different approaches to extract Oracle ERP Cloud data from OTBI, there comes the question – what’s the best practice, or which one should you recommend? Before answering the question, let’s take a look at the feature list of each approach so we can understand what they are best at.
Feature List
| Features | Logical SQL (SA) | Physical SQL (PVO) | Custom SQL (BIP) |
|---|---|---|---|
| Uses the built-in Oracle BI Connector in Data Sync for data mappings | x | x | |
| Leverages ERP Cloud Public View Objects (PVOs) for execution | x | x | |
| Complies with ERP Cloud PVO data security | x | x | |
| Supports data incremental loads | x¹ | x | x² |
| Uses analyses’ advanced feature to obtain logical SQLs | x | | |
| Allows access to what is not available in Standard SAs | | x | |
| Allows future switch to Logical SQLs if desired | | x | |
| Leverages BI Publisher Web Service for execution | | | x |
| Allows direct access to the ERP Cloud database for what is not available in either Standard SAs or PVOs | | | x |

¹ The last-modified date required for incremental loads is not always available in all subject area folders.
² The last ETL run date needs to be passed in as a report parameter.
General Recommendation
There are two questions that almost any data mapping developer will need to answer.
This is also true when it comes to which approach should be recommended for extracting ERP Cloud data from OTBI. Based on the answers to these two questions and the features each approach presents, the approaches would generally be recommended in the following order:
Best Practice in Reality
Ideally, the Logical SQLs approach should be sufficient and all that is needed for extracting ERP Cloud data from OTBI. But reality always tells a different story.
In reality, the best practice would be using the Physical SQLs approach. The reasons are simple.
The Physical SQLs approach is safe to use and future proof.
While Adobe Summit has a wide range of focuses, data seems to be a pretty big emphasis, especially with the data co-op and key predictive capabilities. Hence another analytics session. This particular one focused on best practices and getting past the basics.
He started out with a couple of polls; most of the attendees basically love the intellectual stimulation, so obviously there were a bunch of data geeks involved.
What’s the end goal?
In ascending order
Some of the questions to ask: what, who, why and then so what.
Quote: What got you here won’t get you where you want to go
Quote: Simplicity is the ultimate sophistication (Leonardo da Vinci). You have to keep your data explanations as simple as possible.
Key best practice: Communicate visually. A picture is worth a thousand words. Use them to better relay key concepts. He showed several examples including some bar charts of pencils. But the key is to tell a story with the data and visuals.
The same goes for infographics. You can use various stock photography libraries to make your own infographics.
Best practice: Tell me what to think. Don’t just throw data at it. Add a little color to highlight things people should see.
Best Practice: Align to your corporate objectives. Then measure yourself in those terms.
Best Practice: Use composite metrics. A composite metric is a combination or index of several measures: the Nasdaq Composite, the Dow Jones index, quarterback ratings, etc.
Pop Sugar is a media and technology company that caters to 18-35 year old women. Popsugar.com and shopstyle.com. Mobile is hitting them in a big way. Over 70% of visits come from a mobile device. It’s really hard to create ads for mobile. That means they are putting a lot of investment in content marketing (easier on mobile) vs display advertising (banner).
Problem: Marketers are not content creators.
Pop Sugar is focused on using data to identify places to focus on content. To that end, they’ve tried very hard to define key content metrics like social sharing. They created a composite metric, the POPSUGAR engagement score: a composite of visits, time spent, and social score. They then plot the composite score over time. You can then use that data to define what content will have the most relevance to a brand or company.
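To illustrate the idea (the exact POPSUGAR formula and weights weren’t shared in the session, so everything below is made up), a composite score is just a weighted blend of normalized inputs, which you can then trend over time per piece of content:

// Hypothetical engagement score: the weights and the 0-100 "index" inputs are assumptions.
function engagementScore(page) {
  var weights = { visits: 0.4, timeSpent: 0.3, social: 0.3 };

  // Each input should already be normalized (e.g., to a 0-100 index) so that
  // no single metric dominates just because of its raw scale.
  return weights.visits * page.visitsIndex +
         weights.timeSpent * page.timeSpentIndex +
         weights.social * page.socialIndex;
}

console.log(engagementScore({ visitsIndex: 80, timeSpentIndex: 65, socialIndex: 90 })); // 78.5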
End result: Data helps you focus on what content will have the most impact.
Note: this is less about your technical skills and more about just getting the right answers and acting on them.
Don’t forget the other parts of this series:
In some ways, this topic is related to culture and highlights that so much of digital transformation does not deal directly with technology. Our reality is that change means dealing with people. People organize in a variety of ways. Those organizations have very specific explicit (bonuses…) and implicit (culture) inducements to action. Just starting down the path and announcing, “Today we start our digital transformation. We are going to create the best possible customer experience,” will do nothing if your organization remains the same.
There are a variety of things you should be doing in order to organize correctly. Here are several thoughts:
Thanks Eric Roch for pointing me to a great article in CIO Magazine about Scaling Digital that highlights this concept perfectly.
First the problem that Dave Smoley, CIO of AstraZeneca, faced:
“The reality is, we’ve got pockets of digital activity all over the place,” says Smoley, who has been CIO of the $26-billion pharmaceutical company since 2013. “Our commercial business is focused on social and content creation. Global medicine development is working with sensors and smart devices. Oncology is looking at digital injection technologies, and we have multiple groups using digital to improve the patient experience.”
Smoley loves to see all of this focus on digital, but as of yet, he sees only individual strategies. “Everyone is chasing the same problem, but we are not talking to each other.”
I love his two fold solution:
First it was about opening their eyes:
“I took the CEO and executive staff, and we spent a week in San Francisco,” says Smoley. “My CTO and I hosted the trip. We met with a bunch of really interesting cloud companies, some with products and services specifically for the life sciences.”
After meeting with some big players, Smoley and his CTO curated a half day of meetings with startup companies. “We did speed dating with a bunch of healthcare related technology companies, and our executives were completely blown away.”
Then it was about starting to organize for success:
“The Digital Center of Excellence spans the whole digital strategy piece, including social, apps, websites, devices, sensors and data analytics,” Smoley says. While the center is a business construct that stands next to IT, Smoley’s CTO is an official member of the group. “I want to make sure we’re having one conversation around what technology can and can’t do, not two. We want to avoid the scenario where there’s the digital conversation, and then there’s the IT conversation.”
You should read the whole article though.
Let’s face it: many organizations incent almost exactly the opposite of digital agility. You cannot get results without thinking through your organization. I may be a little influenced by David Chapman, who heads our Organization Change Management practice, but it makes a lot of sense and gets past a lot of the roadblocks we’ve faced in the past.
Sit down and do a review of your organization with someone who focuses on change management. That way they can work through:
From that you will have a series of recommendations you’ll put into your transformation roadmap.
Nigel Fenwick at Forrester, among others, has recommended that you create a team whose focus is agility. While there is some potential for the creation of yet another silo, this at least gets IT and the business on the same team pushing in the same direction. Like AstraZeneca above, that Digital Center of Excellence, or whatever you wish to call it, will work on laying down the foundation, tools, and even solutions that solve your needs.
Another example is Eric Roch’s recent post on Integration Strategy in a digital World. He pushes agility and highlights that the heavy integration organization can get in the way of that.
So create a team, tie business and IT at the hip, focus on moving forward quickly. Then tie that in with the other two items above…..
When a merger, acquisition, consolidation or spin-off takes place, there are often separate #Salesforce Orgs (instances) that reside in different areas of a company. When there are separate CRM systems, it is impossible to roll up performance results to one management dashboard, so analytics will not be powerful/reflective of total business results. Also, you may be missing opportunities to cross-sell and up-sell by maintaining separate Orgs. And, without a combined system, you do not have a single point for integration and data cleansing. So, there are significant advantages to merging Orgs, many of which are a direct result of housing data in a single location. This ensures that users will have greater visibility and accessibility to unified information on sales, service, reporting, analysis and more.
For example, Perficient recently helped a client integrate three instances of Salesforce after they acquired businesses that were using the tool. All three businesses were using Salesforce in very different ways, which resulted in vastly different configurations. Because of this, naming conventions had to be standardized, account and contact data de-duped and business processes merged.
Perficient was able to work with them to harmonize and re-engineer their accounts, contacts, opportunities, activities management and train several hundred users. The merged business processes now support a single, unified business development team and the company is looking to use our process as the “gold standard” for future Org consolidations.
If your company is having problems consolidating Salesforce Orgs now, or if you will be consolidating systems in the near future, please contact us for expert help at sales@perficient.com.
by Dan Kaho
Whether transitioning a site from one platform to another or starting a new project on a new platform, one commonly overlooked project phase is the creation of wireframes. For pre-existing sites (i.e., sites that are already live), wireframes are not usually a part of the project plan. In this scenario, wireframes are even more important, because proper page analysis has to be performed in order to minimize the number of surprises encountered during a site transition project.
Wireframes highlight the main features, content areas and, most importantly, outline the “flow” of a user interface. In the website development industry, a common example of a main feature would be a menu on the home page. In this example, if the customer has asked that the main menu, when clicked, should drop down to show subcategories, this particular functionality would be called out in the wireframes. Here are a few points that everyone involved in a website development project needs to consider when discussing the importance of wireframes:
For sites that are being transitioned from one platform to another, one of the biggest misconceptions is that the development team can simply duplicate the old site as it currently stands. However, it’s not as simple as copying the HTML page content from one server to another. Proper current-state analysis has to be performed to capture current site functionality. Always remember, no matter if it’s a static content page, product detail page, or a category page, each page on a website has a very specific job to do. As the project delivery team, we want to make sure we capture all features and functionality up front so we don’t lose sight of the value each page offers.
How many times have you been on a project and heard someone on the team say, “I see what you’ve done, but I was expecting it to do <fill in the blank>.” It doesn’t matter if you are a project manager for an eCommerce services company or a project manager developing an internal software project, chances are, you WILL hear that statement at least once in every project. Wireframes help uncover these types of questions before work has started and help to minimize the amount of necessary rework.
If there is an existing style guide or site skin in place, some customers will try to save project costs by moving forward with wireframing but skip the design phase of a project. When they receive the wireframes, they tend to get caught up in the details of the page design rather than reviewing the wireframes themselves. Keep in mind, wireframes serve a simple purpose: to show the overall structure (skeleton) of a page while outlining the high-level features and functionality of the page.
The level of detail provided by a set of wireframes helps everyone in the project team work efficiently.
Without wireframes to reference as questions arise, many customers find that look, feel and flow questions tend to come up during the final delivery of the site. It is in everyone’s best interest to spend a little time up front to make sure the user interface is clearly defined and understood.
Chris Crummey (@ccrummey) is probably the most adept presenter on the whole experience of using the IBM tools. He gets what makes people successful and incorporates that into how he works. So it’s a good session.
Here’s what makes you successful:
Types of people:
Quote: this is not about company size. Social is not a product. It’s not a feature. It’s an organic living thing.
How does a YouTube video grow? It goes from one network of people to another as they share it. It relies on the influencers.
Collective intelligence and the wisdom of your collective expertise can and should be harnessed
One of IBM’s nine key principles is “Shared Expertise.”
IBM’s new way of working is an initiative. It’s about going through multiple phases, starting with enhanced profiles at the beginning. Now social is being pushed into their CRM systems and external events are being pulled into the social network. Social is a service, not a product or feature.
Results: saved $110M on help desk calls via social support. Keep in mind that the 800 number only supports Windows 7 / Blackberry. They had to rely on the social support.
Results: 90,000 communities. CEO Think Friday
Now: Continuous cultural change
Bosch:
RENO (German store)
Sandy Carter says, “Culture eats strategy for breakfast”
Social transparency is about trust. One company went into the social platform by opening up their offices and even going to the honor system in the cafeteria.
TD Bank
Corporate culture is about how they treat their customers. Their social platform started with Wow moments. “People are fighting over the stories of how they treated the customers” The social platform helps to support the goal of customer change.
Celebrate ideas: good ideas and bad ideas. One customer celebrated the worst idea with a “Golden Cow” award.
IBM uses the platform for cultural change too. They created branded emoticons for Sametime. The HR program called BlueThx is a person in the directory. They microblog for BlueThx; it hits the activity stream, and a BlueThx thank-you hits the social network.
Almost all of IBM’s business processes are run on the social network. It could be mergers, sales, support, events, marketing, etc. They are all supported with the social tools. Obviously there are other systems, but the support is there.
It changed the way they create ad campaigns.
They even created a crowd sourcing kickstart strategy.
Back to key initiatives having profiles. You can follow them. They help to further the goals and adoption.
The activity stream has business applications integrated right into it.
IBM has an app store. If you hit the like button on an app, you put that event into your social network. You can even microblog on it. It’s an ecosystem made simple.
Best Practice: think of social as a service
Email has to have the social capabilities
Mobile should be enabled. Cameras in phones to upload. Profiles need to be available
Non business usage:
Look at one example of sharing, saving, and networking.
All that sharing saves money in disk space, but now the file also has one version of the truth. Sharing that file within a social network moved it from one side of the corporation to the other far faster than email would have.
IBM uses a video blog for their CEO. Her second day on the job she had 205,000 visits, 751 comments, and 175 likes. Half of IBM saw that video within 48 hours. It wasn’t long before all the other execs started to do video blogs.
IBM’s Think Academy is a new initiative… supported by a community.
The use of this “feeds the machine.” Ask a question, get an answer from one person, then another, then another. They all learn. They all get deeper into the social networks of others.
Another use case: using the platform to celebrate everyday heroes. The CEO goes to someone’s wall to congratulate them on a job well done. Everyone sees that. Compare that to email.
Driving unique business events: they created a “stand and deliver” badge that people would give as a gift. That pushed out a whole bunch of other badges.