Sending and Receiving an event through EventBridge with Multiple Targets https://blogs.perficient.com/2021/08/10/sending-and-receiving-an-event-through-eventbridge-with-multiple-targets/ (Tue, 10 Aug 2021)

In this post, we will show how to send an event to EventBridge using a custom rule and receive it by adding targets. Here I have added two targets to form a simple pub/sub implementation: Amazon SNS as our publishing service, Amazon SQS as a subscriber, and AWS CloudWatch to monitor the logs of successful events.

What is EventBridge?

Amazon EventBridge is a serverless event bus that makes it easier to connect applications with data from a variety of sources. These event sources can be custom applications, AWS services and partner SaaS applications.

It provides flexible, rule-based filtering of the events published to the event bus and then, based on the matching rules, routes the eligible events to the configured target applications or services.

Step 1: Create Event:

Create a new event bus in the EventBridge console and name it test-event-bus.

Step 2: Create Custom Rule:

On the EventBridge homepage, select Rules.

  • From the Event bus dropdown, select the test-event-bus.
  • Click Create rule and name it sample-eventBridge-rule.

Step 3: Under Define pattern

  • Choose Event pattern
  • Under Event matching pattern, select Custom pattern and add your custom pattern.

[Screenshot: the Define pattern section with the custom event pattern]

Basically, the rule checks our event data: the source, detail-type, and detail are the three parameters in the event pattern, and these stay constant. Only if an incoming event matches them does the rule pass.

Here I have filtered the event based on “Jack and Jim” in the detail (our event message) parameters. If I send an input containing Jack or Jim, the message passes. If we send any value that does not match our custom pattern, the event is still created, but we are not able to see that failed event in SNS, SQS or the CloudWatch log.
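As a rough illustration (the source, detail-type, and detail field names below are assumptions, not the exact values from my screenshot), a custom pattern of this shape would match only events whose detail message is Jack or Jim:

{
  "source": ["com.sample.events"],
  "detail-type": ["sample-event"],
  "detail": {
    "message": ["Jack", "Jim"]
  }
}

In an EventBridge pattern an array of values acts as an OR, so either name matches; any other value fails the match and the event is simply not routed to the targets.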

Step 4: Create Target:

For Select targets, choose the AWS service that you want to invoke when EventBridge detects an event matching the rule.

A single rule supports up to five targets. Here I have used two targets (see the sketch after this list):

  • Choose target as CloudWatch log group and create a log group as sample-eventBridge-log.
  • Choose target as SNS topic and select the SNS topic name as test-eventBridge-topic.
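For reference, the same bus, rule, and targets can also be created programmatically. The sketch below uses the AWS SDK for Java v2 (the same SDK the sample application uses later); the account id, ARNs, and the abbreviated pattern are placeholders rather than values taken from the post:

import software.amazon.awssdk.services.eventbridge.EventBridgeClient;
import software.amazon.awssdk.services.eventbridge.model.CreateEventBusRequest;
import software.amazon.awssdk.services.eventbridge.model.PutRuleRequest;
import software.amazon.awssdk.services.eventbridge.model.PutTargetsRequest;
import software.amazon.awssdk.services.eventbridge.model.Target;

public class RuleSetup {
    public static void main(String[] args) {
        EventBridgeClient eb = EventBridgeClient.create();

        // Step 1: the custom event bus
        eb.createEventBus(CreateEventBusRequest.builder().name("test-event-bus").build());

        // Steps 2-3: the rule with a custom event pattern (abbreviated here)
        eb.putRule(PutRuleRequest.builder()
                .name("sample-eventBridge-rule")
                .eventBusName("test-event-bus")
                .eventPattern("{\"source\":[\"com.sample.events\"]}")
                .build());

        // Step 4: two targets - a CloudWatch Logs group and an SNS topic (placeholder ARNs)
        eb.putTargets(PutTargetsRequest.builder()
                .rule("sample-eventBridge-rule")
                .eventBusName("test-event-bus")
                .targets(
                        Target.builder().id("cloudwatch-target")
                                .arn("arn:aws:logs:us-east-1:111122223333:log-group:sample-eventBridge-log").build(),
                        Target.builder().id("sns-target")
                                .arn("arn:aws:sns:us-east-1:111122223333:test-eventBridge-topic").build())
                .build());

        eb.close();
    }
}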

Target for CloudWatch:

[Screenshot: the CloudWatch log group target]

Target for SNS:

[Screenshot: the SNS topic target]

In the SNS target I have transformed the input message with an input transformer. As shown below, the first part is the Input path, where the required data values are extracted from the event payload. The second part is the Input template, where the outgoing message is built from the previously extracted values; a minimal example follows the screenshot. One thing to note here is that the outgoing message doesn’t need to be JSON!

[Screenshot: the input transformer configuration]
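A minimal input transformer example, assuming the event detail carries a field named message (the field name is an assumption for illustration):

Input path (extracts values from the event payload into named variables):

{ "msg": "$.detail.message" }

Input template (builds the outgoing message from those variables; plain text is fine, it does not have to be JSON):

"Event received with message <msg>"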

Step 5: Send events:

I have created a simple EventBridge application using spring boot with gradle project.

[Screenshot: project structure]

Dependencies:

implementation group: 'software.amazon.awssdk', name: 'eventbridge', version: '2.16.101'

In EventBridgeController we have two API calls: one sends an event to EventBridge, and the second retrieves the rules from the event bus. Finally, I added an SQS listener, which consumes the message produced when an event matches the rule and is delivered through the SNS target; I have subscribed this queue to the SNS topic. When an event successfully passes the rule, the pub/sub messaging occurs.
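The post implements the consumer as an SQS listener inside the Spring Boot application; the snippet below is only a rough, framework-free sketch of the same idea using the AWS SDK for Java v2, and the queue URL is a placeholder:

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

public class EventQueueConsumer {
    public static void main(String[] args) {
        SqsClient sqs = SqsClient.create();
        // Placeholder queue URL - this queue is subscribed to the SNS topic used as the rule's target
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/111122223333/test-eventBridge-queue";

        while (true) {  // simple long-polling loop; a real listener would run inside the application
            ReceiveMessageRequest receive = ReceiveMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .waitTimeSeconds(20)        // long polling
                    .maxNumberOfMessages(10)
                    .build();
            for (Message m : sqs.receiveMessage(receive).messages()) {
                // The body is the SNS notification published when an event matched the rule
                System.out.println("Consumed: " + m.body());
                sqs.deleteMessage(DeleteMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .receiptHandle(m.receiptHandle())
                        .build());
            }
        }
    }
}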

[Screenshot: EventBridgeController]

In the service class, the PutEventsRequest action sends multiple events to EventBridge in a single request.
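A simplified sketch of such a service method is shown below (this is not the post’s exact code; the source, detail-type and detail values are illustrative and must line up with the rule’s event pattern for the targets to fire):

import software.amazon.awssdk.services.eventbridge.EventBridgeClient;
import software.amazon.awssdk.services.eventbridge.model.PutEventsRequest;
import software.amazon.awssdk.services.eventbridge.model.PutEventsRequestEntry;
import software.amazon.awssdk.services.eventbridge.model.PutEventsResponse;

public class EventBridgeService {

    private final EventBridgeClient eventBridge = EventBridgeClient.create();

    public PutEventsResponse send(String message) {
        // One entry per event; detail is a JSON string (use a JSON library in real code)
        PutEventsRequestEntry entry = PutEventsRequestEntry.builder()
                .eventBusName("test-event-bus")
                .source("com.sample.events")
                .detailType("sample-event")
                .detail("{\"message\":\"" + message + "\"}")
                .build();

        // entries(...) accepts several entries, so multiple events can go out in one request
        return eventBridge.putEvents(PutEventsRequest.builder().entries(entry).build());
    }
}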

[Screenshot: the service class]

Step 6: Testing the event:

From local:

[Screenshot: sending the event from the local application (Postman)]

From AWS console:

[Screenshot: sending the event from the AWS console]

The screenshots above show the triggered event output from both the AWS console and Postman. The SQS queue consumes only the successful message, as highlighted. If the event did not satisfy the rule, it is still created with an event id but is not consumed from SQS, as the second log entry in the screenshot below shows.

[Screenshot: local application logs]

CloudWatch monitor the success log:

[Screenshot: the CloudWatch success log]

Conclusion:

This is a brief write-up on EventBridge, focusing mainly on the event routing rule configuration. Used wisely, it can certainly bring more versatility to the entire event ingestion and delivery mechanism we get from Amazon EventBridge. Please look into my GitHub repo for the full implementation of the applications, and feel free to contribute to the repo.

I hope you found the details I shared here useful. Until then, happy eventing!

 

Synergizing Agile Projects https://blogs.perficient.com/2018/09/10/synergizing-agile-projects/ (Mon, 10 Sep 2018)

The Synergizing Idea

This hypothesis started when I was part of the potluck-hosting team in my office. Our lead had set targets on taste, quantity, quality, and timelines. Then he said, “We are presenting this food to our clients; let us make it a unique memorable experience for them. Let us ensure that it does not upset their tummies at any cost and add a little bit of that ‘secret ingredient,’ plenty of ‘love’ to make it taste ‘wow’.”

I explored synergizing my dish and extended the same idea to Agile projects.

What is Synergy and What is this “Wow”?

The Cambridge dictionary describes Synergy as:

the combined power of a group of things when they are working together that is greater than the total power achieved by each working separately

Stephen Covey simplifies it as: “Synergize is the habit of creative cooperation.”

This synergy component’s output cannot be measured accurately. Sometimes it is not even evident immediately. It’s what we call the “feel good” factor, the “wow” component. As you consistently accumulate the synergy component output, you might end up with a measurable value.


My Root Cause Analysis on Synergy

When I was pondering over synergizing opportunities, I stumbled on this thought:

Why does the food some of you cooked for a special friend or guest taste better than your routine food?

You valued the special guest and wanted to make the food taste better than usual, better than your routine target.

You looked for Values over Targets!

Values over Targets in Agile Projects

When we understand the values over targets in all project activities, we identify the secret synergizers that could lead to those values:

Synergizer → Value
Effective user stories → Clear Requirements
Appropriate prioritizing and story picking → Delivering highest business value in shortest time
Effective Test-driven development and continuous testing, Paired Programming → Healthy development and testing partnership, Proactive issue management
Prompt Scrum ceremonies → Team Focus, Team commitment, Team collaboration
Putting yourself in Customer’s shoes → Customer Excellence

Even the smallest values tend to cumulatively result in big value adds to deliverables.

Value 1 + Value 2 + … + Value n = Value N + Target N

but

Target 1 + Target 2 + … + Target n ≠ Value N + Target N

Conclusion:

Our potluck was sure a memorable event because we also wanted it to be a unique memorable experience. Every host had added that ‘secret ingredient’ and had synergized their dish, realizing the values over targets and in turn achieving the ‘wow’.

Client-Side Testing with the Experience Cloud Debugger https://blogs.perficient.com/2018/08/21/client-side-testing-with-the-experience-cloud-debugger/ (Tue, 21 Aug 2018)

Consider the following scenarios:

  • You’ve just launched your brand new Adobe Target optimization campaign but you aren’t sure if you are seeing any campaign content.
  • Your boss has informed you that step 3 or 4 in the Adobe Analytics daily sales funnel report has gone missing as of yesterday.
  • The Sales team has just informed you that “High Value Prospects” (based on the Audience Manager segment) aren’t being presented with the Summer BOGO promotional offer on the homepage.

Not the best news one could hear right after the morning coffee – but not uncommon and certainly not an impossible challenge to overcome. While there are obviously a number of possible causes for the scenarios above, the goal is to identify the root cause(s) as efficiently as possible. Although you could log into each of the systems to validate campaign settings, launch information, etc. – wouldn’t it be better if there was a single tool you could use to view exactly what the Experience Cloud libraries are doing on a given page? Well, that tool exists – it’s called the Adobe Experience Cloud Debugger!
 

What is the Adobe Experience Cloud Debugger?

The Adobe Experience Cloud Debugger (previously the DigitalPulse Debugger) is a free tool provided by Adobe that lets you view the data being collected on your site by most of Adobe’s Experience Cloud products. This means that **almost any data the Experience Cloud SaaS tools collect or set on a site visitor’s machine via the browser can be viewed in a normalized, easy to read dashboard. No more invoking your browser’s developer tools/Firebug panels searching for arcane request/response pairs for each cloud product – with the click of a single button all of the information needed to start the debug process is at your fingertips. A significant timesaver!!!
The Adobe Experience Cloud Debugger currently supports the following SaaS tools:

  • Adobe Advertising Cloud (formerly Media Optimizer)
  • Adobe Analytics
  • Adobe Audience Manager
  • Adobe Target
  • Experience Cloud ID Service (formerly VisitorID service)
  • Dynamic Tag Management (DTM)
  • Launch, by Adobe.

 

Getting Started with the Experience Cloud Debugger

To use the Experience Cloud Debugger, you’ll first need to install the extension from the Chrome Web Store. In addition, you can read about use cases, version history and more by visiting the dedicated page on Adobe’s site – https://marketing.adobe.com/resources/help/en_US/experience-cloud-debugger/. Once installed, navigate to a page running any of the supported Experience Cloud products and execute the extension from the Chrome menu.
Adobe Experience Cloud Debugger
As you can see, the window lists dedicated tabs for each of the supported Experience Cloud tools in the top row and defaults to the Summary tab which contains the most important information captured for each service. In the above example, you can easily see there are 3 Analytics calls, 5 Target mbox calls and one Cloud ID service call.
So far, so good – three of the four services in our original scenarios are at least running on the page; we can turn our attention towards campaign troubleshooting and away from implementation troubleshooting for these three. But what about that “0” Audience Manager server-call item – could there be an issue there? It could be a problem if the client-side DIL is being used, or completely normal if Analytics Server-Side Forwarding is enabled. Depending on the implementation, you would either proceed to validate the Audience Manager implementation or look at the Target campaign or AAM segment qualification(s).
The good news for those Target administrators who were used to the legacy DigitalPulse Debugger is that a good bit of the old functionality has been ported over, including the ability to disable Target, highlight all available mboxes (except the auto-created global mbox), enable console logging, and run the familiar mboxTrace. In addition, the Experience Cloud Debugger includes a few new features not available in the legacy debugger, including:

  • View network requests generated by each of the services.
  • DTM/Launch-specific tools, such as disabling the libraries or inserting them into a page even if they don’t already exist, etc.
  • Logs of every action seen by the debugger and the code that initiated the log entry
  • Run an Audit of up to 100 pages of the site
  • Much more…

Lastly, back to our Summer BOGO scenario above; using the debugger we can see that I qualified for the campaign but am part of the Control group, and therefore was presented with default content, i.e., the normal website. Crisis likely averted!!
Experience Cloud Debugger - Target section

A Few Drawbacks

One of the beauties of the legacy DigitalPulse Debugger was that it was completely JavaScript based and therefore browser agnostic. That meant there were no extensions/add-ins to install or update – simply add a bookmark in your favorite browser (yes, even IE9) and it just worked. The new Experience Cloud Debugger is a formal Chrome extension which needs to be installed from the Chrome Web Store in order to use. While it’s true Chrome holds a commanding lead in terms of browser market share (certainly amongst developers), that still leaves a sizeable user base stuck using old technology (DigitalPulse) or time-consuming methods (browser tools). It’s also not uncommon for a problem to only occur in certain browsers – usually some version of Internet Explorer. Hopefully the Chrome extension can be ported to Firefox, Safari and Edge in the future.
In addition, compared to the old DigitalPulse debugger the new Experience Cloud version takes up a ton of screen real-estate. Not necessarily a big deal if you have a dual monitor setup but not a very good experience if you are on your laptop.
Finally, as of this writing the debugger is still technically in Beta so there is potential for data/usage issues. That said, it has already gone through 5 public releases and looks pretty solid as is.
 

In Conclusion

So long as you are using Chrome and have a large enough monitor to view the dialog box, the Adobe Experience Cloud Debugger is a worthy upgrade to the legacy DigitalPulse Debugger. The easy-to-use summary panel, new Tools section and support for DTM/Launch functionality make it not only a worthy addition to your testing/debugging toolset but also a useful tool to provide to business stakeholders for their own UAT activities.
Enjoy!
 
** For security reasons, Adobe Target userProfile data is not displayed within the debugger; you must generate a token and leverage your browser’s developer tools to view this data. mboxProfile data is displayed.
 

4 Ways to Succeed in Retail in the “Age of Amazon” https://blogs.perficient.com/2017/11/28/4-ways-to-succeed-in-retail-in-the-age-of-amazon/ (Tue, 28 Nov 2017)

Austin Carr just published “The Future of Retail in the Age of Amazon,” in Fast Company on the 24th of November. In it, he outlines four key takeaways for how successful retailers are innovating and evolving to remain competitive with their brick-and-mortar experiences.

“Retail is under huge pressure, but the death of stores is greatly exaggerated,” says [NYU Stern professor of marketing Scott Galloway], who believes that while Amazon will continue to disrupt the market, an increasing number of competitors will discover new ways to respond. “In the age of Amazon, retailers must leverage assets that [Bezos] doesn’t have: When Amazon zigs, retailers must zag.”

Here are four suggestions for remaining relevant in the “Age of Amazon”:

  1. Feature products that customers can’t get elsewhere.
    Target is a shining example of this: “craft a collection of mass-market housewares, partnering with high-end fashion designers like Isaac Mizrahi for custom fashion lines, and nurturing emerging brands such as Method through forward-thinking curation.” The modern-day version includes brands like Cat & Jack, “a boutiquey children’s decor line called Pillowfort, a modern furniture collection called Project 62, an athleisure apparel line for the post-yoga brunch crowd called JoyLab, and a dapper menswear brand called Goodfellow & Co.”
  2. Focus on delivering a satisfying experience.
    “Big retailers and digital-native consumer brands alike cite Warby Parker as an inspiration and seek to mimic, even reverse engineer, what they believe is the core of its hip but inviting store experience. But refashioning stores with a certain wood finish or outfitting employees in a distinctive smock doesn’t make you Warby Parker any more than painting your store white makes you Apple.” 
  3. Challenge the fundamental assumptions of commerce.
    While “in-store augmented reality, drone delivery, or bitcoin payments” may be “gimmicky distractions,” try to think differently about “how physical space can be monetized.” Think about how you can offer new experiences in-store such as stylists, fittings, and product trials and demos.
  4. Resurrect the Art of Selling.
    “Whereas the traditional rules of brick-and-mortar dictate that sales matter above all else, store associates at MartinPatrick3 are encouraged to dole out sincere fashion advice, and if it means counseling a guest away from a higher-priced item or directing him to competitors’ shops, so be it. The payback comes in the lasting relationships such honesty builds.”

“Retailers don’t need to chase a futuristic version of themselves that they might never attain; they first need to remember what made them special in the first place.”

Read the full article here.

IRCE 2015: Springtime in Chicago https://blogs.perficient.com/2015/06/05/irce-2015-spring-time-in-chicago/ (Fri, 05 Jun 2015)

The weather was beautiful in Chicago this week for the Internet Retailer Conference + Expo (IRCE) 2015: Sunny skies, moderate temperatures, and no humidity. But if you know the weather in the Midwest, then you know that you only need to wait a day – or sometimes even an hour – and the weather can change dramatically. Retailers have been weathering changes for decades: Changes in consumer needs, changes in business models, and tremendous changes in technology. Change was a common theme at IRCE this week, but so was focus, and both were well covered in a couple of the keynotes.
“We need to embrace change. It’s inevitable. It’s not a choice and it’s rapid. It’s about survival,” said Christopher McCann, president of 1-800-Flowers.com in his IRCE 2015 keynote.  McCann’s company grew from a handful of flower shops in New York City through waves of catalog, phone, eCommerce, and mobile channels, ultimately expanding into multiple brands, all with a common purpose to “deliver smiles for our customers.” Who can say “no” to a smile?
1-800-Flowers hasn’t stopped to smell the roses (sorry, I couldn’t resist.) In 1991, they launched their first online store on dial-up pioneer CompuServe, then onto AOL, and were early adopters with Netscape in 1995. I believe that their success lies not just with this ability to change and evolve, but to do so in concert with a singular focus on their customers.  “We all crave a sense of belonging, in essence, we need a network of relationships. At 1-800-Flowers, we are in the business to connect with the people in their lives.” This dual purpose of balancing relentless change with customer-centricity is at the heart of successful transformations like 1-800-Flowers.
In his keynote on Wednesday, Target’s Jason Goldberger, president, Target.com and Mobile, explained how Target’s multi-channel success has rested on being “guest-obsessed, not channel-obsessed.” Since bringing Target.com in house just four years ago, Target’s digital transformation has produced two of the most downloaded retail apps (their flagship Target app and Cartwheel, Target’s digital coupon app) and continues to leverage their physical stores as the force multiplier for digital. “Customers who shop in store and in digital channels do so three times more often,” Goldberger said, “producing three times as much in sales as those who are store-based only. They represent more trips and more sales as they establish a deeper relationship with the brand.”
As a consulting partner to Target, I can say first-hand that the obsession on both their guests and on relentless change is also at the heart of their digital transformation. Target has also weathered a few major public disruptions, but each time has bounced back stronger as they have kept their guest commitments and simply continued to adapt. They are used to it. After all, Minneapolis is also a Midwestern town.

An Architectural Approach to Cognos TM1 Design https://blogs.perficient.com/2014/08/28/an-architectural-approach-to-cognos-tm1-design/ (Thu, 28 Aug 2014)

Over time, I’ve written about keeping your TM1 model design “architecturally pure”. What this means is that you should strive to keep a model’s “areas of functionality” distinct within your design.

Common Components

I believe that all TM1 applications, for example, are made of only 4 distinct “areas of functionality”. They are absorption (of key information from external data sources), configuration (of assumptions about the absorbed data), calculation (where the specific “magic” happens; i.e. business logic is applied to the source data using the set assumptions) and consumption (of the information processed by the application, which is ready to be reported on).

Some Advantages

Keeping functional areas distinct has many advantages:

  • Reduces complexity and increases sustainability within components
  • Reduces the possibility of one component negatively affecting another
  • Increases the probability of reuse of the particular (distinct) components
  • Promotes a technology independent design; meaning components can be built using the technology that best fits their particular objective
  • Allows components to be designed, developed and supported by independent groups
  • Diminishes duplication of code, logic, data, etc.
  • Etc.

Resist the Urge

There is always a tendency to “jump in” and “do it all” using a single tool or technology or, in the case of Cognos TM1, a few enormous cubes. Today, with every release of software, there are new “package connectors” that allow you to directly connect (even external) system components. In addition, you may “understand the mechanics” of how a certain technology works, which will allow you to “build” something, but without comprehensive knowledge of architectural concepts you may end up with something that does not scale, has unacceptable performance or is costly to sustain.

Final Thoughts

Some final thoughts:

  • Try whiteboarding the functional areas before writing any code
  • Once you have your “like areas” defined, search for already existing components that may meet your requirements
  • If you do decide to “build new”, try to find other potential users for the new functionality. Could you partner and co-produce (and thus share the costs) a component that you both can use?
  • Before building a new component, “try out” different technologies. Which best serves the needs of the component’s objectives? (A rule of thumb: if you can find more than 3 other technologies or tools that better fit your requirements than the technology you planned to use, you’re in trouble!)

And finally:

Always remember, just because you “can” doesn’t mean you “should”.

A Practice Vision https://blogs.perficient.com/2014/08/27/a-practice-vision/ (Wed, 27 Aug 2014)

Vision

Most organizations today have had successes implementing technology and they are happy to tell you about it. From a tactical perspective, they understand how to install, configure and use whatever software you are interested in. They are “practitioners”. But how many can bring a “strategic vision” to a project or to your organization in general?

An “enterprise” or “strategic” vision is based upon an “evolutionary roadmap” that starts with the initial “evaluation and implementation” (of a technology or tool), continues with “building and using” and finally (hopefully) arrives at the organization, optimization and management of all of the earned knowledge (with the tool or technology). You should expect that whoever you partner with can explain what their practice vision or methodology is or, at least, talk to the “phases” of the evolution process:

Evaluation and Implementation

The discovery and evaluation that takes place with any new tool or technology is the first phase of a practice’s evolution. A practice should be able to explain how testing is accomplished and what it covers. How did they determine whether the tool/technology to be used will meet or exceed your organization’s needs? Once a decision is made, are they practiced at the installation, configuration and everything else that may be involved in deploying the new tool or technology for use?

Build, Use, Repeat

Once deployed, and “building and using” components with that tool or technology begins, the efficiency with which these components are developed, as well as their level of quality, will depend upon the level of experience (with the technology) that a practice possesses. Typically, “building and using” is repeated with each successful “build”, so how many times has the practice successfully used this technology? By human nature, once a solution is “built” and seems correct and valuable, it will be saved and used again. Hopefully, this solution will have been shared as a “knowledge object” across the practice. Although most may actually reach this phase, it is not uncommon to find:

  • Objects with similar or duplicate functionality (they reinvented the wheel over and over).
  • Poor naming and filing of objects (no one but the creator knows it exists or perhaps what it does)
  • Objects not shared (objects visible only to specific groups or individuals, not the entire practice)
  • Objects that are obsolete or do not work properly or optimally are being used.
  • Etc.

Manage & Optimization

At some point, usually while (or after a certain number of) solutions have been developed, a practice will “mature its development or delivery process” to the point that it will begin investing time and perhaps dedicate resources to organize, manage and optimize its developed components (i.e. “organizational knowledge management”, sometimes known as IP or intellectual property).

You should expect a practice to have a recognized practice leader and a “governing committee” to help identify and manage knowledge developed by the practice and:

  • inventory and evaluate all known (and future) knowledge objects
  • establish appropriate naming standards and styles
  • establish appropriate development and delivery standards
  • create, implement and enforce a formal testing strategy
  • continually develop “the vision” for the practice (and perhaps the industry)

 

More

As I’ve mentioned, a practice needs to take a strategic or enterprise approach to how it develops and delivers and to do this it must develop its “vision”. A vision will ensure that the practice is leveraging its resources (and methodologies) to achieve the highest rate of success today and over time. This is not simply “administrating the environment” or “managing the projects” but involves structured thought, best practices and continued commitment to evolved improvement. What is your vision?

IBM OpenPages GRC Platform – modular methodology https://blogs.perficient.com/2014/08/14/ibm-openpages-grc-platform-modular-methodology/ (Thu, 14 Aug 2014)

The OpenPages GRC platform includes 5 main “operational modules”. These modules are each designed to address specific organizational needs around Governance, Risk, and Compliance.

Operational Risk Management module “ORM”

The Operational Risk Management module is a document and process management tool which includes a monitoring and decision support system enabling an organization to analyze, manage, and mitigate risk simply and efficiently. The module automates the process of identifying, measuring, and monitoring operational risk by combining all risk data (such as risk and control self-assessments, loss events, scenario analysis, external losses, and key risk indicators (KRIs)) into a single place.

Financial Controls Management module “FCM”

The Financial Controls Management module reduces time and resource costs associated with compliance for financial reporting regulations. This module combines document and process management with awesome interactive reporting capabilities in a flexible, adaptable easy-to-use environment, enabling users to easily perform all the necessary activities for complying with financial reporting regulations.

Policy and Compliance Management module “PCM”

The Policy and Compliance Management module is an enterprise-level compliance management solution that reduces the cost and complexity of compliance with multiple regulatory mandates and corporate policies. This model enables companies to manage and monitor compliance activities through a full set of integrated functionality:

  • Regulatory Libraries & Change Management
  • Risk & Control Assessments
  • Policy Management, including Policy Creation, Review & Approval and Policy Awareness
  • Control Testing & Issue Remediation
  • Regulator Interaction Management
  • Incident Tracking
  • Key Performance Indicators
  • Reporting, monitoring, and analytics

IBM OpenPages IT Governance module “ITG”

This module aligns IT services, risks, and policies with corporate business initiatives, strategies, and operational standards, allowing internal IT controls and risk to be managed according to the business processes they support. In addition, this module unites “silos” of IT risk and compliance, delivering visibility, better decision support, and ultimately enhanced performance.

IBM OpenPages Internal Audit Management module “IAM”

This module provides internal auditors with a view into an organization’s governance, risk, and compliance, affording the chance to supplement and coexist with broader risk and compliance management activities throughout the organization.

One Solution

The IBM OpenPages GRC Platform modules (“ORM”, “FCM”, “PCM”, “ITG” and “IAM”) interactively deliver a superior solution for Governance, Risk, and Compliance. More to come!

The Installation Process – IBM OpenPages GRC Platform https://blogs.perficient.com/2014/08/13/the-installation-process-ibm-openpages-grc-platform/ (Wed, 13 Aug 2014)

When preparing to deploy the OpenPages platform, you’ll need to follow these steps:

  1. Determine which server environment you will deploy to – Windows or AIX.
  2. Determine your topology – how many servers will you include as part of the environment? Multiple application servers? 1 or more reporting servers?
  3. Perform the installation of the OpenPages prerequisite software for the chosen environment – and for each server’s designated purpose (database, application or reporting).
  4. Perform the OpenPages installation, being conscious of the software that is installed as part of that process.

Topology

Depending upon your needs, you may find that you’ll want to use separate servers for your application, database and reporting tiers. In addition, you may want to add additional application or reporting servers to your topology.

 

 

[Diagram: example OpenPages deployment topology]
After the topology is determined, you can use the following information to prepare your environment. I recommend clean installs, meaning starting with fresh or new machines – and VMs are just fine (“The VMware performance on a virtualized system is comparable to native hardware. You can use the OpenPages hardware requirements for sizing VM environments” – IBM).

(Note – this is if you’ve chosen to go with Oracle rather than DB2):

MS Windows Servers

All servers that will be part of the OpenPages environment must have the following installed before proceeding:

  • Microsoft Windows Server 2008 R2 and later Service Packs (64-bit operating system)
  • Microsoft Internet Explorer 7.0 (or 8.0 in Compatibility View mode)
  • A file compression utility, such as WinZip
  • A PDF reader (such as Adobe Acrobat)

The Database Server

In addition to the above “all servers” software, your database server will require the following software:

  • Oracle 11gR2 (11.2.0.1) and any higher Patch Set – the minimum requirement is Oracle 11.2.0.1 October 2010 Critical Patch Update.

Application Server(s)

Again, in addition to the above “all servers” software, the server that hosts the OpenPages application modules should have the following software installed:

  • JDK 1.6 or greater, 64-bit (note: this is a prerequisite only if your OpenPages product does not include WebLogic Server)
  • Application Server Software (one of the following two options):

o   IBM WebSphere Application Server ND 7.0.0.13 and any higher Fix Pack (note: the minimum requirement is WebSphere 7.0.0.13)

o   Oracle WebLogic Server 10.3.2 and any higher Patch Set (note: the minimum requirement is Oracle WebLogic Server 10.3.2; this is a prerequisite only if your OpenPages product does not include Oracle WebLogic Server)

  • Oracle Database Client 11gR2 (11.2.0.1) and any higher Patch Set

Reporting Server(s)

The server on which you intend to host the OpenPages CommandCenter must have the following software installed (in addition to the above “all servers” software):

  • Microsoft Internet Information Services (IIS) 7.0 or Apache HTTP Server 2.2.14 or greater
  • Oracle Database Client 11g R2 (11.2.0.1) and any higher Patch Set

During the OpenPages Installation Process

As part of the OpenPages installation, the following is installed automatically:

 

For Oracle WebLogic Server & IBM WebSphere Application Server environments:

  • The OpenPages application
  • Fujitsu Interstage Business Process Manager (BPM) 10.1
  • IBM Cognos 10.2
  • OpenPages CommandCenter
  • JRE 1.6 or greater

If your OpenPages product includes the Oracle WebLogic Server:

  • Oracle WebLogic Server 10.3.2

If your OpenPages product includes the Oracle Database:

  • Oracle Database Server Oracle 11G Release 2 (11.2.0.1) Standard Edition with October 2010 CPU Patch (on a database server system)
  • Oracle Database Client 11g Release 2 (11.2.0.1) with October 2010 CPU Patch applied 64-bit (on an application server system)
  • Oracle Database Client 11g Release 2 (11.2.0.1) with October 2010 CPU Patch applied 32-bit (on a reporting server system)

 Thanks!

IBM OpenPages Start-up https://blogs.perficient.com/2014/08/12/ibm-openpages-start-up/ (Tue, 12 Aug 2014)

In the beginning…

OpenPages was a company “born” in Massachusetts, providing Governance, Risk, and Compliance software and services to customers. Founded in 1996, OpenPages had more than 200 customers worldwide including Barclays, Duke Energy, and TIAA-CREF. On October 21, 2010, OpenPages was officially acquired by IBM:

http://www-03.ibm.com/press/us/en/pressrelease/32808.wss

What is it?

OpenPages provides a technology driven way of understanding the full scope of risk an organization faces. In most cases, there is extreme fragmentation of a company’s risk information – like data collected and maintained in numerous disparate spreadsheets – making aggregation of the risks faced by a company extremely difficult and unmanageable.

Key Features

IBM’s OpenPages GRC Platform can help by providing many capabilities to simplify and centralize compliance and risk management activities. The key features include:

  • Provides a shared content repository that can (logically) present the processes, risks and controls in many-to-many and shared relationships.
  • Supports the import of corporate data and maintains an audit trail ensuring consistent regulatory enforcement and monitoring across multiple regulations.
  • Supports dynamic decision making with its CommandCenter interface, which provides interactive, real-time executive dashboards and reports with drill-down.
  • Is simple to configure and localize with detailed user-specific tasks and actions accessible from a personal browser based home page.
  • Provides for Automation of Workflow for management assessment, process design reviews, control testing, issue remediation and sign-offs and certifications.
  • Utilizes web services for integration. OpenPages uses the OpenAccess API to interoperate with leading third-party applications and enhance policies and procedures with actual business data.

Understanding the Topology

The OpenPages GRC Platform consists of the following 3 components:

  • 1 database server
  • 1 or more application servers
  • 1 or more reporting servers

Database Server

The database is the centralized repository for metadata, (versions of) application data, and access control. OpenPages requires a set of database users and a tablespace (referred to as the “OpenPages database schema”). These database components install automatically during the OpenPages application installation, configuring all of the required elements. You can use either Oracle or DB2 for your OpenPages GRC Platform repository.

 Application Server(s)

The application server is required to host the OpenPages applications. The application server runs the application modules, and includes the definition and administration of business metadata, UI views, user profiles, and user authorization.

 Reporting Server

The OpenPages CommandCenter is installed on the same computer as IBM Cognos BI and acts as the reporting server.

Next Steps

An excellent next step would be to visit the IBM site and review the available slides and whitepapers. After that, stay tuned to this blog!

Configuring Cognos TM1 Web with Cognos Security https://blogs.perficient.com/2014/08/07/configuring-cognos-tm1-web-with-cognos-security/ (Thu, 07 Aug 2014)

Recently I completed upgrading a client’s IBM Cognos environment – both TM1 and BI. It was a “jump” from Cognos 8 to version 10.2, and TM1 9.5 to version 10.2.2. In this environment, we had multiple virtual servers (Cognos lives on one, TM1 on one and the third is the gateway/webserver).

Once the software was all installed and configured (using IBM Cognos Configuration – and, yes, you still need to edit the TM1 configuration .cfg file), we started the services and everything appeared to be in good shape. I spun through the desktop applications (Perspectives, Architect, etc.) and then went to the web browser, first to test TM1Web:

http://stingryweb:9510/tm1web/

The familiar page loads:

[Screenshot: the familiar TM1 Web login page]
But when I enter my credentials, I get the following:

 

[Screenshot: the error returned after logging in]
Go to Google

Since an installation and configuration is not something you do every day, a quick Google search reports that there are evidently 2 files that the installation placed on the web server that actually belong on the Cognos BI server. These files need to be located, edited and then copied to the correct location for TM1Web to use IBM Cognos authentication security.

What files?

There are 2 files: an XML file (variables_TM1.xml.sample) and an HTML file (tm1web.html). These can be found on the server where you installed TM1Web – or can they? It turns out they are not found individually but are included in zip files:

Tm1web_app.zip (that is where you’ll find the xml file) and tm1web_gateway.zip (and that is where you will find tm1web.html):

[Screenshot: the tm1web_app.zip and tm1web_gateway.zip files in the bi_files folder]
I found mine in:

Program Files\ibm\cognos\tm1_64\webapps\tm1web\bi_files

Make them your own

Once you unzip the files, you need to rename the XML file (to drop the “.sample”) and place it onto the Cognos BI server in:

Program Files\ibm\cognos\c10_64\templates\ps\portal.

Next, edit the file (even though it’s an XML file, it’s small, so you can use Notepad). What you need to do is modify the URLs within the <urls> tags (the “localhost” string should be replaced with the name of the server running TM1Web). You’ll find three: one for TM1WebLogin.aspx, one for TM1WebLoginHandler.aspx and one for TM1WebMain.aspx.
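After the edit, the three entries inside the <urls> tags point at the TM1Web server rather than localhost, roughly like the snippet below (the exact wrapper elements and attributes in your variables_TM1.xml may differ slightly; only the host name needs to change):

<urls>
   <url>http://stingryweb:9510/tm1web/TM1WebLogin.aspx</url>
   <url>http://stingryweb:9510/tm1web/TM1WebLoginHandler.aspx</url>
   <url>http://stingryweb:9510/tm1web/TM1WebMain.aspx</url>
</urls>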

Now, copy your tm1web.html file to the following location (on the Cognos BI server):

Program Files\ibm\cognos\c10_64\webcontent\tm1\web and edit it (again, you can use notepad). One more thing, the folder “tm1” may need to be manually created.

The HTML file update is straightforward (you need to point to where Cognos TM1 Web is running) and there is only a single line in the file. You change:

var tm1webServices = ["http://localhost:8080"];

To:

var tm1webServices = ["http://stingryweb:9510"];

 

Now, after stopping and starting the server’s web services:

 

[Screenshot: TM1 Web after restarting the web services]
The above steps are simple; you just need to be aware of these extra, very manual steps….
Perficient takes Cognos TM1 to the Cloud https://blogs.perficient.com/2014/07/01/perficient-takes-cognos-tm1-to-the-cloud/ (Tue, 01 Jul 2014)

IBM Cognos TM1 is well known as planning, analysis, and forecasting software that delivers flexible solutions to address requirements across an enterprise, as well as providing real-time analytics, reporting, and what-if scenario modeling – and Perficient is well known for delivering expertly designed TM1-based solutions.

Analytic Projects

Perhaps phase zero of a typical analytics project would involve our topology experts determining the exact server environment required to support the implementation of a number of TM1 servers (based upon not only industry proven practices, but our own breadth of practical “in the field” experiences). Next would be the procurement and configuration of said environment (and prerequisite software) and finally the installation of Cognos TM1.

It doesn’t stop there

As TM1 development begins, our engineers work closely with internal staff to outline processes for the (application and performance) testing and deployment (of developed TM1 models) but also to establish a maintainable support structure for after the “go live” date. “Support” includes not only the administration of the developed TM1 application but the “road map” to assign responsibilities such as:

  • Hardware monitoring and administration
  • Software upgrades
  • Expansion or reconfiguration based upon additional requirements (i.e. data or user base changes or additional functionality or enhancements to deployed models)
  • And so on…

Teaming Up

Earlier this year the Perficient analytics team teamed up with the IBM Cloud team to offer an interesting alternative to the “typical”: Cognos TM1 as a service in the cloud.

Using our internal TM1 models and colleagues literally all over the country, we evaluated and tested the viability of a fully cloud based TM1 solution.

What we found was that it works, and works well, offering unique advantages to our customers:

  • Lowers the “cost of entry” (getting TM1 deployed)
  • Lowers the total cost of ownership (ongoing “care and feeding”)
  • Reduces the level of capital expenditures (doesn’t require the procurement of internal hardware)
  • Reduces IT involvement (and therefore expense)
  • Removes the need to plan for, manage and execute upgrades when newer releases are available (new features are available sooner)
  • (Licensed) users anywhere in world have access form day 1 (regardless of internal constraints)
  • Provides for the availability of auxiliary environments for development and testing (without additional procurement and support)

In the field

Once we were intimate with all of the “ins and outs” of TM1 10.2 on a cloud platform, we were able to work directly with IBM to demonstrate how a cloud-based solution would address the specific needs of one of our larger customers. After that, the Perficient team “on the ground” developed and deployed a “proof of concept” using real customer data, and partnered with the customer for the “hands on” evaluation and testing. Once the results were in, it was unanimous: “full speed ahead!”

A Versatile platform

During the project life-cycle, the cloud environment was seamless, allowing Perficient developers to work (at the client site or remotely) and complete all necessary tasks without issue. The IBM cloud team was available (24/7) to analyze any perceived bottlenecks and, when required, to “tweak” things per the Perficient team’s suggestions, ensuring an accurately configured cloud and a successful, on-time solution delivery.

Bottom Line

Built upon our internal team’s experience and IBM’s support, our delivered cloud-based solution is robust, cutting edge, and highly scalable.

Major takeaways

Even given everyone’s extremely high expectations, the project team was delighted and reported back the following major takeaways from the experience:

  • There is no “hardware administration” to worry about
  • No software installation headaches to hold things up!
  • The cloud provided an accurately configured VM -including dedicated RAM and CPU based exactly upon the needs of the solution.
  • The application was easily accessible, yet also very secure.
  • Everything was “powerfully fast” – did not experience any “WAN effects”.
  • 24/7 support provided by the IBM cloud team was “stellar”
  • The managed RAM and “no limits” CPU’s set things up to take full advantage of features like TM1’s MTQ.
  • The users could choose a complete web based experience or install CAFÉ on their machines.

In addition, IBM Concert (provided as part of the cloud experience) is a (quote) “wonderful tool for our user community to combine both TM1 & BI to create intuitive workflows and custom dashboards”.

More to Come

To be sure, you’ll be hearing much more about Concert & Cognos in the cloud and when you do, you can count on the Perficient team for expert delivery.
