Modeling Articles / Blogs / Perficient – https://blogs.perficient.com/tag/modeling/

Content Hub ONE Full Review in Action – Feedback and Afterthoughts (part 3 / 3)
https://blogs.perficient.com/2023/01/26/content-hub-one-full-review-in-action-feedback-and-afterthoughts/ (Thu, 26 Jan 2023)

Content Hub ONE developers did a great job in such a short time. However, from my point of view, there are a few issues that make it hard to use this platform commercially at its current stage. Let's take a look at them.

Feedback

  1. The lack of official support for in-demand media types beyond the four supported image types is a big blocker, especially given that there is no technical barrier to supporting them in principle. Hopefully, that will be sorted out with time.

  2. Many times while working with CH1 I got phantom errors without understanding the cause. For example, I wanted to upload media but got Cannot read properties of undefined (reading 'error') in return. Later, I realized it was caused by session expiration, which was not made clear at all. Also frustrating – I got these session issues even after just navigating the site, as if navigation did not reset the session expiration timer. But since this is a SaaS product, that is only my guess without access to the internals.

  3. Another issue I experienced today was CH1 going down, with the UI showing me a failed to fetch error. My cloud-deployed head app also failed to fetch content from CH1 at the same time. Unannounced or unplanned maintenance?

  1. Not being able to reference more than 10 other records limits platform usage. In my specific case, I had around 50 whisky items to expose through this app but was able to include only a maximum of 10 of them. What is worse – there is no error message or any other UI informing me about the limitation.

  2. When playing around with an existing type, I cannot change a field's type, and that limitation is understandable. The obvious workaround would be to delete the field and recreate it with the same name but another type (let's assume there's no content to be affected). Sadly, that was not possible and ended with a Failed entity definition saving with name: 'HC.C.collection' error. You can only recreate the field with a new name, not the same one you've just deleted. If you have lots of queries in your client code, you'll need to locate and update them accordingly.

  3. Not enough field types. For example, a URL could simply be placed into a small text field, but without proper validation editors may end up with broken links if they put a faulty URL value on a page.

There is some UI/UX to be improved

  1. Content Hub ONE demands more clicks for content creation compared to, say, XP. For example, if you publish a content item, related media does not get published automatically. You need to manually click through to each media item, locate it, and publish it explicitly. On large volumes of content, this adds unwanted labor.

  2. To help with the above, it would make sense to add a publish option to the context menu of an uploaded item in a draft state. That would eliminate the extra step of clicking into the item to publish it.

  1. On big monitors the name of a record sits isolated in the top left corner, and since it is not placed with the form fields, it is not immediately obvious that it is editable. That is especially important for records that cannot be renamed after creation. Bringing the name closer to the other fields would definitely help!

  1. Lack of drag & drop. It would be much easier to upload media by simply dragging the files onto a media listbox or any other reasonable control.

  2. Speaking of media, the UI does not support selecting multiple files for upload. Users have to upload them one after another.

  3. Better UI is needed for grouping and managing assets. Currently there are facets, but something more is needed – maybe the ability to group records into folders. I don't have a settled view on the exact shape of it, but I definitely see the need for such a feature, as even my ultra-simple demo case already requires navigational effort.

Conclusion

I don't want to end with criticism only, leaving a negative impression of this product: there are plenty of positives as well. I would mention the decent SDKs, the attention to detail where a feature is actually implemented (for example, the order of referenced items follows the order in which you select them), and the excellent idea of a modern asynchronous UI powered by webhooks that can notify you when a resource gets published to Edge (it just needs the session expiration issues sorted out).

Content Hub ONE is definitely in the early stages of its life. I hope that the development team and product managers will eventually move the product past this early stage and deliver a lightweight but reasonably powerful headless CMS that speeds up the content modeling and content delivery experience.

One of the strengths of a SaaS application is that Sitecore can keep adding functionality without customers needing to upgrade every time. The foot is already in the door, so the team needs to push on it!

Content Hub ONE Full Review in Action – Developing Client App (part 2 / 3)
https://blogs.perficient.com/2023/01/25/content-hub-one-full-review-in-action-developing-client-app/ (Thu, 26 Jan 2023)

In the previous post, I crafted two content types and created records for the home page and for each specific whisky item from my collection, populating them with actual data. Now let's create a client "head" app to consume and display that content from the Content Hub ONE tenant.

There is documentation for developers – a good start, at least.

CLI

Content Hub ONE comes with a helpful CLI and useful documentation. It supports installation via Docker, but for a local installation I personally enjoy being able to install it with my favorite chocolatey package management tool:

choco install Sitecore.ContentHubOne.Cli --source https://nuget.sitecore.com/resources/v2

With the CLI you execute commands against your tenants, with only one tenant active at a time. Adding a tenant is easy, but in order to do so you must provide the following four parameters:

  • organization-id
  • tenant-id
  • client-id
  • client-secret

Using the CLI you can serialize content the same way as with the XP/XM platforms and inspect the differences, which is a pretty important feature here. I pulled all my content into a folder using the ch-one-cli serialization pull content-item -c pdp command, where pdp is my type for whisky items:

The serialized item looks as below:

id: kghzWaTk20i2ZZO3USdEaQ
name: Glenkinchie
fields:
  vendor:
    value: 'Glenkinchie '
    type: ShortText
  brand:
    value: 
    type: ShortText
  years:
    value: 12
    type: Integer
  description:
    value: >
      The flagship expression from the Glenkinchie distillery, one of the stalwarts of the Lowlands. A fantastic introduction to the region, Glenkinchie 12 Year Old shows off the characteristic lightness and grassy elements that Lowland whiskies are known for, with nods to cooked fruit and Sauternes wine along the way. A brilliant single malt to enjoy as an aperitif on a warm evening.
    type: LongText
  picture:
    value:
    - >-
      {
        "type": "Link",
        "relatedType": "Media",
        "id": "lMMd0sL2mE6MkWxFPWiJqg",
        "uri": "http://content-api-weu.sitecorecloud.io/api/content/v1/media/lMMd0sL2mE6MkWxFPWiJqg"
      }
    type: Media
  video:
    value:
    - >-
      {
        "type": "Link",
        "relatedType": "Media",
        "id": "Vo5NteSyGUml53YH67qMTA",
        "uri": "http://content-api-weu.sitecorecloud.io/api/content/v1/media/Vo5NteSyGUml53YH67qMTA"
      }
    type: Media

After modifying it locally and saving the changes, it is possible to validate and promote them back to Content Hub ONE. With that in mind, you can automate all of this for your CI/CD pipelines using PowerShell, for example. I would also recommend watching this walkthrough video to familiarize yourself with the Content Hub ONE CLI in action.

SDK

There is a client SDK available with support for two languages: JavaScript and C#. For the sake of simplicity and speed, I decided to use the C# SDK for my ASP.NET head application. At first glance, the SDK looked decent and promising.

And quite easy to deal with:

var content = await _client.ContentItems.GetAsync();
 
var collection = content.Data
    .FirstOrDefault(i => i.System.ContentType.Id == "collection");
 
var whiskies = content.Data
    .Where(i => i.System.ContentType.Id == "pdp")
    .ToList();

However, it has one significant drawback: the only way to get media content for use in a head application is via Experience Edge and GraphQL. I came to this conclusion after spending a few hours troubleshooting and trying various approaches; unfortunately, I did not find anything about it in the documentation. In any case, with GraphQL querying Edge my client code looks nicer and more appealing, with fewer queries and fewer dependencies. The one and only dependency I need for this is the GraphQL.Client library. The additional thing required for querying Edge is setting an X-GQL-Token header with a value you obtain from the settings menu.
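
For reference, wiring up that client takes just a few lines. This is only a sketch of my setup rather than an official snippet – the Edge endpoint URL and the way the token is stored are assumptions, so verify both against your tenant settings:

using GraphQL.Client.Http;
using GraphQL.Client.Serializer.SystemTextJson;

// Assumed Experience Edge endpoint and token storage - adjust to your tenant.
var edgeEndpoint = "https://edge.sitecorecloud.io/api/graphql/v1";
var gqlToken = Environment.GetEnvironmentVariable("EDGE_GQL_TOKEN"); // value obtained from the settings menu

var client = new GraphQLHttpClient(edgeEndpoint, new SystemTextJsonSerializer());

// Every request to Edge must carry the X-GQL-Token header mentioned above.
client.HttpClient.DefaultRequestHeaders.Add("X-GQL-Token", gqlToken);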

The advantage of GraphQL is that you can query against the endpoints specifying quite complex structures of what you want to get back as a single response and receive only that without any unwanted overhead. I ended up having two queries:

For the whole collection:

{
  collection(id: "zTa0ARbEZ06uIGNABSCIvw") {
    intro
    rich
    archive {
      results {
        fileUrl
        name
      }
    }
    items {
      results {
        ... on Pdp {
          id
          vendor
          brand
          years
          description
          picture {
            results {
              fileUrl
              name
            }
          }
        }
      }
    }
  }
}

And for specific whisky record items requested from a whisky PDP page:

{
  pdp(id: $id) {
    id
    vendor
    brand
    years
    description
    picture {
      results {
        fileUrl
        name
      }
    }
    video {
      results {
        fileUrl
        name
      }
    }
  }
}

The last query results get easily retrieved in the code as:

var response = await Client.SendQueryAsync<Data>(request);
var whiskyItem = response.Data.pdp;
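
The request object itself is not shown above; with GraphQL.Client it is built roughly like this (pdpQuery is assumed to hold the pdp query text shown earlier, and the id is just the sample record id from the serialized item):

using GraphQL;

// pdpQuery is assumed to hold the pdp(id: $id) query shown above.
var request = new GraphQLRequest
{
    Query = pdpQuery,
    // Sample id borrowed from the serialized Glenkinchie item earlier in this series.
    Variables = new { id = "kghzWaTk20i2ZZO3USdEaQ" }
};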

Rich Text Challenges

When dealing with rich text fields, you have to build your own logic (see my inline, oversimplified example, lines 9-50) for rendering HTML output from the JSON structure you get for that field. The good news is that .NET deserializes it nicely, so you can at least iterate through this markup.

Sitecore provided an extremely helpful GraphQL IDE tool for testing and crafting queries, and it also shows how the same rich text field value looks in JSON format.

You may end up wrapping all the clumsy business logic for rendering rich text fields into a single HTML helper that produces the HTML output for the entire field and accepts several customization parameters. I did not do that, as it is labor-heavy, but for the sake of example I produced such a helper for the long text field type:

using Microsoft.AspNetCore.Html;
using Microsoft.AspNetCore.Mvc.Rendering;

public static class TextHelper
{
    // Turns the plain line breaks of a LongText field into <br> tags inside a single <p> element.
    public static IHtmlContent ToParagraphs(this IHtmlHelper htmlHelper, string text)
    {
        var modifiedText = text.Replace("\n", "<br>");
        var p = new TagBuilder("p");
        p.InnerHtml.AppendHtml(modifiedText);
        return p;
    }
}

which can be called from the view: @Html.ToParagraphs(Model.Description).
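
For the rich text field itself, the rendering logic boils down to walking the deserialized node tree. Below is a minimal sketch of that idea – it assumes the JSON is a tree of objects with type, content and text properties (which is how my field looked in the GraphQL IDE) and it only handles a couple of node types, so treat it as a starting point rather than a complete renderer:

using System.Net;
using System.Text;
using System.Text.Json;

public static class RichTextRenderer
{
    // Walks the rich text JSON tree and emits very basic HTML.
    public static string ToHtml(JsonElement node)
    {
        var sb = new StringBuilder();
        Render(node, sb);
        return sb.ToString();
    }

    private static void Render(JsonElement node, StringBuilder sb)
    {
        var type = node.TryGetProperty("type", out var t) ? t.GetString() : null;

        if (type == "text")
        {
            // Encode the raw text so editor input cannot break the markup.
            sb.Append(WebUtility.HtmlEncode(node.GetProperty("text").GetString()));
            return;
        }

        // Only paragraphs and headings are mapped here; any other node type
        // simply has its children rendered without a wrapping tag.
        var (open, close) = type switch
        {
            "paragraph" => ("<p>", "</p>"),
            "heading" => ("<h2>", "</h2>"),
            _ => (string.Empty, string.Empty)
        };

        sb.Append(open);
        if (node.TryGetProperty("content", out var children))
        {
            foreach (var child in children.EnumerateArray())
            {
                Render(child, sb);
            }
        }
        sb.Append(close);
    }
}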

Supporting ZIP downloads

On the home page there is a download link sitting within the rich text content. This link references a controller action that returns a zip archive with the correct MIME type.

public async Task<IActionResult> Download()
{
    // fetching the whole collection here is overkill; ideally we would request just the archive field
    var collection = await _graphQl.GetCollection();
 
    if (collection.Archive.Results.Any())
    {
        var url = collection.Archive.Results[0].FileUrl;
        var name = collection.Archive.Results[0].Name;
        name = Path.GetFileNameWithoutExtension(name);
 
        // gets actual bytes from ZIP binary stored as CH1 media
        var binaryData = await Download(url);
        if (binaryData != null)
        {
            // Set the correct MIME type for a zip file
            Response.Headers.Add("Content-Disposition", $"attachment; filename={name}");
            Response.ContentType = "application/zip";
 
            // Return the binary data as a FileContentResult
            return File(binaryData, "application/zip");
        }
    }
 
    return StatusCode(404);
}
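
The Download(url) helper referenced above is not shown in the post; a possible implementation simply streams the published media from its CDN URL. The name and signature come from the call above, while the body is my own sketch (in a real app the HttpClient would come from IHttpClientFactory rather than a static field):

private static readonly HttpClient Http = new HttpClient();

// Fetches the binary behind a published media URL, or null if the request fails.
private async Task<byte[]?> Download(string url)
{
    var response = await Http.GetAsync(url);
    if (!response.IsSuccessStatusCode)
    {
        return null;
    }

    return await response.Content.ReadAsByteArrayAsync();
}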

Supporting video

For the sake of the demo, I simply embedded a video player into the page and referenced the URL of the published media from the CDN:

<video width="100%" style="margin-top: 20px;" controls>
    <source src="@Model.Video.Results[0].FileUrl" type="video/mp4">
    Your browser does not support the video tag.
</video>

Bringing it all together

I built and deployed the demo at https://whisky.martinmiles.net. You can also find the source code of the resulting .NET 7 head application project at this GitHub link.

Now, run it in a browser. All the content seen on the page is editable from Content Hub ONE, as modeled and submitted earlier.

That concludes the second part of this series. The final part will share some of my thoughts and feedback with the team.

Modeling as Proof of Concept (POC)
https://blogs.perficient.com/2018/03/05/modeling-proof-concept-poc/ (Mon, 05 Mar 2018)

Why do I give precedence to building the model as a Proof of Concept (POC) instead of following established methodologies such as CRISP-DM, SEMMA, AIE, MAD Skills, etc.?

Most data scientists will say that these are two different things used for different purposes: one is a methodology, a step-by-step approach to delivering a workable model, and the other is a way to test your idea and make sure your model is possible. Yes, they are different. However, the goal in both approaches is to end up with a good working model.

Here are the reasons why I prefer to create a working model as Proof of Concept (POC).

  • Flexibility – It gives you the needed flexibility; you can incorporate a feedback loop, validation, third-party knowledge and SME input into any step of the process.
  • Benchmark – It gives you a baseline model that you can use as a reference to compare against your other models. Also, you can demo your baseline model at any time throughout your engagement.
  • Support – Not all data scientists are data engineers; working on the POC as a collaborative task guarantees that the majority of the heavy lifting, such as ETL, parallel computation, and data preprocessing, is done on the server side, letting you concentrate on the modeling.
  • Holistic View – By creating a POC you get to see how your model fits within the scheme of things; you see the fruits of your labor and not just an isolated model. You will also know exactly what tools, what resources, and how much time are required to create a working prototype, because you will have a complete view of your creation.
  • Challenges – You will identify all your challenges early in your development and will be able to attend to them as they happen.

These are my reasons for creating the model using a Proof of Concept approach; everyone is different. However, I urge you to try it. I'm a strong believer that you will be much more satisfied with the model you create.

Machine Learning Vs. Statistical Learning
https://blogs.perficient.com/2018/01/29/machine-learning-vs-statistical-learning/ (Mon, 29 Jan 2018)

As a data scientist, I often get asked: what is the difference between Machine Learning and Statistical Learning? Even though you would think the answer is obvious, a lot of novice data scientists are still confused about these two approaches.

As a beginner data scientist, it is hard to see the differences between the two, and that is probably due to how we learn data science. To become a data scientist, you are required to develop knowledge in multiple subjects such as statistics, programming, SQL and linear algebra, and to have domain expertise. You will hopefully start your journey with statistics; most data scientists believe this is the foundation of data science, and I cannot disagree with them.

Then, once you get comfortable with statistics, you eventually expand your horizons within data science by sailing away from the all-too-familiar small datasets such as Titanic, Iris, Cars, Diamonds, etc. to more uncharted territories – a new world of Big Data. Confident in Statistical Learning, you will probably take on a big data challenge and hope to generate insight from your data by applying Statistical Learning techniques. I don't want to disappoint you, but not much value will come from this method, because you have approached the situation incorrectly: you applied a Statistical Learning solution to a Machine Learning problem. I cannot stress enough the importance of understanding the differences between the two.

To save novice data scientists from future disappointments, I have composed a list of differences between Statistical Learning and Machine Learning to aid you on your journey to success.

Here are some of the differences:

  1. Both methods are data dependent. However, Statistical Learning relies on rule-based programming and is formalized as relationships between variables, whereas Machine Learning learns from data without explicitly programmed instructions.
  2. Statistical Learning is based on smaller datasets with a few attributes, compared to Machine Learning, which can learn from billions of observations and attributes.
  3. Statistical Learning operates on assumptions such as normality, no multicollinearity and homoscedasticity, whereas Machine Learning is not as assumption-dependent and in most cases ignores them.
  4. Statistical Learning is mostly about inference; most of the ideas are generated from samples, populations and hypotheses. Machine Learning, in comparison, emphasizes predictions, supervised learning, unsupervised learning and semi-supervised learning.
  5. Statistical Learning is math intensive, is based on coefficient estimation, and requires a good understanding of your data. Machine Learning, on the other hand, identifies patterns in your dataset through iterations, which requires far less human effort.

Most will argue that Machine Learning is superior, and to some extent I agree. On the other hand, by applying Statistical Learning you familiarize yourself better with your data, which helps you build the confidence you need in your modeling.

An Architectural Approach to Cognos TM1 Design
https://blogs.perficient.com/2014/08/28/an-architectural-approach-to-cognos-tm1-design/ (Thu, 28 Aug 2014)

Over time, I've written about keeping your TM1 model design "architecturally pure". What this means is that you should strive to keep a model's "areas of functionality" distinct within your design.

Common Components

I believe that all TM1 applications, for example, are made up of only 4 distinct "areas of functionality". They are absorption (of key information from external data sources), configuration (of assumptions about the absorbed data), calculation (where the specific "magic" happens, i.e. business logic is applied to the source data using the set assumptions) and consumption (of the information processed by the application, ready to be reported on).

Some Advantages

Keeping functional areas distinct has many advantages:

  • Reduces complexity and increases sustainability within components
  • Reduces the possibility of one component negatively affecting another
  • Increases the probability of reuse of particular (distinct) components
  • Promotes a technology-independent design, meaning components can be built using the technology that best fits their particular objective
  • Allows components to be designed, developed and supported by independent groups
  • Diminishes duplication of code, logic, data, etc.
  • Etc.

Resist the Urge

There is always a tendency to "jump in" and "do it all" using a single tool or technology or, in the case of Cognos TM1, a few enormous cubes; and today, with every release of software, there are new "package connectors" that allow you to directly connect (even external) system components. In addition, you may "understand the mechanics" of how a certain technology works, which will allow you to "build" something, but without comprehensive knowledge of architectural concepts you may end up with something that does not scale, has unacceptable performance or is costly to sustain.

Final Thoughts

Some final thoughts:

  • Try white boarding the functional areas before writing any code
  • Once you have your “like areas” defined, search for already existing components that may meet your requirements
  • If you do decide to “build new”, try to find other potential users for the new functionality. Could you partner and co-produce (and thus share the costs) a component that you both can use?
  • Before building a new component, "try out" different technologies. Which best serves the needs of the component's objectives? (A rule of thumb: if you can find more than 3 other technologies or tools that fit your requirements better than the technology you planned to use, you're in trouble!)

And finally:

Always remember, just because you “can” doesn’t mean you “should”.

A Practice Vision
https://blogs.perficient.com/2014/08/27/a-practice-vision/ (Wed, 27 Aug 2014)

Vision

Most organizations today have had successes implementing technology, and they are happy to tell you about it. From a tactical perspective, they understand how to install, configure and use whatever software you are interested in. They are "practitioners". But how many can bring a "strategic vision" to a project or to your organization in general?

An "enterprise" or "strategic" vision is based upon an "evolutionary roadmap" that starts with the initial "evaluation and implementation" (of a technology or tool), continues with "building and using", and finally (hopefully) arrives at the organization, optimization and management of all of the earned knowledge (with the tool or technology). You should expect that whoever you partner with can explain what their practice vision or methodology is or, at least, talk to the "phases" of the evolution process:

Evaluation and Implementation

The discovery and evaluation that takes place with any new tool or technology is the first phase of a practice's evolution. A practice should be able to explain how testing is accomplished and what it covers. How did they determine whether the tool or technology to be used will meet or exceed your organization's needs? Once a decision is made, are they practiced at the installation, configuration and everything else that may be involved in deploying the new tool or technology for use?

Build, Use, Repeat

Once deployed, and once "building and using" components with that tool or technology begins, the efficiency with which those components are developed, as well as their level of quality, will depend upon the level of experience (with the technology) that a practice possesses. Typically, "building and using" is repeated with each successful "build", so how many times has the practice successfully used this technology? By human nature, once a solution is "built" and seems correct and valuable, it will be saved and used again. Hopefully, this solution will have been shared as a "knowledge object" across the practice. Although most may actually reach this phase, it is not uncommon to find:

  • Objects with similar or duplicate functionality (they reinvented the wheel over and over).
  • Poor naming and filing of objects (no one but the creator knows it exists or perhaps what it does)
  • Objects not shared (objects visible only to specific groups or individuals, not the entire practice)
  • Objects that are obsolete or do not work properly or optimally are being used.
  • Etc.

Manage & Optimization

At some point, usually while (or after a certain number of) solutions have been developed, a practice will "mature its development or delivery process" to the point that it begins investing time, and perhaps dedicating resources, to organize, manage and optimize its developed components (i.e. "organizational knowledge management", sometimes known as IP or intellectual property).

You should expect a practice to have a recognized practice leader and a “governing committee” to help identify and manage knowledge developed by the practice and:

  • inventory and evaluate all known (and future) knowledge objects
  • establish appropriate naming standards and styles
  • establish appropriate development and delivery standards
  • create, implement and enforce a formal testing strategy
  • continually develop “the vision” for the practice (and perhaps the industry)

 

More

As I've mentioned, a practice needs to take a strategic or enterprise approach to how it develops and delivers, and to do this it must develop its "vision". A vision will ensure that the practice is leveraging its resources (and methodologies) to achieve the highest rate of success today and over time. This is not simply "administering the environment" or "managing the projects"; it involves structured thought, best practices and a continued commitment to evolved improvement. What is your vision?

IBM OpenPages GRC Platform – modular methodology
https://blogs.perficient.com/2014/08/14/ibm-openpages-grc-platform-modular-methodology/ (Thu, 14 Aug 2014)

The OpenPages GRC platform includes 5 main “operational modules”. These modules are each designed to address specific organizational needs around Governance, Risk, and Compliance.

Operational Risk Management module “ORM”

The Operational Risk Management module is a document and process management tool that includes a monitoring and decision support system, enabling an organization to analyze, manage and mitigate risk simply and efficiently. The module automates the process of identifying, measuring and monitoring operational risk by combining all risk data (such as risk and control self-assessments, loss events, scenario analysis, external losses and key risk indicators (KRIs)) in a single place.

Financial Controls Management module “FCM”

The Financial Controls Management module reduces the time and resource costs associated with compliance for financial reporting regulations. This module combines document and process management with awesome interactive reporting capabilities in a flexible, adaptable, easy-to-use environment, enabling users to easily perform all the activities necessary for complying with financial reporting regulations.

Policy and Compliance Management module “PCM”

The Policy and Compliance Management module is an enterprise-level compliance management solution that reduces the cost and complexity of complying with multiple regulatory mandates and corporate policies. This module enables companies to manage and monitor compliance activities through a full set of integrated functionality:

  • Regulatory Libraries & Change Management
  • Risk & Control Assessments
  • Policy Management, including Policy Creation, Review & Approval and Policy Awareness
  • Control Testing & Issue Remediation
  • Regulator Interaction Management
  • Incident Tracking
  • Key Performance Indicators
  • Reporting, monitoring, and analytics

IBM OpenPages IT Governance module “ITG”

This module aligns IT services, risks and policies with corporate business initiatives, strategies and operational standards, allowing internal IT controls and risk to be managed according to the business processes they support. In addition, this module unites "silos" of IT risk and compliance, delivering visibility, better decision support and ultimately enhanced performance.

IBM OpenPages Internal Audit Management module “IAM”

This module provides internal auditors with a view into an organization's governance, risk and compliance, affording the chance to supplement and coexist with broader risk and compliance management activities throughout the organization.

One Solution

The IBM OpenPages GRC Platform Modules Object Model ("ORM", "FCM", "PCM", "ITG" and "IAM") interactively delivers a superior solution for Governance, Risk, and Compliance. More to come!

The Installation Process – IBM OpenPages GRC Platform
https://blogs.perficient.com/2014/08/13/the-installation-process-ibm-openpages-grc-platform/ (Wed, 13 Aug 2014)

When preparing to deploy the OpenPages platform, you’ll need to follow these steps:

  1. Determine which server environment you will deploy to – Windows or AIX.
  2. Determine your topology – how many servers will you include as part of the environment? Multiple application servers? 1 or more reporting servers?
  3. Perform the installation of the OpenPages prerequisite software for the chosen environment – and for each server's designated purpose (database, application or reporting).
  4. Perform the OpenPages installation, being conscious of the software that is installed as part of that process.

Topology

Depending upon your needs, you may find that you'll want to use separate servers for your application, database and reporting tiers. In addition, you may want to add additional application or reporting servers to your topology.

 

 

[Figure: example OpenPages deployment topology]

After the topology is determined, you can use the following information to prepare your environment. I recommend clean installs, meaning starting with fresh or new machines – and VMs are just fine ("The VMWare performance on a virtualized system is comparable to native hardware. You can use the OpenPages hardware requirements for sizing VM environments" – IBM).

(Note: the following applies if you've chosen to go with Oracle rather than DB2.)

MS Windows Servers

All servers that will be part of the OpenPages environment must have the following installed before proceeding:

  • Microsoft Windows Server 2008 R2 and later Service Packs (64-bit operating system)
  • Microsoft Internet Explorer 7.0 (or 8.0 in Compatibility View mode)
  • A file compression utility, such as WinZip
  • A PDF reader (such as Adobe Acrobat)

The Database Server

In addition to the above “all servers” software, your database server will require the following software:

  • Oracle 11gR2 (11.2.0.1) and any higher Patch Set – the minimum requirement is Oracle 11.2.0.1 October 2010 Critical Patch Update.

Application Server(s)

Again, in addition to the above “all servers” software, the server that hosts the OpenPages application modules should have the following software installed:

  • JDK 1.6 or greater, 64-bit (Note: this is a prerequisite only if your OpenPages product does not include WebLogic Server)
  • Application Server Software (one of the following two options):

    o   IBM WebSphere Application Server ND 7.0.0.13 and any higher Fix Pack (minimum requirement is WebSphere 7.0.0.13)

    o   Oracle WebLogic Server 10.3.2 and any higher Patch Set (minimum requirement is Oracle WebLogic Server 10.3.2; this is a prerequisite only if your OpenPages product does not include Oracle WebLogic Server)

  • Oracle Database Client 11gR2 (11.2.0.1) and any higher Patch Set

Reporting Server(s)

The server that you intend to host the OpenPages CommandCenter must have the following software installed (in addition to the above “all servers” software):

  • Microsoft Internet Information Services (IIS) 7.0 or Apache HTTP Server 2.2.14 or greater
  • Oracle Database Client 11g R2 (11.2.0.1) and any higher Patch Set

During the OpenPages Installation Process

As part of the OpenPages installation, the following is installed automatically:

 

For Oracle WebLogic Server & IBM WebSphere Application Server environments:

  • The OpenPages application
  • Fujitsu Interstage Business Process Manager (BPM) 10.1
  • IBM Cognos 10.2
  • OpenPages CommandCenter
  • JRE 1.6 or greater

If your OpenPages product includes the Oracle WebLogic Server:

  • Oracle WebLogic Server 10.3.2

If your OpenPages product includes the Oracle Database:

  • Oracle Database Server Oracle 11G Release 2 (11.2.0.1) Standard Edition with October 2010 CPU Patch (on a database server system)
  • Oracle Database Client 11g Release 2 (11.2.0.1) with October 2010 CPU Patch applied 64-bit (on an application server system)
  • Oracle Database Client 11g Release 2 (11.2.0.1) with October 2010 CPU Patch applied 32-bit (on a reporting server system)

 Thanks!

IBM OpenPages Start-up
https://blogs.perficient.com/2014/08/12/ibm-openpages-start-up/ (Tue, 12 Aug 2014)

In the beginning…

OpenPages was a company "born" in Massachusetts, providing Governance, Risk, and Compliance software and services to customers. Founded in 1996, OpenPages had more than 200 customers worldwide, including Barclays, Duke Energy and TIAA-CREF. On October 21, 2010, OpenPages was officially acquired by IBM:

http://www-03.ibm.com/press/us/en/pressrelease/32808.wss

What is it?

OpenPages provides a technology-driven way of understanding the full scope of risk an organization faces. In most cases, a company's risk information is extremely fragmented – like data collected and maintained in numerous disparate spreadsheets – making aggregation of the risks faced by a company extremely difficult and unmanageable.

Key Features

IBM’s OpenPages GRC Platform can help by providing many capabilities to simplify and centralize compliance and risk management activities. The key features include:

  • Provides a shared content repository that can (logically) present the processes, risks and controls in many-to-many and shared relationships.
  • Supports the import of corporate data and maintains an audit trail ensuring consistent regulatory enforcement and monitoring across multiple regulations.
  • Supports dynamic decision making with its CommandCenter interface, which provides interactive, real-time executive dashboards and reports with drill-down.
  • Is simple to configure and localize with detailed user-specific tasks and actions accessible from a personal browser based home page.
  • Provides for Automation of Workflow for management assessment, process design reviews, control testing, issue remediation and sign-offs and certifications.
  • Utilizes Web Services for integration. OpenPages uses the OpenAccess API to interoperate with leading third-party applications to enhance policies and procedures with actual business data.

Understanding the Topology

The OpenPages GRC Platform consists of the following 3 components:

  • 1 database server
  • 1 or more application servers
  • 1 or more reporting servers

Database Server

The database is the centralized repository for metadata, (versions of) application data and access control. OpenPages requires a set of database users and a tablespace (referred to as the "OpenPages database schema"). These database components are installed automatically during the OpenPages application installation, which configures all of the required elements. You can use either Oracle or DB2 for your OpenPages GRC Platform repository.

 Application Server(s)

The application server is required to host the OpenPages applications. The application server runs the application modules, and includes the definition and administration of business metadata, UI views, user profiles, and user authorization.

 Reporting Server

The OpenPages CommandCenter is installed on the same computer as IBM Cognos BI and acts as the reporting server.

Next Steps

An excellent next step would be to visit the IBM site and review the available slides and whitepapers. After that, stay tuned to this blog!

Configuring Cognos TM1 Web with Cognos Security
https://blogs.perficient.com/2014/08/07/configuring-cognos-tm1-web-with-cognos-security/ (Thu, 07 Aug 2014)

Recently I completed upgrading a client's IBM Cognos environment – both TM1 and BI. It was a "jump" from Cognos 8 to version 10.2, and from TM1 9.5 to version 10.2.2. In this environment, we had multiple virtual servers (Cognos lives on one, TM1 on another, and the third is the gateway/web server).

Once the software was all installed and configured (using IBM Cognos Configuration – and, yes, you still need to edit the TM1 configuration cfg file), we started the services and everything appeared to be good. I spun through the desktop applications (Perspectives, Architect, etc.) and then went to the web browser, first to test TM1 Web:

http://stingryweb:9510/tm1web/

The familiar page loads:

[Screenshot: the TM1 Web login page]

But when I enter my credentials, I get the following:

 

[Screenshot: the error returned after entering credentials]

Go to Google

Since an installation and configuration is not something you do every day, Google reports that there are evidently 2 files that the installation placed on the web server that actually belong on the Cognos BI server. These files need to be located, edited and then copied to the correct location for TM1 Web to use IBM Cognos authentication security.

What files?

There are 2 files; an XML file (variables_TM1.xml.sample) and an HTML file (tm1web.html). These can be found on the server that you installed TM1Web – or can they? Turns out, they are not found individually but are included in zip files:

Tm1web_app.zip (that is where you’ll find the xml file) and tm1web_gateway.zip (and that is where you will find tm1web.html):

[Screenshot: the tm1web_app.zip and tm1web_gateway.zip files in the bi_files folder]

I found mine in:

Program Files\ibm\cognos\tm1_64\webapps\tm1web\bi_files

Make them your own

Once you unzip the files, you need to rename the XML file (to drop the ".sample" extension) and place it on the Cognos BI server in:

Program Files\ibm\cognos\c10_64\templates\ps\portal.

Next, edit the file (even though it's an XML file, it's small, so you can use Notepad). What you need to do is modify the URLs within the <urls> tags – the "localhost" string should be replaced with the name of the server running TM1 Web. You'll find three of them (one for TM1WebLogin.aspx, one for TM1WebLoginHandler.aspx and one for TM1WebMain.aspx).

Now, copy your tm1web.html file to (on the Cognos BI server)

Program Files\ibm\cognos\c10_64\webcontent\tm1\web and edit it (again, you can use notepad). One more thing, the folder “tm1” may need to be manually created.

The HTML file update is straightforward (you need to point to where Cognos TM1 Web is running) and there is only a single line in the file. You change:

var tm1webServices = ["http://localhost:8080"];

To:

var tm1webServices = ["http://stingryweb:9510"];

 

Now, after stopping and starting the server's web services:

 

[Screenshot: TM1 Web after restarting the web services]

The above steps are simple; you just need to be aware of these extra, very manual steps….
Perficient takes Cognos TM1 to the Cloud
https://blogs.perficient.com/2014/07/01/perficient-takes-cognos-tm1-to-the-cloud/ (Tue, 01 Jul 2014)

IBM Cognos TM1 is well known as the planning, analysis and forecasting software that delivers flexible solutions to address requirements across an enterprise, as well as providing real-time analytics, reporting and what-if scenario modeling – and Perficient is well known for delivering expertly designed TM1-based solutions.

Analytic Projects

Perhaps phase zero of a typical analytics project would involve our topology experts determining the exact server environment required to support the implementation of a number of TM1 servers (based not only upon industry-proven practices, but also upon our own breadth of practical "in the field" experience). Next would be the procurement and configuration of said environment (and prerequisite software), and finally the installation of Cognos TM1.

It doesn’t stop there

As TM1 development begins, our engineers work closely with internal staff to outline processes for the (application and performance) testing and deployment of developed TM1 models, but also to establish a maintainable support structure for after the "go live" date. "Support" includes not only the administration of the developed TM1 application but also a "road map" that assigns responsibilities such as:

  • Hardware monitoring and administration
  • Software upgrades
  • Expansion or reconfiguration based upon additional requirements (i.e. data or user base changes or additional functionality or enhancements to deployed models)
  • And so on…

Teaming Up

Earlier this year the Perficient analytics team teamed up with the IBM Cloud team to offer an interesting alternative to the “typical”: Cognos TM1 as a service in the cloud.

Using our internal TM1 models and colleagues literally all over the country, we evaluated and tested the viability of a fully cloud based TM1 solution.

What we found was that it works, and works well, offering unique advantages to our customers:

  • Lowers the “cost of entry” (getting TM1 deployed)
  • Lowers the total cost of ownership (ongoing “care and feeding”)
  • Reduces the level of capital expenditures (doesn’t require the procurement of internal hardware)
  • Reduces IT involvement (and therefore expense)
  • Removes the need to plan for, manage and execute upgrades when newer releases are available (new features are available sooner)
  • (Licensed) users anywhere in the world have access from day 1 (regardless of internal constraints)
  • Provides for the availability of auxiliary environments for development and testing (without additional procurement and support)

In the field

Once we were intimate with all of the "ins and outs" of TM1 10.2 on a cloud platform, we were able to work directly with IBM to demonstrate how a cloud-based solution would address the specific needs of one of our larger customers. After that, the Perficient team "on the ground" developed and deployed a "proof of concept" using real customer data, and partnered with the customer for the hands-on evaluation and testing. Once the results were in, it was unanimous: "full speed ahead!"

A Versatile platform

During the project life-cycle, the cloud environment was seamless; allowing Perficient developers to work (at the client site or remotely) and complete all necessary tasks without issue. The IBM cloud team was available (24/7) to analyze any perceived bottlenecks and, when required, to “tweak” things per the Perficient team’s suggestions, ensuring an accurately configured cloud and a successful, on-time solution delivery.

Bottom Line

Built upon our internal team's experience and IBM's support, our delivered cloud-based solution is robust, cutting edge and infinitely scalable.

Major takeaways

Even given everyone’s extremely high expectations, the project team was delighted and reported back the following major takeaways from the experience:

  • There is no “hardware administration” to worry about
  • No software installation headaches to hold things up!
  • The cloud provided an accurately configured VM – including dedicated RAM and CPU sized exactly to the needs of the solution.
  • The application was easily accessible, yet also very secure.
  • Everything was “powerfully fast” – did not experience any “WAN effects”.
  • 24/7 support provided by the IBM cloud team was “stellar”
  • The managed RAM and “no limits” CPU’s set things up to take full advantage of features like TM1’s MTQ.
  • The users could choose a complete web based experience or install CAFÉ on their machines.

In addition, IBM Concert (provided as part of the cloud experience) is a (quote) “wonderful tool for our user community to combine both TM1 & BI to create intuitive workflows and custom dashboards”.

More to Come

To be sure, you’ll be hearing much more about Concert & Cognos in the cloud and when you do, you can count on the Perficient team for expert delivery.

Exercising IBM Cognos Framework Manager
https://blogs.perficient.com/2014/06/16/exercising-ibm-cognos-framework-manager/ (Mon, 16 Jun 2014)

In Framework Manager, an expression is any combination of operators, constants, functions and other components that evaluates to a single value. You can build expressions to create calculation and filter definitions. A calculation is an expression that you use to create a new value from existing values contained within a data item. A filter is an expression that you use to retrieve a specific subset of records. Let's walk through a few simple examples:

Using a Session Parameter

I've talked about session parameters in Framework Manager in a previous post (a session parameter is a variable that IBM Cognos Framework Manager associates with a session – for example, user ID and preferred language – and you can also create your own).

It doesn't matter whether you use a default session parameter or one you've created; it's easy to include a session parameter in your Framework Manager meta model.

Here is an example.

In a Query Subject (a set of query items that have a relationship and are used to optimize the data being retrieved for reporting), you can click on the Calculations tab and then click Add.

Framework Manager shows the Calculation Definition dialog where you can view and select from the Available Components to create a new Calculation. The Components are separated into 3 types – Model, Functions and Parameters.

I clicked on Parameters and then expanded Session Parameters. Here FM lists all of the default parameters and any I've created as well. I selected current_timestamp to add as my Expression definition (note – FM wraps the expression with the # character to indicate that it's a macro that will be resolved at runtime).

During some additional experimentation I found:

  • You can add a reasonable name for your calculation
  • You may have to (or want to) nest functions within the expression statement (e.g. I've added the function "sq" as an example; this function wraps the returned value in single quotes). Hint: the more functions you nest, the slower the performance, so think it through.
  • If you've got the expression correct (the syntax, anyway), the blue Run arrow lights up and you can test the expression and view the results in the lower right-hand pane of the dialog. Tips will show you errors; Results will show the runtime result of your expression.
  • Finally, you can click OK to save your calculation expression with your Query Subject.

[Screenshot: the Calculation Definition dialog with the current_timestamp session parameter]

Filtering

Filtering works the same way as calculations. In my example I'm dealing with parts and inventories. If I'd like to create a query subject that lists only part numbers with a current inventory count of 5 or less, I can set a filter by clicking on the Filter tab and then Add (just as we did for the calculation).

This time I can select the column InventoryCount from the Model tab and add it as my Expression definition. From there I can grab the “less than or equal to” operator (you can type it directly or select it from the Function list).

[Screenshot: the Filter Definition dialog with the InventoryCount expression]

Filter works the same as Calculation as far as syntax and tips (but it does not give you a chance to preview your result or the effect of your filter).

Click OK to save your filter.

JOIN ME

Finally, my inventory report is based upon the SQL table named PartInventory, which only provides a part number and an inventory count. I'd like to add part descriptions (which are in a table named simply "Part") to my report, so I click on the SQL tab and create a simple join query (joining the tables using PartNo):

[Screenshot: the SQL tab with the join query]

To make sure everything looks right, I can click on the tab named Test and then click Test Sample.

You can see that there is a part name for each part number, the Time Stamp session parameter is displayed for each record, and only those parts where the inventory count is 5 or less are returned:

[Screenshot: the Test Sample results]

By the way, back on the SQL tab, you can:

  • Clear everything (and start over)
  • Enter or Modify SQL directly (remember to click the Validate button to test your code)
  • Insert an additional data source into your Query subject to include data from another source, perhaps an entirely different SQL database.
  • Insert a macro. For example, you can add inline macro functions to your SQL query.

Here is an example:

#$Corvette_Year_Grouping{$CarYear}#

Notice the # characters indicating that the code within is a macro to be resolved within the SQL query.

This code uses a parameter map (I've blogged about parameter maps in the past) to convert a session parameter (set to a particular vehicle model year) into the name of a particular SQL table column, and to include that column of information in my query subject result. In other words, the database table column included in the query result will be decided at run time.

[Screenshot: the SQL query containing the parameter map macro]

And our result:

[Screenshot: the query result including the year-specific column]

You can see that these are simple but thought-provoking examples of the power of IBM Cognos Framework Manager.

Framework Manager is a metadata modeling tool that drives query generation for Cognos BI reporting. Every reporting project should begin with a solid meta model to ensure success. More to come…
