Technical Articles / Blogs / Perficient – Expert Digital Insights

Create an RSS Feed using HTL (May 3, 2024) – https://blogs.perficient.com/2024/05/03/create-an-rss-feed-using-htl/

Did you know you can create an RSS feed in AEM (Adobe Experience Manager) for external applications like Eloqua? While AEM provides out-of-the-box functionality for RSS feeds, customizing them may require additional steps. Below you’ll find several options for creating RSS feeds in AEM along with steps for creating one using HTL.  

3 Options to Create an RSS Feed in AEM  

  1. Override Default JSP Functionality (JSP Approach) 
    • Customize the JSP code to tailor the RSS feed according to your requirements 
    • This approach requires writing backend logic in Java and JSP
  2. Create a Servlet for RSS Feed
    • Implement the logic within the servlet to fetch and format the necessary data into RSS feed XML
    • Configure the servlet to respond to specific requests for the RSS feed endpoint
    • This approach allows more control and flexibility over the RSS feed generation process
  3. Use HTL with Sling Model (HTL Approach)
    • Write HTL templates combined with a Sling Model to generate the RSS feed
    • Leverage Sling Models to retrieve data from AEM and format it within the HTL template
    • This approach utilizes AEM’s modern templating language and component models
    • HTL is preferred for templating tasks due to its simplicity and readability

Expected RSS Feed 

Below is the feed response that an external source can consume to integrate and send emails accordingly. The feed results can be filtered by category tag name using the category query parameter in the feed URL. 

  • https://www.demoproject.com/products/aem.rss 
  • https://www.demoproject.com/products/aem.rss?category=web
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <atom:link rel="self" href="https://www.demoproject.com/products/aem" />
        <link>https://www.demoproject.com/products/aem</link>
        <title>AEM</title>
        <description />
        <pubDate>Fri, 29 Sep 2023 02:08:26 +0000</pubDate>
        <item>
            <guid>https://www.demoproject.com/products/aem/one.rss.xml</guid>
            <atom:link rel="self" href="https://www.demoproject.com/products/aem/sites" />
            <link>https://www.demoproject.com/products/aem/sites</link>
            <title>Aem Sites</title>
            <description><![CDATA[AEM Sites is the content management system within Adobe Experience Manager that gives you one place to create, manage and deliver remarkable digital experiences across websites, mobile sites and on-site screens.]]></description>
            <pubDate>Tue, 31 Oct 2023 02:23:04 +0000</pubDate>
        </item>
        <item>
            <guid>https://www.demoproject.com/products/aem/two.rss.xml</guid>
            <atom:link rel="self" href="https://www.demoproject.com/products/aem/assets" />
            <link>https://www.demoproject.com/products/aem/assets</link>
            <title>Aem Assets</title>
            <description><![CDATA[Adobe Experience Manager (AEM) Assets is a digital asset management system (DAM) that is built into AEM. It stores and delivers a variety of assets (including images, videos, and documents) with their connected metadata in one secure location.]]></description>
            <pubDate>Thu, 26 Oct 2023 02:21:19 +0000</pubDate>
            <category>pdf,doc,image,web</category>
        </item>
    </channel>
</rss>

Steps for Creating RSS Feed Using HTL 

  • Create an HTML file under the page component 
  • Create a PageFeed Sling Model that returns data for the RSS feed 
  • Add a rewrite rule in the dispatcher rewrite rules file 
  • Update the ignoreUrlParams for the required params 

Page Component – RSS html  

Create an HTML file named “rss.xml.html” under the page component. Either ‘rss.html’ or ‘rss.xml.html’ works here; the ‘rss.xml.html’ naming convention makes it clear that the template generates XML data. PageFeedModel provides the page data for the expected feed.  

  • The category tag is rendered only when the page properties are authored with tag values
  • CDATA (character data) marks a section of element content that should be treated as plain character data rather than markup
<?xml version="1.0" encoding="UTF-8"?>
<sly data-sly-use.model="com.demoproject.aem.core.models.PageFeedModel" />
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <atom:link rel="self" href="${model.link}"/>
        ${model.linkTag @ context='unsafe'}
        <title>${model.title}</title>
        <description>${model.subTitle}</description>
        <pubDate>${model.publishedDate}</pubDate>
        <sly data-sly-list.childPage="${model.entries}">
            <item>
                <guid>${childPage.feedUrl}</guid>
                <atom:link rel="self" href="${childPage.link}"/>
                ${childPage.linkTag @ context='unsafe'}
                <title>${childPage.title}</title>
                <description><![CDATA[${childPage.description}]]></description>
                <pubDate>${childPage.publishedDate}</pubDate>
                <sly data-sly-test="${childPage.tags}">
                    <category>${childPage.tags}</category>
                </sly>
            </item>
        </sly>
    </channel>
</rss>  

Page Feed Model

This is a component model that takes the currentPage as the root and retrieves a list of its child pages. Subsequently, it dynamically constructs properties such as publish date and categories based on the page’s tag field. These properties enable filtering of results based on query parameters. Once implemented, you can seamlessly integrate this model into your component to render the RSS feed.

  • Using currentPage, get the current page properties as a ValueMap 
  • Retrieve title, description, pubDate, link for current page 
  • Retrieve title, description, pubDate, link, tags (categories) for child pages 
  • Filter the child pages list based on the query param value (category)
//PageFeedModel sample code 
package com.demoproject.aem.core.models;

import com.adobe.cq.export.json.ExporterConstants;
import com.day.cq.commons.Externalizer;
import com.day.cq.commons.jcr.JcrConstants;
import com.day.cq.wcm.api.Page;
import com.day.cq.wcm.api.PageFilter;
import com.demoproject.aem.core.utility.RssFeedUtils;
import lombok.Getter;
import org.apache.commons.lang.StringEscapeUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.sling.api.SlingException;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ValueMap;
import org.apache.sling.models.annotations.DefaultInjectionStrategy;
import org.apache.sling.models.annotations.Exporter;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.injectorspecific.SlingObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

@Model(adaptables = {
    Resource.class,
    SlingHttpServletRequest.class
}, resourceType = PageFeedModel.RESOURCE_TYPE, defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL)
@Exporter(name = ExporterConstants.SLING_MODEL_EXPORTER_NAME, extensions = ExporterConstants.SLING_MODEL_EXTENSION)
public class PageFeedModel {

    protected static final String RESOURCE_TYPE = "demoproject/components/page";
    private static final Logger logger = LoggerFactory.getLogger(PageFeedModel.class);
    @SlingObject
    ResourceResolver resourceResolver;
    @SlingObject
    SlingHttpServletRequest request;
    @Inject
    private Page currentPage;
    @Getter
    private String title;
    @Getter
    private String link;
    @Getter
    private String linkTag;
    @Getter
    private String description;
    @Getter
    private List<ChildPageModel> entries;
    @Inject
    private Externalizer externalizer;
    @Getter
    private String feedUrl;
    @Getter
    private String publishedDate;


    @PostConstruct
    protected void init() {
        try {
            ValueMap properties = currentPage.getContentResource().adaptTo(ValueMap.class);
            title = StringEscapeUtils.escapeXml(null != currentPage.getTitle() ? currentPage.getTitle() : properties.get(JcrConstants.JCR_TITLE, String.class));
            description = StringEscapeUtils.escapeXml(properties.get(JcrConstants.JCR_DESCRIPTION, String.class));

            link = RssFeedUtils.getExternaliseUrl(currentPage.getPath(), externalizer, resourceResolver);
            feedUrl = link + ".rss.xml";
            linkTag = RssFeedUtils.setLinkElements(link);

            String category = request.getParameter("category") != null ? request.getParameter("category").toLowerCase().replaceAll("\\s", "") : StringUtils.EMPTY;
            entries = new ArrayList<>();
            Iterator<Page> childPages = currentPage.listChildren(new PageFilter(false, false));
            while (childPages.hasNext()) {
                Page childPage = childPages.next();
                ChildPageModel childPageModel = resourceResolver.getResource(childPage.getPath()).adaptTo(ChildPageModel.class);
                if (null != childPageModel) {
                    if (StringUtils.isBlank(category)) entries.add(childPageModel);
                    else {
                        String tags = childPageModel.getTags();
                        if (StringUtils.isNotBlank(tags)) {
                            tags = tags.toLowerCase().replaceAll("\\s", "");
                            List<String> tagsList = Arrays.asList(tags.split(","));
                            String[] categoryList = category.split(",");
                            for (String categoryStr : categoryList) {
                                if (tagsList.contains(StringEscapeUtils.escapeXml(categoryStr))) {
                                    entries.add(childPageModel);
                                    break;
                                }
                            }
                        }
                    }
                }
            }
            publishedDate = RssFeedUtils.getPublishedDate(properties);

        } catch (SlingException e) {
            logger.error("Repository Exception {}", e);
        }
    }
}
//ChildPageModel 
package com.demoproject.aem.core.models;

import com.adobe.cq.export.json.ExporterConstants;
import com.day.cq.commons.Externalizer;
import com.day.cq.commons.jcr.JcrConstants;
import com.demoproject.aem.core.utility.RssFeedUtils;
import lombok.Getter;
import org.apache.commons.lang.StringEscapeUtils;
import org.apache.sling.api.SlingException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ValueMap;
import org.apache.sling.models.annotations.DefaultInjectionStrategy;
import org.apache.sling.models.annotations.Exporter;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.injectorspecific.SlingObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.annotation.PostConstruct;
import javax.inject.Inject;

@Model(adaptables = {
    Resource.class
}, defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL)
@Exporter(name = ExporterConstants.SLING_MODEL_EXPORTER_NAME, extensions = ExporterConstants.SLING_MODEL_EXTENSION)
public class ChildPageModel {
    private static final Logger logger = LoggerFactory.getLogger(ChildPageModel.class);

    @SlingObject
    Resource resource;

    @Getter
    private String title;

    @Getter
    private String link;

    @Getter
    private String linkTag;

    @Getter
    private String feedUrl;

    @Getter
    private String description;

    @Getter
    private String publishedDate;

    @Getter
    private String tags;

    @Inject
    private Externalizer externalizer;

    @PostConstruct
    protected void init() {
        try {
            if (null != resource) {
                String url = resource.getPath();

                ResourceResolver resourceResolver = resource.getResourceResolver();
                link = RssFeedUtils.getExternaliseUrl(url, externalizer, resourceResolver);
                feedUrl = link + ".rss.xml";
                linkTag = RssFeedUtils.setLinkElements(link);

                ValueMap properties = resource.getChild(JcrConstants.JCR_CONTENT).adaptTo(ValueMap.class);
                title = StringEscapeUtils.escapeXml(properties.get(JcrConstants.JCR_TITLE, String.class));
                description = StringEscapeUtils.escapeXml(properties.get(JcrConstants.JCR_DESCRIPTION, String.class));
                publishedDate = RssFeedUtils.getPublishedDate(properties);
                tags = StringEscapeUtils.escapeXml(RssFeedUtils.getPageTags(properties, resourceResolver));

            }
        } catch (SlingException e) {
            logger.error("Error: " + e.getMessage());
        }
    }
}
//RSS Feed Utils 

package com.demoproject.aem.core.utility;

import com.day.cq.commons.Externalizer;
import com.day.cq.commons.jcr.JcrConstants;
import com.day.cq.tagging.Tag;
import com.day.cq.tagging.TagManager;
import com.day.cq.wcm.api.NameConstants;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ValueMap;

import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

/** 
 * @desc RSS Feed Utils 
 */
@Slf4j
public class RssFeedUtils {

    public static final String FORMAT_DATE = "E, dd MMM yyyy HH:mm:ss Z"; // 24-hour clock, matching the RFC 822 pubDate format
    public static final String CONTENT_PATH = "/content/demoproject/us/en";

    public static String getPublishedDate(ValueMap pageProperties) {
        String publishedDate = StringUtils.EMPTY;
        SimpleDateFormat dateFormat = new SimpleDateFormat(FORMAT_DATE);
        Date updatedDateVal = pageProperties.get(JcrConstants.JCR_LASTMODIFIED, pageProperties.get(JcrConstants.JCR_CREATED, Date.class));
        if (null != updatedDateVal) {
            Date replicatedDate = pageProperties.get(NameConstants.PN_PAGE_LAST_REPLICATED, updatedDateVal);
            publishedDate = dateFormat.format(replicatedDate);
        }
        return publishedDate;
    }

    public static String getExternaliseUrl(String pagePath, Externalizer externalizer, ResourceResolver resourceResolver) {
        String url = StringUtils.EMPTY;
        if (StringUtils.isNotBlank(pagePath) && null != externalizer && null != resourceResolver)
            url = externalizer.publishLink(resourceResolver, resourceResolver.map(pagePath)).replace(CONTENT_PATH, "");

        return url;
    }

    public static String setLinkElements(String link) {
        String url = StringUtils.EMPTY;
        if (StringUtils.isNotBlank(link)) {
            url = "<link>" + link + "</link>";
        }
        return url;
    }

    public static String getPageTags(ValueMap properties, ResourceResolver resourceResolver) {
        String tags = StringUtils.EMPTY;
        String[] pageTags = properties.get(NameConstants.PN_TAGS, String[].class);
        if (pageTags != null) {
            List<String> tagList = new ArrayList<>();
            TagManager tagManager = resourceResolver.adaptTo(TagManager.class);
            for (String tagStr: pageTags) {
                Tag tag = tagManager.resolve(tagStr);
                if (tag != null) {
                    tagList.add(tag.getName());
                }
            }
            if (!tagList.isEmpty()) tags = String.join(",", tagList);
        }
        return tags;
    }
}

Dispatcher Changes  

demoproject_rewrites.rules 

In the client project’s rewrites.rules file (/src/conf.d/rewrites), add a rewrite rule for the .rss extension. This rule takes a URL ending with .rss and rewrites it to point to the corresponding rss.xml rendering of the page component, effectively changing the extension from .rss to .rss.xml.

#feed rewrite rule
RewriteRule ^/(.*)\.rss$ /content/demoproject/us/en/$1.rss.xml [PT,L]

100_demoproject_dispatcher_farm.any  

Set the URL parameters that should not be cached for the rss feed. It is recommended that you configure the ignoreUrlParams setting in an allowlist manner. As such, all query parameters are ignored and only known or expected query parameters are exempt (denied) from being ignored.

When a parameter is ignored for a page, the page is cached upon its initial request. As a result, the system subsequently serves requests for the page using the cached version, irrespective of the parameter’s value in the request. Here, we add URL parameters below to serve the content live as required by an external application.

/ignoreUrlParams {
    /0001 { /glob "*" /type "allow" }
    /0002 { /glob "category" /type "deny" }
    /0003 { /glob "pubdate_gt" /type "deny" }
    /0004 { /glob "pubdate_lt" /type "deny" }
}

 

Why is HTL Better?  

We can utilize this approach to produce any XML feed, extending beyond RSS feeds. We have the flexibility to add custom properties to tailor the feed to our specific needs. Plus, we can easily apply filters using query parameters.

 

Big thanks to my director, Grace Siwicki, for her invaluable assistance in brainstorming the implementation and completing this blog work.

How to Create and Use Paginated Reports in Power BI (July 12, 2022) – https://blogs.perficient.com/2022/07/12/how-to-create-and-use-paginated-reports-in-power-bi/

Paginated reports are the SSRS-style reports that have come to Power BI with additional features for building better reports. They are well-designed, highly formatted reports that are precisely sized, so we can print and share them effectively. We can also include images and charts, which makes them well suited for PDF generation.

Power BI Report Builder is the tool used to create paginated reports and publish them to the Power BI Service. We need Power BI Premium capacity for Power BI paginated reports.

How to Create the Paginated Report

Following are the simple steps to create a Paginated Report using Power BI Report Builder:

1. Open Power BI Report Builder:

First, we need to open Power BI Report Builder; you will see the window below. Here we can select the Table Wizard, Chart Wizard, Map Wizard, or a Blank Report. We will select Blank Report.

[Screenshot: Power BI Report Builder start window]
2. Create and Configure Data Source:

After selecting the blank report, create the data source to connect to the dataset you have already created and published in the workspace:

[Screenshot: Adding a data source]

Select the available Power BI dataset and create the data source connection (select the highlighted dataset):

[Screenshot: Selecting the Power BI dataset for the data source connection]

 

3. Create a Dataset for the Report:

After creating the data source, we need to create the dataset for the report. When you click Add Dataset, the Dataset Properties window below appears; here, we give the dataset an appropriate name and select the data source we have already created.
[Screenshot: Dataset Properties window]
Now we need to select the columns for the design view and also get the parameters in the parameters pane:

[Screenshot: Selecting columns and parameters in the Query Designer]

Now click OK, and you will see the query created in the Query Designer. Click OK again, and we can go ahead and build the table on the report page.
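For orientation, the query that the Query Designer generates against a Power BI dataset is written in DAX. It typically looks something like the sketch below; the table, column, and measure names here are invented purely for illustration and are not taken from the report in the screenshots.

// Illustrative DAX only – table, column, and measure names are placeholders
EVALUATE
SUMMARIZECOLUMNS (
    'Date'[Year],
    'Product'[Category],
    "Total Sales", SUM ( 'Sales'[Amount] )
)
ORDER BY 'Date'[Year]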

4. Create the Paginated Report:

Let's create the table from the Table Wizard:
[Screenshot: Insert > Table Wizard]

After clicking Table Wizard, a window appears where you choose an existing dataset (the one created in the previous step). After selecting the dataset, the next step is Arrange Fields. Here we drag the columns into the Values, Row groups, and Column groups panes, and choose how to summarize the metrics, as shown in the image below:

[Screenshot: Arrange Fields step of the Table Wizard]

Click Next and then Finish, and the table will be created on the report page. You can then apply formatting, using the Properties pane, to meet the client's requirements and produce the final report.

[Screenshot: Formatted table on the report page]

5. Output of the Paginated Report:

To see the output of the report, we need to “Run” the Report from the “Home Tab.”
[Screenshot: Running the report from the Home tab]

A paginated report is always saved in the .rdl file format.

After the report is published to a workspace in the Power BI Service, users can view or download it from the Power BI Service.

Why Use Paginated Reports?

  1. Paginated reports are well suited to tabular output such as purchase orders and sales invoices.
  2. Paginated reports support deeper customization, including cascading parameters.
  3. They include a graphical Query Designer for writing and testing queries.
  4. They are optimized for printing and PDF generation.
Learning How to Use the MuleSoft MongoDB Connector (December 7, 2021) – https://blogs.perficient.com/2021/12/07/learning-how-to-use-the-mulesoft-mongodb-connector/

MuleSoft has various pre-built connectors to connect Mule to different databases and third-party software. The Anypoint Connector for MongoDB (MongoDB Connector) is a closed-source connector that provides a connection between the Mule runtime engine and third-party software on a MongoDB server.

The MongoDB Connector is similar to the MuleSoft Database Connector, which provides a single interface for running SQL operations against different databases. By leveraging the MongoDB Connector, we can import, export, back up, analyze, and transform data in MongoDB. The connector provides the easiest way to connect to MongoDB from a Mule flow, using either a connection string or individual connectivity details in the connector configuration.

Below are supported operations within the MongoDB Connector, as shown in the screenshot from the Anypoint Studio Mule palette. I will talk about some of the operations later in a use case discussion.

[Screenshot: MongoDB Connector operations in the Anypoint Studio Mule palette]

Getting Started With the MongoDB Connector

To familiarize yourself with the MongoDB Connector, you can sign up for a free trial of the cloud-hosted MongoDB service (MongoDB Atlas).

Create a free trial account on Atlas with the following steps:

  • Go to https://account.mongodb.com/
  • Sign up for a trial account or sign in if you already have one
  • After verifying your email, you will be prompted to log in and taken to the below page:

[Screenshot: MongoDB Atlas landing page after login]

Click on “create a new project” and name your project:

[Screenshot: Creating and naming a new project]

Once a project is created, create a database:

[Screenshot: Creating a database]

Click on “build database” and choose the shared cluster for your free trial account with your choice of cloud provider and region:

[Screenshot: Choosing the shared cluster, cloud provider, and region]

Then your cluster will be created:

[Screenshot: Cluster created]

After the cluster is created, click on “Connect” to set up the DB user details, and make sure to choose “allow connection from anywhere”:

[Screenshot: Network access – allow connection from anywhere]

Then click on “create a DB user:”

[Screenshot: Creating a DB user]

Next, choose the connection method for a native application and select “Java” as the driver from the drop-down. From here, you will get the connection string to be used in MuleSoft to connect to MongoDB.

[Screenshot: Connection string for the Java driver]

Configuring the MongoDB Connector in Anypoint Studio

In the global configuration for MongoDB, select “Connection String” from the connection drop-down. Then select the “recommended library” option for the MongoDB driver, as in the screenshot below. Finally, update the connection string obtained in the step above with your details (username, password, and the database/collection name to be created) and use it to connect to MongoDB.

For example: mongodb+srv://admin1:admin1@cluster0.ahzus.mongodb.net/customers?retryWrites=true&w=majority

[Screenshot: MongoDB Connector global configuration with the connection string]

Test the connectivity. It is successful if the correct connection string is provided:

[Screenshot: Successful connectivity test]

Now that we’ve walked through how to get started with the MongoDB Connector and configure it in Anypoint Studio, let’s take a look at a use case.

 

Use Case: Importing Data from CSV to MongoDB

In this use case, we poll a CSV file from a directory using the File connector and then transform it, per the mapping, so it can be stored in MongoDB.

First, we check to see if the collection where we need to store data already exists or not. If the collection isn’t present within MongoDB, we create the collection first. If the collection is present, we store the transformed data using Insert Documents.
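For reference, a rough sketch of what such a flow can look like in the Mule configuration XML is shown below. This is only an illustration: the flow name, config references, directory, and field mappings are assumptions, and the MongoDB operations are shown as comments because in this example the exact connector operations were configured visually in Anypoint Studio.

<!-- Illustrative sketch only; names, paths, and mappings are assumptions -->
<flow name="csv-to-mongodb-flow">
    <!-- Poll the inbound directory for new CSV files -->
    <file:listener config-ref="File_Config" directory="/data/inbound" autoDelete="true">
        <scheduling-strategy>
            <fixed-frequency frequency="60000"/>
        </scheduling-strategy>
    </file:listener>

    <!-- Map each CSV row to the JSON document to be stored -->
    <ee:transform>
        <ee:message>
            <ee:set-payload><![CDATA[%dw 2.0
input payload application/csv
output application/json
---
payload map (row) -> {
    customerId: row.customerId,
    name: row.name,
    email: row.email
}]]></ee:set-payload>
        </ee:message>
    </ee:transform>

    <!-- MongoDB Connector: check whether the target collection exists,
         storing the result in a variable such as vars.collectionExists -->

    <choice>
        <when expression="#[not vars.collectionExists]">
            <!-- MongoDB Connector: Create collection (e.g. "customers") -->
        </when>
    </choice>

    <!-- MongoDB Connector: Insert documents into the collection -->
</flow>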

To check if the collection doesn't exist: [Screenshot: collection existence check in the flow]

If we need to insert multiple records into our collection:

[Screenshot: Insert Documents operation for inserting multiple records]

Sample input.csv:

[Screenshot: Sample input.csv]

Triggering the flow:

[Screenshot: Triggering the flow]

Data imported from CSV to MongoDB:

[Screenshot: Data imported from CSV into MongoDB]

To learn more about the MongoDB Connector, click here.

Perficient + MuleSoft

At Perficient, we excel in tactical MuleSoft implementations by helping you address the full spectrum of challenges with lasting solutions, rather than relying on band-aid fixes. The end result is an intelligent, multifunctional resource that reduces the costs over time and equips your organization to proactively prepare for future integration demands.

We’re a Premier MuleSoft partner with more than 15 years of integration expertise across industries including financial services, healthcare, retail, and more. After MuleSoft’s acquisition by Salesforce, our continued innovation in the integration space offers more customized experiences on software developed by MuleSoft. We combine the MuleSoft product suite with our connectivity expertise to provide comprehensive solutions both on-premises and in the cloud.

Contact us today to learn how we can help you implement MuleSoft to solve your enterprise’s integration challenges.

 

An Intro to Caching in MuleSoft 4 (July 28, 2021) – https://blogs.perficient.com/2021/07/28/an-intro-to-caching-in-mulesoft-4/

Caching is a technique for storing frequently used data in memory, the file system, or a database in order to improve processing times. This strategy is most useful when data does not change frequently or is static in nature. In general, some benefits of caching include improved responsiveness, increased performance, and decreased network costs.

The Cache scope in MuleSoft is used to store reusable and frequently used data. There are different types of caching available, which will be discussed later. We can use the caching mechanism to improve performance by speeding up processing times and easing the load on Mule instances.

Here is a snippet that shows where data from a database call is stored in an internal in-memory cache via a caching strategy. When we retrieve the same data multiple times, it is served from the cache rather than fetched from the database on subsequent calls. This alleviates the load on the end systems and achieves the desired output.

 

[Screenshot: Cache scope wrapping a database call in a Mule flow]

 

The Caching Process

How precisely does the Cache scope in MuleSoft 4 work?

A flow is enclosed in a Cache scope, so whenever a request comes, it will perform the following actions:

  • Checks whether the payload is repeatable (i.e., a repeatable stream). Only repeatable payloads enter the Cache scope; non-repeatable payloads go through normal flow processing.
  • Creates the cache key. By default, a SHA-256 key generator computes a SHA-256 digest of the message, and that digest becomes the key.
  • Checks whether the key is present in the cache. The cache can be local in-memory or an Object Store (a persistent Object Store, or Object Store v2). If the key is not found, it is a cache miss, and the processors inside the scope are executed.

If the key is found, it is a cache hit, and the cached value is returned for use by the downstream processors; the processors inside the Cache scope are not executed. On a cache miss, before leaving the Cache scope, the result is stored in the cache as a key-value pair.

 

Caching Configurations

A caching strategy is configured either from the Cache scope property panel or from the Global Elements configuration in Anypoint Studio. There are two main caching strategies in Mule:

Default Caching: If you do not specify a caching strategy, the scope uses the default one, which provides a basic in-memory caching mechanism. Because everything is cached in volatile RAM, it is non-persistent, i.e., if you restart your application, the cached data will be lost. If you want to store a huge static payload, you must use a custom caching strategy.

 

[Screenshot: Default caching strategy in the Cache scope properties]

 

Reference to a Strategy: You can create a custom caching strategy using this option. Here you can back the cache with an Object Store and define the cache size, time to live, and other configurations as your requirements dictate.

There are a few steps to configure a Cache Strategy:

  1. Open the Caching Strategy Configuration window.
  2. Define the name of the caching strategy.
  3. Define the Object Store by selecting between Edit Inline and Global Reference.
  4. Select a component for producing a key utilized for storing events inside the caching strategy.
  5. Open the Advanced tab in the property window to configure the advanced setting.

 

[Screenshot: Custom caching strategy configuration]
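For reference, the global-element XML behind a custom caching strategy looks roughly like the sketch below. Treat it as an illustration only: the strategy name, key expression, Object Store settings, and the database call inside the scope are all assumptions, and attribute names can vary slightly between connector and runtime versions, so the Studio property panel remains the source of truth.

<!-- Illustrative sketch only; names and values are assumptions -->
<ee:object-store-caching-strategy name="Caching_Strategy"
        keyGenerationExpression="#[attributes.queryParams.customerId]">
    <!-- Backing Object Store: in-memory here; set persistent="true" to survive restarts -->
    <os:private-object-store alias="customerCache"
            maxEntries="1000"
            entryTtl="10"
            entryTtlUnit="MINUTES"
            persistent="false"/>
</ee:object-store-caching-strategy>

<flow name="get-customer-flow">
    <!-- Processors inside the Cache scope run only on a cache miss -->
    <ee:cache cachingStrategy-ref="Caching_Strategy">
        <db:select config-ref="Database_Config">
            <db:sql>SELECT * FROM customers WHERE id = :id</db:sql>
            <db:input-parameters><![CDATA[#[{ id: attributes.queryParams.customerId }]]]></db:input-parameters>
        </db:select>
    </ee:cache>
</flow>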

 

There are two different ways to store data in the cache: a non-persistent in-memory Object Store, or a persistent Object Store that stores its entries in the file system.

Also, to access the Object Store externally from another system or application as a REST API, we can enable Object Store v2 during CloudHub deployment. Object Store v2 can also be used if we want consistency or synchronization of the cache across a cluster of nodes.

In conclusion, caching in MuleSoft helps process data faster. It’s effective for two types of tasks: 1) processing repeated requests for the same information; and 2) processing requests for information that include large repeatable streams. For example, the next time the Cache scope receives a duplicate message payload, it can send the cached response instead of starting the previously time-consuming process.

To learn more, review MuleSoft’s Cache scope documentation or contact us to discuss your enterprise’s integration strategy.

 

Perficient + MuleSoft

At Perficient, we excel in tactical MuleSoft implementations by helping you address the full spectrum of challenges with lasting solutions, rather than relying on band-aid fixes. The end result is an intelligent, multifunctional resource that reduces costs over time and equips your organization to proactively prepare for future integration demands.

We’re a Premier MuleSoft partner with more than 15 years of integration expertise across industries including financial services, healthcare, retail, and more. After MuleSoft’s acquisition by Salesforce, our continued innovation in the integration space offers more customized experiences on software developed by MuleSoft. We combine the MuleSoft product suite with our connectivity expertise to provide comprehensive solutions both on-premises and in the cloud.

Contact us today to learn how we can help you implement MuleSoft to solve your enterprise’s integration challenges.

 

References 

https://docs.mulesoft.com/mule-runtime/4.3/cache-scope

 

 

API Security: Common Threats and Considerations (June 17, 2015) – https://blogs.perficient.com/2015/06/17/api-security/


Common API Threats: spoofing, tampering, repudiation, denial of service, unauthorized access, confidentiality violation

API Security Considerations: 

Identification – Know Your Consumer
The common approach to implementing this is using API keys, which are nothing but randomly generated values that will vary for each consumer.
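As a minimal illustration of the idea (not tied to any particular gateway product; the header name and key values below are made up), identification by API key boils down to a simple lookup:

// Minimal sketch of API-key identification; keys and header name are illustrative.
// In practice, an API gateway usually issues, stores, and validates these keys for you.
var issuedKeys = {
    "k1-9f27c4d8": "mobile-app",
    "k2-51b3a0e7": "partner-portal"
};

function identifyConsumer(requestHeaders) {
    var key = requestHeaders["x-api-key"];                    // consumer sends its key on every call
    return key && issuedKeys[key] ? issuedKeys[key] : null;   // null means unknown consumer
}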

Authentication – Is the Consumer Authentic?

Username/password over SSL/TLS: the API consumer provides a username and password to prove their authenticity.

OAuth – Provides additional security through token-based access. The token can have attributes like expiration, meaning a user can perform certain activities for a certain period of time and must later renew the token or obtain a new one, depending on what strategy is being implemented.

SAML – Another mechanism for authentication. Security Assertion Markup Language (SAML) is an XML standard for exchanging assertions. Typically, the identity provider validates the user's identity and inserts appropriate assertions to describe things like which applications and resources the user has access to, their roles, etc.

OpenID is another solution that provides functionality similar to OAuth and SAML.

Authorization – Is consumer authorized to perform a certain action?

Apart from these basic considerations, one might also want to consider the following:

JSON attacks: Since most APIs accept or return JSON, the payload can be intercepted in the middle. We can have an API gateway take care of this for all requests and responses.

Data protection: Depending on the information being sent or received, we might need to encrypt certain data elements or mask data so that it is difficult to guess or figure out what they are and what they really mean; for example, PHI or PCI information.

Richer, More Personalized Customer Experiences for an API Economy (June 2, 2015) – https://blogs.perficient.com/2015/06/02/richer-more-personalized-customer-experiences-for-an-api-economy/
[Image: Open API Economy – Source: Point.io]

At the IBM Digital Experience 2015 Conference, Ajay Kadakia with IBM talked about how the API economy is affecting legacy IT companies versus the newer cloud-based companies. The challenge is how to provide more agile, market reactive content off the legacy systems when competing against seemingly more agile, cloud based systems.

Ajay talked about the digital disruption that is already underway:

  • 90% of data has been created in the last 2 years
  • 4x increase in cloud investment vs 2013 (just 2 years)
  • 100% of LOB apps will be mobile first by 2017
  • 75B internet connected devices by 2020

Customer centricity is the only differentiator in today’s world, so experience really matters. But customer choice has exploded in the ways they can experience our brand.  Previously a website was the key method for customer self service.  Now we have devices such as mobile apps, kiosks, internet TV, connected appliances, connected cars, etc.

The only way to reach out to all these channels is to build robust APIs. To succeed, you must include a strategy for API creation and consumption in your overall business strategy. And this requires support at every level of the organization.

So what is an API in the context of an API economy? An API is like a Lego building block that can be combined with other APIs to build more sophisticated services.  APIs are the fast path to new business opportunities.  At the end of 2014, over 75% of the Fortune 1000 had public APIs.  Almost every bank and financial services company has APIs for its partners.

A successful API initiative requires end-to-end capabilities. You need to know who is using the API, figure out how (or whether) to charge for its use, and manage that use, which can require some IT infrastructure.

Entry points into the API Economy include:

  • Build – API Design and Implementation
  • Manage – API Lifecycle Management
  • Secure – Security, Metering and Control
  • Monetize – Analytics and Monetization

So how do you get started?  First accelerate your agility.  If you can’t be agile, you won’t be fast enough to meet customer and market demand.  Second you need a strategy to identify business goals, assets and revenue strategies.  Finally you need to monetize the API.

What can be APIs? Here are some examples of business assets that could be exposed through APIs:

  • Product catalogs
  • Customer records
  • ATM/Retail Locations
  • Payment Services
  • Shipping and fulfillment
  • Job Openings
  • Risk Profiles
  • Transaction data

You need to do a thorough asset inventory to identify the potential assets that you have that can become APIs.  Some APIs could be monetized, while others may be more useful to create brand loyalty. For each API you need to determine the business goals and success criteria.

There are several monetization models to consider:

  • For Free – can drive adoption for typically low valued assets or brand loyalty
  • Developer pays – high value assets (like Amazon Web Services) could get paid by developers
  • Developer gets paid – provides incentives for developers to use your API for things like Ad Placement, etc
  • Indirect – includes other models

IBM was late to the API economy, but has quickly caught up through various acquisitions over the past few years. IBM Watson and the new IBM/Apple apps are built on the IBM API platforms.

How to Implement Lighter Weight Portals, Part 3: Knockout Portlet (September 18, 2014) – https://blogs.perficient.com/2014/09/18/how-to-implement-lighter-weight-portals-part-3-knockout-portlet/

In this series, I’m showing how Portals don’t have to be heavyweight.  In Part 1, I wrote about how to make the infrastructure lighter by using cloud or IBM’s Pure System.  In Part 2, I introduced the concept of using IBM’s Web Content Manager system to build very simple portlets.

Now in this final installment, I am going to extend the concepts introduced in Part 2 to show how we can build more complex portlets, but still keep everything lightweight.  To review quickly, in Part 2, I avoided the build and deploy cycle of building Java portlets by using the built-in content management system – WCM.  In that example, I used WCM to display a Reuter’s news feed from a simple Javascript widget supplied by Reuters.

[Screenshot: Final Appointments Portlet in Portal 8]

In this blog, I want to implement a more complex portlet using Knockout, which is a popular Javascript framework.  My example is to display in a portlet a list of my Doctor Appointments pulled from a REST service.  Our goal is still to keep this lightweight, so I shouldn’t see a lot of code.  The first screen shot shows you what the final version looks like in Portal 8.

A typical web page or application consists of several sections:

  • CSS
  • Links to external files
  • HTML body
  • Javascript

In WCM, we can create an authoring template that contains four HTML fields, one for each of the sections described above. The authoring template also has a workflow associated with it so we can control the publishing of our code.

[Screenshot: Presentation Template]

We also need a corresponding presentation template to display the page. In the second screen shot I show the presentation template I built.  The template includes the four fields (Element tags).  And to help the author out, I included the <style> and <script> tags in the right places. As a reminder, the presentation template will display the raw HTML to the browser.

Just as I did in part 2, I mapped the presentation template to the authoring template in my content site area.  This way whenever Portal displays my code, it formats it correctly via the presentation template.

Now on to the actual code using Knockout.  To enter the code for my portlet, I navigate to the site area and create a new content item using the authoring template I built.  In this example, I called the authoring template “ComplexPortlet”.  The third screen shot shows the content item I created with the four fields collapsed.

[Screenshot: Content item created from the authoring template]

In the CSS field, I entered the styles I need for alternating row colors and for formatting the table. If this were a real example, I might take the CSS and include it in my theme, or I might put it into a WCM component that I could reference here.  Both of these options would make the CSS reusable.

[Screenshot: CSS field]
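Since the screenshot does not reproduce well here, a rough reconstruction of that CSS might look like the following; the class name and colors are placeholders of my own, not the exact values from the original portlet:

/* Illustrative reconstruction; selectors and values are placeholders */
.appointments-table {
    width: 100%;
    border-collapse: collapse;
}
.appointments-table th,
.appointments-table td {
    padding: 4px 8px;
    text-align: left;
    border-bottom: 1px solid #ccc;
}
/* Alternating row colors */
.appointments-table tbody tr:nth-child(even) {
    background-color: #f0f0f0;
}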

To use Knockout, I need to include its Javascript code in my portlet.  I could copy it into a separate field, add it to my theme, or include it in a reusable WCM component.  But I decided to just link to a version of Knockout stored on a public content distribution network.  By doing this, I don’t have to maintain any Knockout libraries.

I put the link to Knockout in the HTMLHead field. I could add other items here that would normally go in the <head> section of a page.

[Screenshot: Including the Knockout code in the HTMLHead field]

In the body of my portlet, I want to create a table using a Knockout model for data.  This makes my portlet lightweight because Knockout takes care of the heavy lifting for me.

[Screenshot: HTMLBody field – table binding with Knockout]

You can see in the HTMLBody screenshot that the HTML code is a very simple table that "binds" to an AppointmentModel, which I define in the Javascript section below.  The TBODY tag uses Knockout's data-bind attribute to tell KO which object to use for the data.  The rest of the code you can't see here is just a couple more <td>'s and then the closing tags.
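A reconstruction of that HTMLBody markup would look roughly like this; the column headings and model property names are assumptions, since the screenshot only shows part of the table:

<!-- Illustrative reconstruction; property names on the model are assumptions -->
<table class="appointments-table">
    <thead>
        <tr><th>Date</th><th>Time</th><th>Doctor</th><th>Location</th></tr>
    </thead>
    <!-- Knockout repeats the row template for each entry in appointments -->
    <tbody data-bind="foreach: appointments">
        <tr>
            <td data-bind="text: date"></td>
            <td data-bind="text: time"></td>
            <td data-bind="text: doctor"></td>
            <td data-bind="text: location"></td>
        </tr>
    </tbody>
</table>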

Finally we get to the JavascriptCode field on my form.  This is the field we use to get the AppointmentModel data and then instantiate Knockout to do its job.  In the JavascriptCode screen shot, you can see there isn’t a whole lot of code.  It’s very lightweight!

[Screenshot: JavascriptCode field]

In the first line I get the data for the table.  Since this is an example, I didn’t want to create a REST service.  So I just included some JSON data that would normally be returned from a service. If I had the REST service available, I could use Knockout’s REST interface to retrieve the data.   The JSON data contains information about two appointments.

The final line runs Knockout’s applyBindings code when the page finishes loading.  This function takes the AppointmentModel object we created from JSON and Knockout fills in the table correctly.
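Put together, and assuming Knockout has already been loaded via the HTMLHead field, the JavascriptCode field amounts to something like the sketch below. The inline JSON stands in for what a REST call would return, and the property names match the hypothetical table bindings above:

// Illustrative reconstruction; in a real portlet this data would come from a REST service
var appointmentData = [
    { "date": "2014-10-02", "time": "9:00 AM", "doctor": "Dr. Smith", "location": "Main Clinic" },
    { "date": "2014-10-16", "time": "1:30 PM", "doctor": "Dr. Jones", "location": "West Office" }
];

// Simple view model exposing the appointments array used by data-bind="foreach: appointments"
function AppointmentModel(data) {
    this.appointments = ko.observableArray(data);
}

// Apply the Knockout bindings once the page has finished loading
window.addEventListener("load", function () {
    ko.applyBindings(new AppointmentModel(appointmentData));
});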

There you have it: a pretty nice looking portlet created through WCM that uses sophisticated Knockout features.  Since the WCM portlet is already built and deployed, I avoided those steps and got my business application running quickly.

You can get even fancier in WCM by taking advantage of the content management system.  For example, I made the author manually enter the <script> link to load Knockout.  I could create a WCM Component with that script tag in it and just include that component in the presentation template.  Or I could ask the author which Javascript Framework they want to use in a field and then insert the right tag based on their selection.  WCM offers a lot of other possible enhancements too.

You can use other javascript libraries besides Knockout too.  I created another sample using AngularJS and you could try out others.

IBM has built a Script Portlet that works very similar to what I’ve shown here.  The script portlet has some added features like an editor that will format your HTML and Javascript as you work on it. The HTML Field that I used above does not provide any formatting.  One limitation to the current Script Portlet is that you can’t control where your code is stored in WCM, though.  In my example, you can put the code anywhere because it is just plain content.

I have one caution to bring up to you.  Knockout and Dojo don’t get along.  If you try to display this portlet on a portal page that uses Dojo, your table will be empty.  In version 8.5, IBM did away with having Dojo required in the default theme. In 8.5 you can pick the standard portal theme and the ‘Lightweight” style and this Knockout example works great.  In version 7 and 8, you will likely have to create a custom theme that eliminates Dojo to get this to work correctly.

IBM Digital Experience Conf: Developing Portlets Using JQuery (July 22, 2014) – https://blogs.perficient.com/2014/07/22/ibm-digital-experience-conf-developing-portlets-using-jquery/

jQuery is one of the most pervasive scripting libraries in use today. The session “Developing Portlets Using Javascript and JQuery for Engaging Digital Experiences” by Stephan Hesmer, Web 2.0 Architect, IBM and  Jaspreet Singh, Rational Tools Architect, IBM provided good insight as to how to leverage jQuery in IBM WebSphere Portal.

First, a couple of key statistics to indicate why this is important and cannot be ignored:

  • 57.5% of websites use jQuery.
  • jQuery has a 93% marketshare.

WebSphere Portal still includes Dojo, but it isn't required for view mode.  It is required in edit mode, however, especially for in-place editing.  One key change in Portal 8.5, however, is that in edit mode the edit panel is now isolated from pages, so it will not conflict with the page.

The session was primarily demonstration based but some key items discussed included:

  • jQuery is simple to include in a portal theme.
    • Define the jQuery plugin as a module.
    • Simply create a new folder with the plugin name.
    • Update the prereqs.properties file.
  • jQuery mobile is easy to use but it will take over the entire page.
  • The OOB scripting portlet makes writing scripts simple.  There is no need to code and deploy portlets if you are creating a pure script portlet.

jQuery tooling is available in Rational Application Developer

Many tools are available to help the jQuery developer be more efficient.

  • Auto-generated code to get started with jQuery.
  • Content assist in portlet JSPs.
  • Drag and drop of jQuery widgets.
  • Visualization of jQuery mobile widgets in the Rich Page Editor.
  • jQuery mobile page generation using the Mobile Navigation View.
  • Properties view to help configure jQuery widgets.

 IBM Script Portlet

The script portlet has a couple of key goals:

  • Enable line of business to have autonomy and not be dependent on central IT.
  • Be able to write portlets without knowing Java.

Some of the key capabilities are:

  • All code (HTML/JS/CSS) is stored in WCM.
  • The script editor supports syntax highlighting and auto-indent.
  • Script based applications can be zipped up and imported or exported.
  • Data access is done with Ajax/REST services using JSON.
  • Portlet capabilities (preferences, render parameters etc) can be accessed.

IBM has put an incredible amount of innovation and flexibility in its support for scripting in recent portal releases and updates.  This really opens up the flexibility of the platform and widens the available developer base.

IBM Digital Experience Conf: IBM Web Content Manager Patterns (July 21, 2014) – https://blogs.perficient.com/2014/07/21/ibm-digital-experience-conf-ibm-web-content-manager-patterns/

Eric Morentin and Nick Baldwin spoke about WCM Patterns that should be used in content management development in IBM Digital Experience.  Patterns of course are a “canned” way or even best practice for implementing solutions.  There are four themes of patterns they talked about:

  1. Better content / component model
    • There are different types of content, and Web Content Manager builds a content page by pulling in those various types.  Types can include things like slide shows, lists, blocks, highlights, teasers, etc.
    • A good first pattern is the List Content Component. Use a WCM Component to build the list.  The end user only has to select what list to display and perhaps customize the query to define the list.  Within content manager, lists are composed of Navigators and Presentations.  The navigator component is the query tool to select items for the list and the presentation component is how you display the results.
    • In general, then a good content/component model will let you create special purpose components  and then combine them into business level tools that the content authors can easily incorporate onto a page. Special purpose components such as lists, blocks, carousel are higher-level components than what come out of the box with WCM, but are built-up using those out of the box components.
    • A slideshow content component would consist of the same List Content Component pattern, but adds a Javascript plugin component to control the display of the slide show.
  2. More reuse
    • Build a library of standard components that can be reused.  In IBM’s Content Template Catalog, they have many reusable components built on component elements like field design, fragments, inline editing controls, etc.
    • You could have reusable component headers, designs and footers that get referenced by the higher-level components like the Slideshow mentioned above.
    • As an example, in the header, you could have common tools like the inline edit code.  This same header can then be used on all your components so you can manage or change the inline edit code in one place.
    • There are also good patterns and tools available like SASS – Syntactically Awesome Style Sheets to help you with creating reusable CSS.
  3. Better site model
    • Sites connect pages and content.  Pages provide the navigation model in portal.
    • The Page Content Structure pattern shows how you structure a site.  The content site contains just content.  There is a content item created for each “component”.  Teasers live in their site.  All these sites can roll into a common site based on the page.
    • This results in a lot of site areas.
  4. Split content, design, navigation, configuration and code or separation of concerns.
    • The component model pattern helps with this concept.
    • You should split design libraries from content libraries.
    • They suggest a Design library, a Content Library and a Process Library.  The process library and design libraries can be referenced from the various sites.

Other best practices/patterns:

  • Workflows can also benefit from good patterns.  One pattern is to use custom workflow actions to perform dynamic tasks such as picking the appropriate approvers based on an author’s business unit.
  • For Access Control, don’t explicitly define all access rights; instead use inheritance whenever possible. In 8.5, reviewer and draft creator (replacing Approver) can be inherited. Explicit access control also impacts performance.
  • Don’t have content items with 40+ fields.  Look for the ability to use custom fields to merge

Common Pitfalls

  • In place edits in non-projects – consider using a plugin to hide in line editing if no project is selected.
  • Multi Language – enable this upfront rather than wait.  Even with just two languages, use the MLS plug-in

Eric and Nick used the IBM Content Template Catalog as examples of patterns that you can implement.  They made the point over and over again that CTC is a set of examples, so there are probably more components in there than you may actually ever need.  You should take the ideas in CTC and make your own components based on the patterns. You should not really expect to install and use CTC right out of the box.

 

Google: Reasons Why Nobody Uses Your App, Your Site, Your… (July 17, 2014) – https://blogs.perficient.com/2014/07/17/google-reasons-why-nobody-uses-your-app-your-site-your/

I came across the article Google: Reasons Why Nobody Uses Your App in my favorite iPhone app Zite.  The article is about a presentation given by Tomer Sharon, a user researcher at Google, at Google’s I/O Conference. I embedded the video here for you to view.

Tomer identifies reasons why nobody uses your app.  I want to extend this to your web site, your portal, or whatever because these six reasons apply beyond an app.

I’ll summarize the reasons below, but there were two reasons that really caught my attention because they are spot on with my experience consulting with many, many companies over the past 18 years.

The first reason that caught my eye was "You didn't test your riskiest assumption."  Many times clients look to companies like Perficient to reduce risks in their projects.  We have deep expertise in a product they want to implement or build upon.  But we don't always have expertise in the exact problem that is the riskiest.  When we don't have that expertise, our value can be in how we approach the problem and how we draw on experience in similar areas.  However, too often clients don't want to test their riskiest assumptions first, but instead want to dive headlong into a large project.  Part of the reason is that they can only get funding one time – so let's ask for the most we can get and then start moving.  Another reason is that spending on these kinds of projects – experiments, proofs of concept (POCs), etc. – is viewed as wasting money.  But getting a solution to the trickiest part of your project early on is absolutely critical to overall success.

The second reason that caught my attention was "You listened to users instead of watching them."  Companies have spent boat loads of money gathering requirements by asking users what they want in a system.  Users are more than willing to talk about what they would do with a new system.  But too often what a user says they will do doesn't match what they really will do.  In the video, Tomer talks about a UK research project where the researchers asked people whether they washed their hands after using the restroom.  99% said of course they did.  When the researchers put equipment into the restroom to monitor hand washing, surprise, surprise, less than 80% actually washed their hands.  So when building systems, it is important to get something built quickly – a prototype or POC – and observe how people actually use the system.

Here are the reasons why people don’t use your app, your web site, or whatever. I encourage you to watch the video to get all the details.

  1. You didn't understand the problem you were solving
  2. You asked your friends (or co-workers) what they thought
  3. You listened to users instead of watching them
  4. You didn’t test your riskiest assumption(s)
  5. You had a “Bob the Builder” mentality

Let me know what you think or if you have other advice.

 

Upcoming Webinar: Going Mobile with Your Liferay Portal (June 6, 2014) – https://blogs.perficient.com/2014/06/06/upcoming-webinar-going-mobile-with-your-liferay-portal/

Next week, on June 12 at 1 pm CDT, I will be presenting a free webinar on Going Mobile with Liferay Portal.  Below is a description of the webinar and a link to register.  If you have Liferay Portal or are considering it, you will want to see what your options are for making sure that your mobile experience is a pleasant one.

Going Mobile with Your Liferay Portal

Mobile technology is expanding, and many marketing and IT organizations are working to catch up with their customers’ mobile demands. Customers expect to download your app, login, submit their order, deposit a check or even schedule their yoga sessions — all while picking their kids up after school or relaxing in the evenings.

The consumer-driven nature of mobile leaves many companies struggling to develop, enhance and provide the functionality needed to compete in today’s environment. Liferay Portal is one of the most aggressive open source portals available.

In this webinar, we will:

  • Review top mobile developments
  • Demonstrate why Liferay is a good open source option for portal development
  • Identify the options available to bring your Liferay portal to life on mobile devices
  • Review best practices for creating, supporting and deploying a full-mobile strategy

Click this link to register: Going Mobile with Your Liferay Portal

 

WebSphere Portal v8.5 First Look: Install (May 28, 2014) – https://blogs.perficient.com/2014/05/28/websphere-portal-v8-5-first-look-install/

IBM announced the release of IBM Digital Experience Suite 8.5 earlier this month. Today, I had the chance to download the software images, and I am writing this as I install WebSphere Portal v8.5 Extend edition on Windows 7. I went ahead with the Extend edition because I wanted to get hold of all the features that WP has to offer. 

Downloading the Installables
IBM made it easy for me to search for the WebSphere Portal v8.5 installables and find all relevant e-Assemblies. The only thing I found slightly irritating is that the relevant WebSphere Portal v8.5 e-Assembly was right at the bottom of the page. No worries – a quick browser text search got me to the right e-Assembly.
Expanding the e-Assembly, you can immediately see that IBM has changed the packaging a little bit. The e-Assembly only has WebSphere Portal images. In the past, you would have to wade through a whole list of other supporting software components (TDS, DB2, etc.), which has confused users (both new and old). No longer the case this time. The right step towards a simpler “Digital Experience” perhaps? Excellent!

 

Note:
  1. You will have to download the image for WebSphere SDK Java Edition v7.0.6.1. I don't think I have downloaded this in the past, but this time around I had to download it (even though it says "optional" during the installation).
  2. No support for 32-bit Windows architecture (I found this out the hard way)
  3. The remote search server is truly optional (and is not required especially for a local install)
As with past installations of WP, I unzipped the downloaded zip files, taking care to ensure that I unzipped all the files into a single folder. The total size of the downloaded zip files and the unzipped images together is about 19GB. Simple enough so far.

Installation Steps: Highlights
I don’t want to go into the details of how to install WebSphere Portal v8.5. IBM already has some excellent documentation available here. System requirements for v8.5 can be found here. I only illustrate the highlights of the installation.
  1. If you have a previous version of IBM Installation Manager, ensure that it is the 64-bit edition. I ran into issues during the installation (perhaps due to the fact that my old Installation Manager was 32-bit).
  2. As I said earlier, I was forced to install WebSphere SDK Java Edition v7.0.6.1 (even though it says optional). Without this I was unable to proceed with the installation.
  3. I also noticed that the default option for the installation directories is no longer in "Program Files". It has been changed to a location under the Windows user's directory.
  4. The installation took surprisingly longer than I had expected. Approximate run time (after Installation Manager is installed) was about 3-4 hours.
  5. Installation went smoothly, and I was successful in logging in and accessing the portal. No change to the default port – it is still 10039.

The next step is to look at some of the new features in WebSphere Portal v8.5. You can find a complete list of all features in WebSphere Portal v8.5. My colleague Mark Polly wrote an interesting blog on features removed from WebSphere Portal v8.5. I encourage you to look at that post as well.

If you like this post – follow us on Twitter @Perficient_IBM and like us on Facebook here.