BI Articles / Blogs / Perficient
https://blogs.perficient.com/tag/bi/

Using AI to Compare Retail Product Performance
https://blogs.perficient.com/2025/06/30/using-ai-to-compare-retail-product-performance/ (Mon, 30 Jun 2025)

AI this, AI that. It seems like everyone is trying to shoehorn AI into everything even if it doesn’t make sense. Many of the use cases I come across online are either not a fit for AI or could be easily done without it. However, below I explore a use case that is not only a good fit, but also very much accelerated by the use of AI.

The Use Case

In the retail world, sometimes you have products that don’t seem to sell well even though they might be very similar to another product that does. Being able to group these products and analyze them as a cohort is the first useful step in understanding why.

The Data and Toolset

For this particular exercise I will be using a retail sales dataset from Zara that I got from Kaggle. It contains information about sales as well as the description of the items.

The tools I will be using are:

    • Python
        • Pandas
        • Langchain

High-level actions

I spend a lot of my time designing solutions, and one thing I've learned is that creating a high-level workflow is crucial in the early stages of solutioning. It allows for quick critique, communication, and change, if needed. This particular solution is not very complex; nevertheless, below are the high-level actions we will be performing.

  1. Load the CSV data into memory using Pandas
  2. Create a Vector Store to store our embeddings.
    1. Embed the description of the products
  3. Modify the Pandas dataframe to accommodate the results we want to get.
  4. Create a template that will be sent to the LLM for analysis
  5. Process each product on its own
    1. Get a list of comparable products based on the description. (This is where we leverage the LLM)
    2. Capture comparable products
    3. Rank the comparable products based on sales volume
  6. Output the data to a new CSV
  7. Load the CSV into Power BI for visualization
    1. Add thresholding and filters.

The Code

All of the code for this exercise can be found here
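For readers who want a feel for steps 1 and 2 before opening the repository, here is a minimal sketch of loading the CSV and embedding the product descriptions into a vector store. It is not the exact code from the repo: the file name, column names, and the choice of OpenAI embeddings with a FAISS store are assumptions for illustration.

```python
# Sketch of steps 1-3 (illustrative; file/column names and the embedding
# model + FAISS store are assumptions, not the author's exact setup).
import pandas as pd
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# 1. Load the CSV data into memory using Pandas
df = pd.read_csv("zara_sales.csv")              # assumed file name
df = df.dropna(subset=["description"])          # assumed column name

# 2. Embed each product description into a vector store, keeping the
#    product id and sales volume as metadata so matches can be traced back.
store = FAISS.from_texts(
    texts=df["description"].tolist(),
    embedding=OpenAIEmbeddings(),
    metadatas=df[["Product ID", "Sales Volume"]].to_dict("records"),
)

# 3. Add empty result columns to the dataframe for the outputs we want.
for col in ["Comparable Product 1", "Comparable Product 2", "Group Ranking"]:
    df[col] = None
```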

The Template

Creating a template to send to the LLM is crucial. You can play around with it to see what works best and modify it to fit your scenario. What I used was this:

 
template = """<br>    You are an expert business analyst that specializes in retail sales analysis.<br>    The data you need is provided below. It is in dictionary format including:<br>    "Product Position": Where the product is positioned within the store,<br>    "Sales Volume": How many units of a given product were sold,<br>    "Product Category": The category for the product,<br>    "Promotion": Whether or not the product was sold during a promotion.<br>    There is additional information such as the name of the product, price, description, and more.<br>    Here is all the data you need to answer questions: {data}<br>    Here is the question to answer: {question}<br>    When referencing products, add a list of the Product IDs at the end of your response in the following format: 'product_ids = [<id1>, <id2>, ... ]'.<br>"""

When we iterate, we will use the following as the question:

 
question = f"Look for 5 products that loosely match this description: {product['description']}?"

The output

Once Python does its thing and iterates over all the products, we get something like this:

| Product ID | Product Name | Product Description | Sales Volume | Comparable Product 1 | Comparable Product 2 | Group Ranking |
|---|---|---|---|---|---|---|
| 185102 | BASIC PUFFER JACKET | Puffer jacket made of tear-resistant… | 2823 | 133100 | 128179 | 1 |
| 187234 | STRETCH POCKET OVERSHIRT | Overshirt made of stretchy fabric… | 2575 | 134104 | 182306 | 0.75 |

Power BI

We then load the data into Power BI to visualize it better. This allows us not only to analyze the data using filtering and conditional formatting, but also to explore it even further with Copilot.

Look at the screenshot below. I've initially set up conditional formatting so that all the products that rank low within their group are highlighted.

I then used Copilot to ask how all of these relate to each other. It was quick to point out that all of them were jackets.

Pbi Copilot

This arms us with enough information to go down a narrower search to figure out why the products are not performing. Some other questions we could ask are:

  1. Is this data seasonal and only includes summer sales?
  2. How long have these jackets been on sale?
  3. Are they all sold within a specific region or across all the stores?
  4. etc.

Conclusion

Yes, there are many, many use cases that don't make sense for AI; however, there are many that do! I hope that what you just read sparks some creativity in how you can use AI to further analyze data. The one thing to remember is that in order for AI to work as it should, it needs contextual information about the data. That can be accomplished via semantic layers. To learn more, go to my post on semantic layers.

Do you have a business problem and need to talk to an expert about how to go about it? Are you unsure how AI can help? Reach out and we can talk about it!

Data Virtualization with Oracle Enterprise Semantic Models
https://blogs.perficient.com/2024/02/22/data-virtualization-with-oracle-enterprise-semantic-models/ (Thu, 22 Feb 2024)

A common symptom of organizations operating at suboptimal performance is the prevalent challenge of dealing with data fragmentation. The fact that enterprise data is siloed within disparate business and operational systems is not itself the problem to solve, since there will always be multiple systems. In fact, businesses must adapt to an ever-growing need for additional data sources. However, with this comes the challenge of mashing up data across systems to provide a holistic view of the business. This is the case, for example, with a customer 360 view that provides insight into all aspects of customer interactions, no matter where that information comes from, or whether it's financial, operational or customer experience related. In addition, data movements are complex and costly. Organizations need the agility to adapt quickly to additional sources while maintaining a unified business view.

Data Virtualization As a Key Component Of a Data Fabric

That’s where the concept of data virtualization provides an adequate solution. Data stays where it is, but we report on it as if it’s stored together. This concept plays a key role in a data fabric architecture which aims at isolating the complexity of data management and minimizing disruption for data consumers. Besides data-intensive activities such as data storage management and data transformation, a robust data fabric requires a data virtualization layer as a sole interfacing logical layer that integrates all enterprise data across various source applications. While complex data management activities may be decentralized across various cloud and on-premises systems maintained by various teams, the virtual layer provides a centralized metadata layer with well-defined governance and security.

How Does This Relate To a Data Mesh?

What I'm describing here is also compatible with a data mesh approach whereby a central IT team is supplemented with product owners of diverse data assets that relate to various business domains. It's referred to as the hub-and-spoke model, where business domain owners are the spokes, but the data platforms and standards are maintained by a central IT hub team. Again, the data mesh decentralizes data assets across different subject matter experts but centralizes enterprise analytics standards. Typically, a data mesh is applicable for large-scale enterprises with several teams working on different data assets. In this case, an advanced common enterprise semantic layer is needed to support collaboration among the different teams while maintaining segregated ownerships. For example, common dimensions are shared across all product owners, allowing them to report on the company's master data such as product hierarchies and organization rollups. But the various product owners are responsible for consuming these common dimensions and providing appropriate linkages within their domain-specific data assets, such as financial transactions or customer support requests.

Oracle Analytics for Data Virtualization

Data Virtualization is achieved with the Oracle Analytics Enterprise Semantic Model. Both the Cloud version, Oracle Analytics Cloud (OAC) and the on-premises version, Oracle Analytics Server (OAS), enable the deployment of the semantic model. The semantic model virtualizes underlying data stores to simplify data access by consumers. In addition, it defines metadata for linkages across the data sources and enterprise standards such as common dimensions, KPIs and attribute/metric definitions. Below is a schematic of how the Oracle semantic model works with its three layers.

Oracle Enterprise Semantic Model

Outcomes of Implementing the Oracle Semantic Model

Whether you have a focused data intelligence initiative or a wide-scale program covering multi-cloud and on-premises data sources, the common semantic model has benefits in all cases, for both business and IT.

  • Enhanced Business Experience

With Oracle data virtualization, business users tap into a single source of truth for their enterprise data. The information available out of the Presentation Layer is trusted and is reported on reliably, no matter what front-end reporting tool is used: self-service data visualization, dashboards, MS Excel, machine learning prediction models, generative AI, or MS Power BI.

Another value-add for the business is that they can access new data sources quicker and in real-time now that the semantic layer requires no data movement or replication. IT can leverage the semantic model to provide this access to the business quickly and cost-effectively.

  • Future Proof Investment

The three layers that constitute the Oracle semantic model provide an abstraction of source systems from the presentation layer accessible by data consumers. Consequently, as source systems undergo modernization initiatives, such as cloud migrations, upgrades and even replacement with totally new systems, data consuming artifacts, such as dashboards, alerts, and AI models remain unaffected. This is a great way for IT to ensure any analytics investment’s lifespan is prolonged beyond any source system.

  • Enterprise Level Standardization

The semantic model enables IT to enforce governance when it comes to enterprise data shared across several departments and entities within an organization. In addition, very fine-grained object-level and data-level security configurations can be applied to cater to varying levels of access and different types of analytics personas.

Connect with us for consultation on your data intelligence and business analytics initiatives.

SQL Best Practices and Performance Tuning
https://blogs.perficient.com/2023/01/10/sql-best-practices-and-performance-tuning/ (Tue, 10 Jan 2023)

The goal of performance tuning in SQL is to minimize the execution time of a query and reduce the resources consumed while processing it. Whenever we run a query, performance depends on the amount of data and the complexity of the calculations we are working on. So, by reducing the number of calculations and the amount of data processed, we can improve performance. For that, we have some best practices and major factors, which we are going to discuss in detail.

 

Data Types – Deciding on the right data type can reduce storage space and improve performance. We should always choose the smallest data type that will work for all the values in a column.

  • Choosing a specific data type helps ensure that only appropriate values are stored in a particular column and reduces storage size.
  • Sometimes we need to convert one data type to another, which increases resource utilization and thereby reduces performance. To avoid that, take care when creating tables to use the correct data types consistently across the tables in our data model; doing so reduces the chances of having to change them in the future, avoids implicit or explicit conversions, and lets our queries run faster.
  • We should use current data types instead of deprecated ones (for example, in SQL Server, varchar(max) instead of the deprecated text type).
  • Store the date and the time in separate columns. This helps when aggregating data by date or by time, and also when filtering the data.
  • When a column holds fixed-length values, go for a fixed-length data type, for example gender, flag values, country codes, mobile numbers, postal codes, etc.

Filtering Data – Query performance depends on how much data we are processing, so it is important to take only the data the query requires. It also matters at which stage we filter the data.

Let’s see some of the scenarios –

  • For example, if we want to see an aggregation for the year 2022 and for the ABC department, we should filter the data before aggregation in the WHERE clause instead of in HAVING, as shown in the sketch below.
  • If we want to join two tables on specific data, then we should filter the required data before joining the tables.
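A minimal sketch of the first scenario (the table and column names are assumptions, purely for illustration):

```sql
-- Slower: every department/year combination is aggregated first,
-- then most of the result is thrown away by HAVING.
SELECT department, SUM(sales_amount) AS total_sales
FROM   sales
GROUP  BY department, sales_year
HAVING department = 'ABC' AND sales_year = 2022;

-- Better: rows are filtered in WHERE before the aggregation runs.
SELECT department, SUM(sales_amount) AS total_sales
FROM   sales
WHERE  department = 'ABC' AND sales_year = 2022
GROUP  BY department;
```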

 

Joins – Join is a very common and useful concept in Databases and data warehouses. In order to improve performance choosing the appropriate join for our requirements is very important. Below are some best practices of join.

  • If we want only matching records from the joined tables, we should go for an inner join. If we want all data from one table, we should go for a left or right outer join, and if we want all data from both tables, we should go for a full outer join. Always try to avoid cross joins.
  • Use ON instead of writing the join condition in the WHERE clause.
  • Use alias names for tables and columns.
  • Avoid OR in the join condition.
  • Always prefer a join over a correlated subquery; a correlated subquery performs poorly compared to a join.

EXISTS vs IN

  • We should use EXISTS instead of IN whenever the subquery returns a large amount of data.
  • We should use IN when the subquery returns a small amount of data (see the example below).
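A minimal sketch of the difference (table and column names are assumptions):

```sql
-- EXISTS: usually preferable when the subquery returns many rows,
-- since the database can stop at the first match per outer row.
SELECT c.customer_id, c.customer_name
FROM   customers c
WHERE  EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id);

-- IN: fine when the subquery returns a small set of values.
SELECT customer_id, customer_name
FROM   customers
WHERE  customer_id IN (SELECT customer_id FROM vip_customers);
```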

Index – When we talk about performance tuning in SQL, indexes play a very important role. We can create an index either implicitly or explicitly. We have to use indexes carefully because, on one hand, they speed up searching, sorting, and grouping records, and on the other hand, they use more disk space and slow down inserts, updates, and deletes. There are two types of indexes, clustered and non-clustered. We can have only one clustered index per table, and whenever we create a primary key on the table, the database creates a clustered index implicitly. We can have multiple non-clustered indexes on a table, and whenever we create a unique key on the table, the database creates a non-clustered index. A short example follows the list of best practices below.

Below are some best practices for creating indexes.

  • It is always recommended to create the clustered index before creating any non-clustered indexes.
  • An integer column works faster with an index than a string column because integers have lower space requirements. That is why it is recommended to create the primary key on an integer column.
  • Indexing in an OLTP database – One should avoid multiple indexes in an OLTP (online transaction processing) database, since data is frequently inserted and modified there, and multiple indexes can have a negative impact on that workload.
  • Indexing in an OLAP database – An OLAP (online analytical processing) database is mostly used for analytical purposes, so we commonly use SELECT statements to get the data. In this scenario, we can use more indexes on multiple columns without affecting performance.
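As a short illustration of the clustered/non-clustered distinction (SQL Server syntax; the table and column names are assumptions):

```sql
-- The primary key implicitly creates the clustered index.
CREATE TABLE orders (
    order_id   INT         NOT NULL PRIMARY KEY,  -- clustered index
    order_date DATE        NOT NULL,
    status     VARCHAR(20) NOT NULL
);

-- A non-clustered index to speed up frequent filtering on order_date.
CREATE NONCLUSTERED INDEX ix_orders_order_date
    ON orders (order_date);
```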

Union vs Union All – UNION ALL is faster than UNION because UNION checks for duplicates and returns only distinct values, while UNION ALL returns all records from both tables. If you know that the two tables contain records that are unique and distinct from each other, go for UNION ALL for better performance.
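A minimal sketch of the two forms (table names are assumptions):

```sql
-- UNION removes duplicates, which costs extra sort/compare work.
SELECT customer_id FROM online_orders
UNION
SELECT customer_id FROM store_orders;

-- UNION ALL keeps every row, so it is faster when the inputs
-- cannot (or need not) contain duplicates of each other.
SELECT customer_id FROM online_orders
UNION ALL
SELECT customer_id FROM store_orders;
```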

These are some best practices that we can follow to improve the SQL performance.

What is Bravo for Power BI?
https://blogs.perficient.com/2022/08/24/bravo-for-power-bi/ (Wed, 24 Aug 2022)

What is Bravo for Power BI?

Bravo for Power BI is a free, open-source external tool managed by SQLBI. It connects to your Power BI Desktop model (or a dataset published to the Power BI Service) and helps you analyze the model, format DAX measures, create a date table with time-intelligence measures, and export data.

In this blog, I am going to give an overview of how to install Bravo, how to connect it to Power BI Desktop, and how to use each of its main features.

Bravo is your trusted mate that helps you work on your Power BI model through a simple user interface, including an option for light or dark mode.
Bravo 1

Picture6

Bravo can analyze your model and find the most expensive columns and tables, and it can format your DAX measures.

Picture7

Bravo can create a date table and apply time intelligence functions to your measures. And last but not least, Bravo can export data to CSV files. It is not a replacement for more advanced tools like DAX Studio and Tabular Editor.

Picture8

Bravo is for users who don't need all the details and options of those more advanced tools; if you do, you should use one of them. But when taking your first steps with Power BI, Bravo is here to help you. It's a free, open-source tool managed by SQLBI.

Picture9

So, let’s see the features in more detail.

Installing Bravo:

You can download Bravo from the bravo.bi website; you must be an administrator to run the setup for the public preview.

Running Bravo with Power BI Desktop: You can open Bravo from the External Tools menu in Power BI Desktop, or you can open Bravo on its own and connect it to a Power BI Desktop file or a dataset published on the Power BI Service.

Bravo5

Analyze Model:

On the Analyze Model page, you can see the space consumed by your columns. You can group the memory consumption by table to find the most expensive columns of your model.
You can also click the smaller columns and drill down into the details. Bravo helps you find the most expensive ones.

Capture1

Format DAX:

You can format the DAX measure of your model. Bravo highlights the measures that are not formatted, and you can review the formatted version before applying the format to the model.

Manage Dates:

With Manage Dates, you can create a date table with relationships to the other date columns of your model. You can also add measures implementing time intelligence calculations from the DAX patterns.
If you don't have an existing date table, using the feature is simple; just remember to create the relationships in your model to connect the date table to the date columns in other tables.

Export Data:

You can select one or more tables from your Power BI model and export them as multiple CSV files in the same folder or as a single Excel file with one worksheet for each table.
Please be careful with the number of rows; Excel cannot hold more than about a million rows per worksheet. If you want to export only a selection of rows and columns, you should use more advanced features.

Have fun with Bravo for Power BI!

 

 

Introduction to "Export to PDF" in Power BI Desktop
https://blogs.perficient.com/2022/08/23/introduction-to-export-to-pdf-in-power-bi-desktop/ (Tue, 23 Aug 2022)

Microsoft has released the "Export to PDF" feature in Power BI Desktop. It has been available in every update of Power BI Desktop since August 2018. Read on to explore this option and how to use it.

Now, let’s get started.

The following screenshot refers to the sample of my report.

How To “Export to PDF” in Power BI

Step 1

Go to the File menu and click on the “Export to PDF” option.

Capture1

Step 2

When you click the “Export to PDF” option, it will show a pop-up Progress bar.

Capture2

Step 3

The exported report will look like the image below.

Picture2

Key Points to consider for the “Export to PDF” feature

  • This feature is available only on the Power BI desktop.
  • Tooltip pages that are hidden will not be exported.
  • It will not print a wallpaper if you have used one as your page background. For example, in my case, one of the pages uses an image/wallpaper background, and it will not be printed.

Picture3

Picture4

Conclusion

This is how the "Export to PDF" feature works in Power BI Desktop.

I hope you loved this article!

Power BI: Merge and Append Queries
https://blogs.perficient.com/2022/07/19/power-bi-merge-and-append-queries/ (Tue, 19 Jul 2022)

Introduction: 

Power BI’s merging and appending operations allow you to join data from multiple tables. 

The choice between the merge and append queries depends upon the type of concatenation you want to carry out based on your requirement.

  • When you have one or more columns that you'd like to add to another query, you use the Merge Queries option.
  • When you have additional rows of data that you'd like to add to an existing query, you use the Append Queries option.

Merge operations:

  • Merge operations join multiple datasets or tables horizontally based on standard criteria (common column) between the tables.
  • This means that data is added to the matching rows in the base or first table from the second and subsequent tables.
  • If you select the default merge operation, your base or primary table will have the same number of rows at the end of the process as it did at the start, but each row will contain a new column or new columns.
  • However, this will not be the case if you choose a different type of merge. The default merge operates the same way as a left outer join in SQL, as sketched below.
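For readers who think in SQL, the default merge corresponds roughly to the query below, using the Sales Data and Product Data tables joined on Product_Key from the example that follows (table names abbreviated for illustration):

```sql
-- Rough SQL equivalent of the default merge (left outer join).
SELECT s.*, p.*
FROM   SalesData s
LEFT OUTER JOIN ProductData p
       ON s.Product_Key = p.Product_Key;
```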

Let’s consider we have two tables one is the Sales Data, and the other is the Product Data as below:

Sales Data:

Sales Data

Product Data:

            Product Data

Steps to follow for Merging the queries: –

  1. From the left pane of Power Query Editor, select the query (table) into which you want the other query (table) to merge. In this case, it’s Sales Data.
  2. Click on Sales Data Table. Click on Home Tab in the Ribbon Menu.
  3. Click on Merge in the Combine section.
  4. Click on Merge Queries as New.

Merge

     A pop-up menu appears.

  1. From the first drop-down menu, select Sales Data and click on Product_Key (common column between Sales and Product table)
  2. From the second drop-down menu, select Product Data and click on Product_Key.
  3. Click OK.

On ‘Merge Queries,’ you will get two options, ‘ Merge Queries’ and ‘Merge Queries as New.’

Merge Queries:

This option is used to merge two tables and does not create a new table.

Merge Queries as New:

This option is used to merge two or more tables and create a new query with the result. Click on 'Merge Queries as New' when you want the merged output as a new query.

  • On the merge screen, we can select the two tables from the drop-down list and then select the column or columns (we can even select multiple columns to join upon), which will be joined together.
  • In the below example, we are using Product_Key from the Sales Data table and Product_Key from the Product Data table.
  • As you can see in the below image, the Join Kind defaults to a left outer join, meaning all rows from the 1st table (Sales Data) will be joined with the matching rows from the 2nd (Product Data) table.
  • Note that the join finds a match for 163,072 of the rows in each table.

Merge1

There are 6 different types of joins, including right and left outer joins, full outer join, inner join, and left and right anti joins. Anti joins find rows that do not match between the two query datasets.

Joins Type

The result of the Merge is shown below. A new column is added to the Sales Data dataset with a column name matching the 2nd table name, Product Data, in the below example. The data are just listed as “Table,” which can be confusing.

Mergeasnew

  • To see the related columns on the right-side column of the join, this column needs to be expanded using the double arrow button in the right corner of the column header.
  • Clicking on this button opens a window that allows for selecting specific columns from the second table that should be included in the merged dataset.
  • The 'Use original column name as prefix' option can be checked on or off; when on, it prefixes the table name to each column.

Expanded

  • Expanding the column adds the selected field from the right-side table to the merged dataset.

Fuzzy Match Option:

  • We can expand the reach of the Merge function by using the fuzzy match option.
  • Using the fuzzy matching option will increase the match count. The similarity threshold ranges from 0 to 1: a threshold of 0 would generally match every row (like a full outer join in SQL), whereas 1.00 would match only exact values (like an inner join in SQL).

Fuzzy Match

  • The 'match by combining text parts' option will look at combining two text parts to find a matching join. The combining could involve items such as 'left-side' vs. 'left side' or 'part-of' vs. 'part of', for example.

Append operations:

  • Append operations join two or more tables vertically.
  • The data rows from one table are appended (or added) at the end of the data rows in another table, matched up by column name.
  • So, in an append operation, the base table will have the same number of columns at the end of the processes as it did at the start, but each column will contain more rows.
  • Append means the results of two (or more) queries (which are tables themselves) will be combined into one query in this way:
  • Rows will be appended one after the other. (For example, appending a query with 150 rows with another query with 250 rows will return a result set of 400 rows)
  • The columns will be the same for each query. (For example, column1, column2…column7 in the first query, after appending the same columns from the second query, will result in one query with a single set of column1, column2…column7.)

Consider two sample data sets: one for Sales-2019:

Sale 2019

and Sales-2020:

Sales 2020

Steps to follow for Appending the queries: –

  1. From the left pane of Power Query Editor, select the query (table) into which you want the other query (table) to append. In this case, it’s Sales Data.
  2. Click on Sales Data Table. Click on Home Tab in the Ribbon Menu.
  3. Click on Append Queries in the Combine section.
  4. Click on Append Queries as New.

A pop-up menu appears.

  1. From the first drop-down menu, select Sales-2019
  2. From the second drop-down menu, select Sales-2020.
  3. Click OK.

Append N

  • If you want to keep the existing query result as it is and create a new query with the appended result, choose Append Queries as New. Otherwise, just select Append Queries.
  • In this example, I’ll do Append Queries as New because I want to keep existing queries intact.

You can choose the primary table (typically, this is the query that you selected before clicking on Append Queries) and the table to append to it.

Append S

  • You can also choose to append three or more tables and add tables to the list as you wish.
  • For this example, I have only two tables, so I'll continue with the above configuration.
  • Append Queries simply appends rows one after another, and because the column names are exactly the same in both queries, the result set will have the same columns.

Append R

Append is like UNION ALL in SQL.

How does Append Query handle duplicate values?

Append queries will NOT remove duplicates; we have to use Group By or Remove Duplicate Rows to get rid of them.

What if the columns do not match between the two source tables?

If the columns in the source queries are different, append still works, but it will create one column in the output for each new column. If one of the sources doesn't have that column, the cell value of that column for those rows will be null. However, Append works best when the columns match precisely.

Conclusion

Power BI merge and append queries are very handy for concatenating data from multiple queries or tables when preparing your data for visualization.

The fuzzy matching feature makes merge queries even more powerful, allowing the combination of two tables based on partial matches.

Drill Down Feature in Power BI
https://blogs.perficient.com/2022/07/01/drill-down-feature-in-power-bi/ (Fri, 01 Jul 2022)

What is Drill Down in Power BI?

In Power BI, drill down is simply the next level of hierarchical insight into the data. For example, when you are looking at a year-wise sales summary, you may also want to look into the monthly, quarterly, and day-wise summaries. This is where the drill-down option in Power BI comes in.

Drilling down is essential because a yearly revenue chart only shows overall sales; there is a good chance that most of the revenue is generated in a single quarter or a few months, so it helps to drill the general summary down to a deeper level.

How to Use Drill Down Option in Power BI?

Follow the below steps to use the Drill Down option in Power BI.

  1. We are going to use the Sales data, but you can use any other data to practice along with us.
  2. To view the yearly sales summary, we create a clustered column chart: drag and drop the "Order Date" column to the "X-axis" and the "Sales" column to the "Y-axis."

      Picture3

  3. This has created a yearly column chart.

      Picture4

  4. Under the “X-axis,” when we drag and drop the “Order Date” column, we can see it has created a hierarchy of dates as “Year, Quarter, Month, and Day.”

      Picture5

  5. Whenever there is a hierarchy, we can make use of the “Drill Down & Drill Up” options. Looking closely at the bottom of the chart, we can see some arrow keys.
    Picture6

The first is the "Up Arrow," the second is the "Down Arrow," the third is the "Double Down Arrow," and the last is the "Expand" option.

      6. As of now, the "Up Arrow" is not active because the first level in the Order Date hierarchy is "Year," and the chart is showing the "Year" summary only, so we cannot go any further up.

      Picture7

    7. The next option below the “Year” is “Quarter,” so now if you click on “Double Down Arrow,” it will show “Quarterly-wise.”
Picture8

  8. You can see above that we have clicked once on the "Double Down Arrow," and it has taken us one level deeper, i.e., to the quarterly chart. The "Up Arrow" is active now since we have moved one level down the hierarchy, so we can go back up.

     Picture10
9. Similarly, when you click on this “Double Down Arrow,” it will move one more level further and shows a monthly summary.

      Picture11
10. Now, it is showing a monthly summary. Similarly, when you click “Double Down Arrow” one more time, it will take you to the last hierarchy level, i.e., “Days.”

     Picture12
Picture13

     11. After reaching the last hierarchy level, we can no longer drill down.

       Picture14
12. So now, if we press the drill-up option, it will take us back up the levels from the current level, i.e., as shown below.
               Days >> Months >> Quarters >> Years.

13. When we are at the first hierarchy level, i.e., “Years,” we can see the “Expand” option is enabled.
Picture15
14. This will expand everything at once. Click on this option to see its impact.
Picture16
15. By clicking on this option once it has taken us one hierarchy down, i.e., “Year & Quarter,” now click one more time to see “Yearly, Quarterly & Monthly.”

    Picture17
16. The "X-Axis" values don't look neat, do they? This has more to do with the settings of the "X-Axis." First, come back to the first hierarchy level, i.e., "Year."
Picture18

17. Now click on the “Format” option. Click on the “X-Axis” drop-down list.
Picture19
18. From “Type,” choose “Categorical” as the option.
Picture20
19. The moment you choose "Categorical" as the "Type" option, at the bottom of the same "X-Axis" section it will enable the "Concatenate Labels" option. Turn off this feature.
Picture21
20. After this, click on the “Expand” option to see the neat alignment of “X-Axis.”
Picture22
We can see “Year” only once for all the four quarters.

Drill Down Feature for Non-Date Columns

We can apply this drill-down feature not only to date columns but also to non-date columns.

  • For example, to see a "Category-wise" and "Sub-category-wise" drill-down summary, first insert a "Category-wise" chart.
Analytics with Incorta using 3rd Party Tools
https://blogs.perficient.com/2022/05/23/analytics-with-incorta-using-3rd-party-tools/ (Mon, 23 May 2022)

Incorta provides a comprehensive platform for data Acquisition, Data Enrichment, and Data Visualization.

It can be a one-stop shop for all your data needs, but it can also be combined with other BI visualization tools to enhance the experience even further. Incorta provides a Postgres connection for connecting 3rd-party visualization tools like Power BI and Tableau.

Incorta Analytics Service exposes all tenants in an Incorta cluster as separate databases via the SQL Interface (SQLi) using the PostgreSQL protocol. Now you can connect using PostgreSQL and run SQL statements to retrieve data.

Some considerations are

  • Create an Incorta Business Schema that comprises all the required columns from your Data Schemas. One of the advantages of a Business Schema is that it can combine multiple Data Schemas and present the data in a business-centric, usable format. It's best practice to have all calculations done in Incorta so that the 3rd party tool does not need to do any post-processing after it retrieves the data from Incorta for visualization.
  • Make sure that this new business schema includes all needed Tables/Data as this will avoid the need for creating joins in the 3rd party Visualization tool.
  • Set the base table for a business schema view. You can use the Business Schema Designer to set the base table for a business schema view only. This will enable a default query path for attribute-only queries, where one of the attributes is from the current business schema view.
  • Do not set Aggregations in Incorta Business schema as the 3rd party tools can do this on their end after it pulls the data from Incorta.
  • Do not create Incorta Views in Business Schema. (An Incorta View operates independently from other runtime business views, even when in the same business schema. An insight on a dashboard that queries an Incorta View is an applicable insight only for filterable columns from the Incorta View.)
  • Create Folders in Business Schema to organize Business Views for each subject area to be consumed by the 3rd party visualization tool. This will allow you to keep the business schemas created specifically for 3rd party visualization tools separate from other business schemas created for analysis within Incorta Analysis and Insights.
  • Enable Column labels in CMC. Enabling the Column labels instead of Column names will enable 3rd party tools to display the Column labels and not the Column names. This is useful when the column names are not very easy to understand and having the column label will let the 3rd party visualization tool user know the column better.

Recommended steps for setting permissions

  1. Set up your Incorta business views/folders/schemas and define your row-level security as appropriate.
  2. Enable SQL App in CMC
  3. Set up your service account user in both Incorta and the third-party visualization tool.
    • Set up service account user in Incorta
    • Set up service account user in third party tool
  4. Set up the connection in the third-party tool.
    1. Set up the connection for a PostgreSQL-compliant visualization tool
  5. Share only the business schema(s) that the tool should have access to with the service account user.
  6. Validate that Incorta data is available and successfully pulls into the tool

Below is an example of connecting from DbVisualizer using the PostgreSQL driver.

  • I have the following "OnlineStore" Data Schema in my "default" tenant. There is a Business Schema "Online_Store" that is based on the Data Schema "OnlineStore". Both the Data Schema and the Business Schema are available for querying using the PostgreSQL connection.

 

Incorta can be used to extract and store data while you use any 3rd party enterprise reporting tool to report by connecting to Incorta.

Why Implement Incorta Analytics for Oracle Fusion Cloud ERP Reporting?
https://blogs.perficient.com/2022/05/10/why-implement-incorta-analytics-for-oracle-fusion-cloud-erp-reporting/ (Tue, 10 May 2022)

Oracle Cloud ERP offers several built-in tools for reporting. While native reporting tools like Oracle Transactional Business Intelligence (OTBI) and Oracle BI Publisher are well-suited for specific types of operational reporting, they do have limitations when it comes to performing complex and enterprise-wide reporting. It is therefore crucial to complement the Oracle Cloud ERP application with an enterprise reporting solution.

A major consideration to keep in mind is that Oracle Cloud ERP is a SaaS application. Unlike Oracle E-Business Suite (EBS), direct access to the Oracle Cloud ERP database (OLTP) is typically restricted. Therefore, traditional approaches to ERP reporting that may have worked well with EBS, do not fit very well with Oracle Fusion SaaS applications. For example, you may have done EBS reporting with Noetix for IBM Cognos, OBIEE, Discoverer or other legacy BI tools. Or you may have several ETL processes that extracted, transformed, and loaded on-premises ERP data into a data warehouse. However, following a similar approach for Cloud ERP reporting is not ideal. The recommendation is to have the ERP Cloud implementation accompanied by a more innovative reporting methodology that fits well with the modernity of the Cloud ERP application, is scalable to perform adequately, and offers timely time to value when it comes to addressing continuously evolving business needs for analytical insights. In this blog, I will describe how Oracle Cloud ERP is supplemented with Incorta, an innovative data and reporting platform that transcends common challenges of the classical approach of the data warehouse.

What Differentiates Incorta Analytics for Oracle Cloud ERP?

Several factors come into play when deciding which type of reporting solution works best with the applications at hand. Here I am presenting Incorta as a very viable option for its capabilities in handling data and its reporting features. Out of the many reasons why, I am focusing here on the three I believe are most relevant to Oracle Cloud ERP.

  1. Expedited Deployment & Enhancements

Deploying Incorta for Oracle Cloud ERP follows a much faster cycle than implementing traditional data warehouse type deployments. Even after the initial deployment, rolling out additional reporting enhancements on Incorta follows a faster time to value due to several reasons:

    • Direct Data Mapping: While conventional data warehouses require extensive data transformation, Incorta leverages data structures out of Oracle Cloud ERP in their original form. Consequently, Incorta replaces ETL processing, star schemas and data transformations, with a Direct Data Mapping technology. With Direct Data Mapping, the Incorta approach maintains source application data models in their original form, with minimal transformation. Consequently, we end up with a one to one mapping to the corresponding data objects and relationships in Oracle Cloud ERP. Traditionally this didn’t work well for reporting due to a significant impact to querying performance. However, the innovation introduced with Incorta Direct Data Mapping enables high performing batch queries on massive amounts of data, without requiring extensive ETL transformation, as was previously the case with a data warehouse. Eliminating the overhead involved in doing extensive data transformation is at the root of why Incorta offers a more expedited path to initially implementing and regularly enhancing Incorta analytics.
    • Oracle Cloud Applications Connector: Unlike on-premises ERP applications, direct database access is not available, in a scalable manner, from Oracle Fusion Applications. Doing a reporting solution on Oracle Cloud ERP involves a major undertaking related to the initial setup, scheduling and ongoing refreshes of data extracts of hundreds of data objects typically used for ERP reporting. You may be thinking that using tools like Oracle BI Publisher or OTBI may be a way to go about getting the Cloud ERP data you need for reporting. While such a technique may get you going initially, it’s not a feasible approach to maintain data extracts out of Cloud ERP because it jeopardizes the performance of the Oracle Cloud ERP application itself, the integrity of the reporting data and its completeness, and the ability to scale to cover more data objects for reporting.

The whole data export process is, however, streamlined and managed from within Incorta. A built-in connector to Oracle Fusion applications allows Incorta to tap into any data object in Oracle Cloud ERP. The connector performs data discovery on Oracle Cloud ERP View Objects (VO), reads the metadata and data available in both Oracle VOs and custom VOs, and loads data into Incorta. The connector adheres to Oracle best practices for exporting Oracle Fusion data in bulk. The connectivity happens through the Oracle Fusion Business Intelligence Cloud Connector (BICC). There is no need to develop the BICC data exports from scratch as the Incorta Blueprint for Oracle Cloud ERP already includes pre-defined BICC offerings for various ERP functional areas (such as AP, AR, GL, Fixed Assets, etc.). These offerings are available to import into BICC, with the option of updating with custom View Objects. Managing the data load from Oracle Cloud ERP into Incorta takes place from the Incorta web UI and therefore requires minimal setup on the Oracle Fusion side.

We can schedule multiple pre-configured offerings from the Incorta blueprint, depending on which modules are of interest to enable in Incorta for Oracle Cloud ERP reporting. This matrix provides a list of BICC offerings that get scheduled to support different functional areas of interest.

    • Pre-built Dashboards and Data Models for Oracle Cloud ERP: Time to value with Incorta is significantly shorter compared to doing analytics on other platforms because Incorta has a ready-to-use data pipeline, data model and pre-built dashboards specifically for Oracle Cloud ERP. The ready-to-use Cloud ERP blueprint also incorporates business schemas that enable power users to self-serve their needs for creating their own reports. The Incorta Oracle Cloud ERP blueprint includes pre-built dashboards for:
      • Financials: General Ledger, Accounts Payable, Employee Expenses, Accounts Receivable, Fixed Assets, and Projects
      • Supply Chain: Procurement and Spend, Order Management and Inventory
      • Human Capital Management: Workforce, Compensation, Absence and Payroll

In addition, pre-built dashboards include reporting on common business functions such as: Procure to Pay, Order to Cash, Bookings, Billings and Backlog.

  2. High Performing and Scalable to Handle Billions of Rows

If you are familiar with data warehouses and BI solutions, you are probably aware that the performance of a reporting solution is key to its success. And performance here includes both the data layer, whereby data refreshes happen in a timely manner, as well as front-end reporting response times. If the business is unable to get the information required to drive decisions in a timely manner, the reporting platform would have failed its purpose. Therefore, laying a solid foundation for an enterprise-wide reporting solution must have performance and scalability as a key criterion.

What I like about Incorta is that it is not only a data visualization or reporting platform, but it is a scalable data storage and optimized data querying engine as well. With Incorta we don’t need to setup a 3rd party database (data warehouse) to store the data. Incorta handles the storage and retrieval of data using data maps that offer very quick response times. Previously, with a data warehouse, when a table (like GL journals or sales invoices, for example) starts growing above a few million rows, you would need to consider performance optimization through several techniques like archiving, partitioning, indexing, and even adding several layers of aggregation to enhance reporting performance. All these activities are time consuming and hinders productivity and innovation. These traditional concepts for performance optimization are not needed anymore as Incorta is able to easily handle hundreds of millions and billions of rows without the need to intervene with additional levels of aggregate tables.

  3. Support for Multiple Data Source Applications

It is often the case that analytics encompasses information from multiple applications, not just Oracle Cloud ERP. A couple things to consider in this regard:

    • Multiple ERP Applications: The migration of an on-premises ERP application to Oracle Cloud ERP may not necessarily be a single-phased project. The migration process may very well consist of multiple sequential phases, based on different application modules (GL, AP, AR, Projects, Procurement, etc.) or based on staggered migrations for different entities within the same organization. Consequently, it is often the case that the ERP reporting solution needs to simultaneously support reporting from other ERP applications besides Oracle Cloud ERP. A typical use case is to source GL data from Oracle Cloud ERP while sub-ledger data is sourced from an on-premises application like EBS. Another common use case is to combine data for the same ERP module from both Oracle Cloud ERP and EBS. Incorta allows for multiple schemas to be mapped to and loaded from various applications, besides Oracle Cloud ERP. Incorta then handles the union of data sets from multiple schemas to be reported against seamlessly within the same report.
    • Cross-functional Reporting: Along the same lines, there is often a need to report on ERP data in conjunction with data external to ERP, such as from a Sales, Planning, Marketing, Service, or other applications. With a rich list of supported connectors and accelerator blueprints for various source applications, Incorta can connect to and establish separate schemas for each of the applications of interest. Data objects loaded into Incorta can then be joined across schemas and mapped appropriately to enable reporting on information from various source systems.

If you’re on your journey to Oracle Cloud ERP and wondering what to do with your legacy data warehouse and reporting platforms, I encourage you to reach out for a consultation on this. The Perficient BI team is highly experienced with ERP projects and has helped many customers with their upgrades and analytics initiatives leveraging a diverse set of technology vendors and platforms.

Creating & Invoking Custom Function in Power Query
https://blogs.perficient.com/2022/03/25/creating-invoking-custom-function-in-power-query-2/ (Fri, 25 Mar 2022)

Microsoft’s Power BI is a data and analytics reporting tool that lets you connect to multiple data sources. Once connected to a data source, raw data sets can be converted to dashboards and presented to the team, customer, anyone at any time.
In this blog, we will get introduced to the custom function feature available in Power Query, a data transformation component of Power BI Desktop.

Let’s Dive

Launch the Power BI Desktop and get the data in Power BI Desktop to work on. (Here, we are using sample financial data in .xlsx format).

We need to use our own custom functions when complex calculations that we want to perform with our data are not possible with the available standard, statistics, and scientific transformation features.
When we want to use our own function, first, we need to create that function.

Let’s have a look to create and use a simple custom function in Power BI Query Editor.

Step 1: After importing the data in Power BI Desktop, go to Power BI Query editor.

B1

Step 2: In Power Query, we can find the Queries section on our left-hand side. In the blank area of the Queries section, right-click and select new query and then blank query.

B2

After selecting the new blank query, we will be writing our custom function inside the formula bar.
The syntax for the custom function is as follows –
= (Variable as Data Type, Variable as Data Type) => (Output Expression)

  1. Our custom function will always start with the equals (=) operator.
  2. Variables like X, Y, Z, x, y, z and their data types (like number) must be declared inside the parentheses (). Each variable, followed by its data type, is separated from the other declared variables by a comma (,).
  3. The => symbol marks the end of the variable declaration; after it, we write our output expression inside parentheses ().

Let’s understand it by creating a simple custom function to get a product of two numbers.

Step 3: In the formula bar, we will write the query as below and then hit Enter.
= ( X as number , Y as number ) => ( X * Y )

B3

Note: M-language is case sensitive, and therefore, the variable used must have a uniform case.

Our custom function to get the product of two numbers is ready. We can test the function by passing values to variables in the function.
To test the function, I am passing the value of X as 18 and the value of Y as 20 and clicking Invoke. The expected output is 360 as per our output expression.

B4

Here, we can observe the desired output, but the output is recorded as a new query. We can delete this output query.

Now, we will see how we can invoke the custom function for our desired query in Power BI Query editor.

Step 4: Select the desired query in which you would like to invoke the custom function, and then go to the Add Column tab.

B5

Step 5: The last thing we need to do is select the Invoke custom function feature to call the custom function.

B6

We can Invoke custom functions multiple times for different or the same queries.

Once we click OK, we can find the result in the same query table in a new column in which we invoked our custom function.

In this post, we have seen the procedure to create and invoke custom functions in Power BI Query Editor.
Hope you enjoyed the post.

5 Snags to Avoid When Modifying Seeded Oracle Fusion Reports
https://blogs.perficient.com/2021/07/08/5-snags-to-avoid-when-modifying-seeded-oracle-fusion-reports/ (Thu, 08 Jul 2021)

Oracle Cloud Applications provide many predefined reports to generate important documents necessary for running a business. These documents support internal operations, communicate with trading partners, or produce legal documentation. Still, they almost always require modifications to include company branding or to rearrange and add data elements so they appear similar to legacy documents. Thoughtfully, Oracle Fusion provides a "Customize" feature to simplify the process of modifying standard reports instead of building new reports from scratch. As a matter of fact, many blogs explain the process of customizing a seeded report, so this blog will not belabor that topic. Instead, it points out several snags that are not intuitive to resolve without tacit knowledge.

Customize menu option. Now you see it, now you don’t

If the Customize option does not appear in the Edit menu, verify you accessed the Oracle BI Publisher tool and NOT the Oracle Transactional Business Intelligence (OTBI) tool. Frankly, they look very similar, as shown in the screenshots below, but don’t be fooled because the URL is the giveaway.

  • Oracle BI Publisher (BIP) URL (https://host:port/xmlpserver)
  • Oracle Transactional Business Intelligence (OTBI) URL (https://host:port/analytics)

Customizetable

Permissions? We don’t need no stinking permissions!

If you cannot customize a report or view the Custom folder, you probably do not have the right permissions.

  1. The following grants are needed to view a report in the Custom folder
    1. BI Consumer role
    2. Read and Run Publisher Report permissions on the original report
  2. The following grants are needed to customize a report
    1. Read and Run Publisher Report permissions on the Custom report
    2. BI Author role (or a role that includes the permission oracle.bi.publisher.developReport)
    3. Read and Run Publisher Report permissions on the original report
    4. Read and Write permissions on the Custom folder
    5. Access to the data model and data source of the original report if the same data model is used

Permissionstable2

Editing data models. Don’t let your hard work be in vain.

Heed this warning: do not directly edit a data model delivered with an Oracle Fusion application, because future patches will overwrite any changes to the data model. Instead, copy the data model into the Custom folder and edit the copy. As a final step, all reports accessing the data model must be edited to point to the new data model.

Dmtable21

Migrating Reports using Archive/Unarchive. Just like the Customize option, but different.

Whereas the Customize option is available using the BI Publisher tool, the Archive/Unarchive options are only available using the OTBI Analytics tool. If the Archive/Unarchive options do not appear in the Edit menu, verify you accessed the Reports and Analytics tool and NOT the BI Publisher tool. As mentioned in the first snag, they look similar, as shown in the screenshots below, but don't be fooled because the URL is the giveaway.

  1. Oracle Transactional Business Intelligence (OTBI) URL (https://host:port/analytics)
  2. Oracle BI Publisher (BIP) URL (https://host:port/xmlpserver)

Migratetable1

Action/Reaction between original and copied reports. Why did THAT happen?!?

The Customize option creates a copy of a seeded report in the “Custom” folder using the same folder structure as the original. The copy includes a link to the original, but not all actions on the original produce intuitive results. The table below explains the outcomes of actions performed on the original report when a customized version exists.

Actiontable2

Whether customizing reports, creating custom extracts, or importing data from other systems, Perficient is here to help.

 

HyperIntelligence – Let the answers find you
https://blogs.perficient.com/2021/01/28/hyperintelligence-let-the-answers-find-you/ (Thu, 28 Jan 2021)

While most people are happy with the current solutions provided by MicroStrategy, there are still many who are looking for solutions that work better with their current infrastructure and can help with the organization's rapidly growing data.


"Expect epic" was one of the taglines at MicroStrategy World 2020, while another was "Smarter Stronger Faster," which was about making data-driven decisions in a better way.
The biggest focus of World 2020 was HyperIntelligence: moving from hyperintelligent to hyperproductive.

What is Hyperintelligence?

Conventional applications take several minutes, and too many clicks, to answer simple questions. HyperIntelligence is a breakthrough experience that brings instant answers to everyday applications. Hence, HyperIntelligence makes everyone not just hyperintelligent, but hyperproductive.


All you need to do is hover over highlighted text for zero-click insights and one-click actions. It turns analysts into superheroes in split seconds, makes your sales force a hundred times faster with no learning curve, and accelerates information flow in every app. So the new technology is making us not only hyperintelligent but hyperproductive.


How to Build a HyperIntelligence Card


By leveraging pre-built templates, interchangeable widgets, and a drag-and-drop card editor, anyone can build powerful cards in just a few minutes. With HyperIntelligence we can perform the following activities:

  • Build cards using drag-and-drop workflows in MicroStrategy Workstation.
  • You can mix and match templates, header styles, and widgets that match your unique business use case.
  • You can also control fine-grained formatting options like header color, text size, and number formatting.

 

We can work with HyperIntelligence cards only on MicroStrategy 2019 or 2020; they are not compatible with older versions.

Step 1 – Connect your data source: –

You can connect to any existing dataset, or you can create a new one. Right-click on your chosen dataset and select "Create a new card."

Connect To Data

Step 2 – Choose a template: –

Once you are connected to data, you can build a HyperIntelligence card by choosing a template. Select a pre-built template from the formatting panel on the right to get a head start on building your card.

Choose A Template

Step 3 – Build Your Card: –

All the formatting changes to the card can be done with the options given on the right side. You can select header, colors, etc.

Build Your Card

Step 4 – Add Data in Card: –

The data on which we wish to do the analysis can be added from the left side, which lists the objects of the selected dataset.

Add Data In Card

Step 5 – Define a Card Topic: –

Once this card is deployed, every time the user hovers over the subject (selected in the header section of the card) and HyperIntelligence sees the keyword in the header section (coming from the selected attribute), the user will see the HyperIntelligence card.

 

Step 6 – Alternative Keyword Matching: –

You can specify on what basis HyperIntelligence should match the keyword and header, such as with alternate keywords. It gives you the option to select multiple attribute forms on which it can match to find this card.

 

Step 7 – Actions user can take on Card (Creating Dynamic Links): –

HyperIntelligence gives us the facility to choose actions that the user can perform once they see the card, for example, reordering the product or contacting an employee. You can link to any web application by giving its link in the card.

 

Step 8 – Add Footer: –

You can put any attribute or metric in the footer, or you can put some text to recommend the next action that the user can take.

 

Step 9 – Deploy the Card: –

You can deploy the card in the three ways shown in the figure below:


Deploy The Card

 

To learn more about HyperIntelligence, please use the link below:

HyperIntelligence (microstrategy.com)
