JavaScript is an object-based language; its objects are collections of properties.
If a property's value is a function, it is considered a method; otherwise, it is an ordinary property.
JavaScript provides objects and arrays for gathering and referring to related sets of data under a single name.
There are several ways to create an object in JavaScript:
The simplest way is the object initializer (object literal): a list of property–value pairs enclosed in curly braces, with each property separated from its value by a : (colon).
let employee = { id: 1001, name: "Neil Hanks", contact: 9876543210 };
We can also create a new object with the new keyword and the Object constructor; the resulting object inherits properties and methods from Object.prototype, and we then assign our own properties to it.
let employee = new Object();
employee.id = 1001;
employee.name = "Neil Hanks";
employee.contact = 9876543210;
We can also use a constructor function to create and initialize objects: define a function with parameters and assign values to the object's properties with the this keyword inside the function.
function Employee(id, name, contact) {
  this.id = id;
  this.name = name;
  this.contact = contact;
}
let emp = new Employee(1001, "Neil Hanks", 9876543210);
Methods are object properties whose values are functions. Like other properties, they can be defined when the object is created or added later.
// constructor function
function Person() {
  this.name = "John";
  this.age = 23;
  this.greet = function () {
    console.log("hello");
  };
}
Like Java, an object’s properties can be accessed using the dot notation.
employee.name; // Output : Neil Hanks
We can also access a property dynamically using square bracket notation (computed member access).
employee["name"]; // Output : Neil Hanks
The second method has the advantage that the property name is provided as a string, which means it can be computed at runtime. It can also be used to get and set properties whose names are reserved words.
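For instance, a quick sketch of a runtime lookup (the field variable here is purely illustrative):

let field = "na" + "me"; // property name computed at runtime
console.log(employee[field]); // Neil Hanks

// bracket notation also works for reserved words and names with spaces
employee["new"] = true;
employee["office location"] = "NYC";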
By using the Object.values() method, we can return an array of an object’s values.
const employee = { id: 1001, name: "Neil Hanks", contact: 9876543210, }; const emp1 = Object.values(employee); console.log(emp1) //[1001, "Neil Hanks", 9876543210]
By using the Object.keys() method, we can return an array of an object's keys.
const employee = { id: 1001, name: "Neil Hanks", contact: 9876543210, }; const emp1 = Object.keys(employee); console.log(emp1) //["id", "name", "contact"]
By using the Object.entries() method, we can create an array that contains arrays of the key/value pairs of an object.
const employee = { id: 1001, name: "Neil Hanks", contact: 9876543210, }; const emp = Object.entries(employee); console.log(emp) //[["id", 1001], ["name", "Neil Hanks"], ["contact", 9876543210]]
Merging two objects with the spread operator returns a new object:
const employee = { id: 1001, name: "Neil Hanks", contact: 9876543210, }; const newEmp = { ...employee, location: "India" } console.log(newEmp) //output newEmp = { id: 1001, name: "Neil Hanks", contact: 9876543210, location: "India" };
Object.assign() achieves the same merge, but note that it copies the source properties into the target object (here employee) and returns it, mutating the target rather than creating a fresh object:
const employee = { id: 1001, name: "Neil Hanks", contact: 9876543210 };
const emp1newVal = { location: "India" };
const combineObj = Object.assign(employee, emp1newVal);
console.log(combineObj); // { id: 1001, name: "Neil Hanks", contact: 9876543210, location: "India" }
Using the Object.freeze() method, we can freeze an object, preventing the modification of existing properties and the addition of new properties and values.
const employee = { id: 1001, name: "Neil Hanks", contact: 9876543210 };
Object.freeze(employee);
employee.name = 'Tom';
console.log(employee.name); // "Neil Hanks"
Using the Object.isFrozen() method, we can determine whether an object is frozen; it returns a Boolean value.
const employee = { id: 1001, name: "Neil Hanks", contact: 9876543210 };
console.log(Object.isFrozen(employee)); // false
Object.freeze(employee);
console.log(Object.isFrozen(employee)); // true
The Object.seal() method can be used to prevent new properties from being added, but values of existing properties can still be changed if they are writable.
const employee = { id: 1001, name: "Neil Hanks", contact: 9876543210 };
Object.seal(employee);
employee.name = 'Tom';
console.log(employee.name); // "Tom" (an existing property is still writable)
employee.age = 26;
console.log(employee.age); // undefined (the new property was not added)
Using the Object.isSealed() method, we can determine whether an object is sealed; it returns a Boolean value.
const employee = { id: 1001, name: "Neil Hanks", contact: 9876543210 };
console.log(Object.isSealed(employee)); // false
Object.seal(employee);
console.log(Object.isSealed(employee)); // true
When we talk about metadata extraction, IDMC (Intelligent Data Management Cloud) can be trickier than PowerCenter. Let’s see why.
In PowerCenter, all metadata is stored in a local database. This setup lets us use SQL queries to get data quickly and easily. It’s simple and efficient.
In contrast, IDMC relies on the IICS Cloud Repository for metadata storage. This means we have to use APIs to get the data we need. While this method works well, it can be more complicated. The data comes back in JSON format. JSON is flexible, but it can be hard to read at first glance.
To make it easier to understand, we convert the JSON data into a table format. We use a tool called jq to help with this. jq allows us to change JSON data into CSV or table formats. This makes the data clearer and easier to analyze.
In this section, we will explore jq. jq is a command-line tool that helps you work with JSON data easily. It lets you parse, filter, and change JSON in a simple and clear way. With jq, you can quickly access specific parts of a JSON file, making it easier to work with large datasets. This tool is particularly useful for developers and data analysts who need to process JSON data from APIs or other sources, as it simplifies complex data structures into manageable formats.
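As a quick illustration of the syntax (the sample JSON below is made up for this example), jq can pull a single field out of every element of a JSON array:

echo '[{"assetName":"tf_daily_load","startTime":"2024-05-01T10:00:00Z"}]' | jq -r '.[].assetName'
# prints: tf_daily_load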
For instance, if the requirement is to gather Succeeded Taskflow details, this involves two main processes. First, you’ll run the IICS APIs to gather the necessary data. Once you have that data, the next step is to execute a jq query to pull out the specific results. Let’s explore two methods in detail.
Method 1:
Step 1:
Run the IICS API call (the cURL command shown in Step 1 of Method 2) and save the JSON response to a file, for example POC.json.
Step 2:
Construct a jq query to extract the specific details from the JSON file. This will allow you to filter and manipulate the data effectively.
Windows:
(echo Taskflow_Name,Start_Time,End_Time & jq -r ".[] | [.assetName, .startTime, .endTime] | @csv" C:\Users\christon.rameshjason\Documents\Reference_Documents\POC.json) > C:\Users\christon.rameshjason\Documents\Reference_Documents\Final_results.csv

Linux:
jq -r '["Taskflow_Name","Start_Time","End_Time"],(.[] | [.assetName, .startTime, .endTime]) | @csv' /opt/informatica/test/POC.json > /opt/informatica/test/Final_results.csv
Step 3:
To proceed, run the jq query in the Command Prompt or Terminal. Upon successful execution, the results will be saved in CSV file format, providing a structured way to analyze the data.
Method 2:
Step 1:
Formulate a cURL command that utilizes IICS APIs to access metadata from the IICS Cloud repository. This command will allow you to access essential information stored in the cloud.
Windows and Linux:- curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json"
Step 2:
Develop a jq query along with cURL to extract the required details from the JSON file. This query will help you isolate the specific data points necessary for your project.
Windows:
(curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json") | (echo Taskflow_Name,Start_Time,End_Time & jq -r ".[] | [.assetName, .startTime, .endTime] | @csv") > C:\Users\christon.rameshjason\Documents\Reference_Documents\Final_results.csv

Linux:
curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json" | jq -r '["Taskflow_Name","Start_Time","End_Time"],(.[] | [.assetName, .startTime, .endTime]) | @csv' > /opt/informatica/test/Final_results.csv

Note that here jq reads the piped cURL output directly, so no intermediate JSON file is needed.
Step 3:
Launch the Command Prompt and run the cURL command that includes the jq query. Upon running the query, the results will be saved in CSV format, which is widely used for data handling and can be easily imported into various applications for analysis.
Conclusion
To wrap up, the methods outlined for extracting workflow metadata from IDMC are designed to streamline your workflow, minimizing manual tasks and maximizing productivity. By automating these processes, you can dedicate more energy to strategic analysis rather than tedious data collection. If you need further details about IDMC APIs or jq queries, feel free to drop a comment below!
Reference Links:
IICS Data Integration REST API – Monitoring taskflow status with the status resource API
jq Download Link – Jq_Download
The web is a vast source of information, but it is not always easy to access and use for natural language applications.
In this blog post, we will show you how to crawl and scrape the target URL, extract and clean the content, and store it in Azure Blob Storage. We will use Python as the programming language, and some popular libraries such as requests, asyncio, BeautifulSoup, and lxml.
By following this blog post, you will learn how to:
- Crawl and scrape a target URL
- Extract and clean the HTML content
- Store the extracted content in Azure Blob Storage
Scraping is a method of extracting information from HTML content, but to do this we must first know the structure of the page we want to extract information from. The first thing you need to do when scraping a web page is to get the HTML content through an HTTP request so you can process it. The most widely used Python library for HTTP requests is requests.
import requests
The main problem with this library is that it doesn't support asynchronous requests directly. To solve this and make asynchronous calls, we use another library, asyncio, which lets us use tasks and async/await.
import asyncio
Now we can use both to make an async request to get the HTML:
async def getHTML(url: str):
    loop: asyncio.AbstractEventLoop = asyncio.get_event_loop()
    try:
        future = loop.run_in_executor(None, requests.get, url)
        return await future
    # Handle exceptions related to the requests module
    except requests.exceptions.RequestException as e:
        pass
    # Handle all other exceptions
    except Exception as e:
        print("An error occurred:", e)
Once we get the HTML content, we need to process it with a parser. Several libraries exist for this; the most used are BeautifulSoup and lxml. This project uses BeautifulSoup, but a second class based on lxml was also developed for experimentation.
First, you must import the corresponding library:
from bs4 import BeautifulSoup
With the HTML that the request returned, you must build an object that will be used to process the HTML.
soup = BeautifulSoup(response.content, "html.parser", from_encoding="iso-8859-1")
For example, if you want to get the title of the web page, you can use:
title = soup.find("title").text
Or, if you want to get all the links in the web page, you can use:
links = soup.find_all("a")
for link in links:
    print(link["href"])
Or, if you want to get the first paragraph with the class intro, you can use:
intro = soup.select_one("p.intro").text
First, you must import the corresponding library:
from lxml import html
With the HTML that the request returned, you must build an object that will be used to process the HTML.
parsed_content = html.fromstring(content)
To get the information, the function to use is .xpath(), where the parameter is an XPath string. XPath is a syntax for defining parts of an XML document. You can use XPath expressions to select nodes or node-sets in an XML document.
For example, if you want to get the title of the web page, you can use:
title = parsed_content.xpath("//title/text()")[0]
BeautifulSoup is recommended for scenarios where flexible searching is necessary, for example, matching two CSS classes in any order. lxml, because it uses XPath for its searches, is stricter and less flexible.
However, lxml has some advantages over BeautifulSoup, such as better performance, since it is built on the C libraries libxml2 and libxslt.
Therefore, the choice of library depends on your needs and preferences. You can try both and see which one works better for you.
After extracting the content from the HTML, you may need to clean and normalize it before storing it in Azure Blob Storage.
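What that cleaning step looks like depends on your target application; as a minimal sketch, you might strip script and style tags and collapse leftover whitespace:

import re
from bs4 import BeautifulSoup

def clean_text(html_fragment: str) -> str:
    soup = BeautifulSoup(html_fragment, "html.parser")
    # drop markup that carries no readable text
    for tag in soup(["script", "style"]):
        tag.decompose()
    # collapse runs of whitespace left behind by the removed markup
    text = soup.get_text(separator=" ")
    return re.sub(r"\s+", " ", text).strip()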
The final step is to store the extracted content in Azure Blob Storage, which is a cloud service that provides scalable and secure storage for any type of data. Azure Blob Storage allows you to access and manage your data from anywhere, using any platform or device.
To use Azure Blob Storage, you need to have an Azure account and a storage account. You also need to install the Azure Storage SDK for Python, which provides a simple way to interact with Azure Blob Storage.
To install the Azure Storage SDK for Python, you can use the following command:
pip install azure-storage-blob
To use the Azure Storage SDK for Python, you need to import the BlobServiceClient class and create a connection object that represents the storage account. You also need to get the connection string and the container name from the Azure portal. You can store these values in a .env file and load them using the dotenv module.
For example, if you want to create a connection object and a container client, you can use:
from azure.storage.blob import BlobServiceClient
from dotenv import load_dotenv
import os

# Load the environment variables
load_dotenv()

# Get the connection string and the container name
AZURE_BLOB_CONNECTION_STRING: str = os.getenv("AZURE_BLOB_CONNECTION_STRING")
AZURE_PAGE_CONTAINER = os.getenv("AZURE_PAGE_CONTAINER")

# Create a connection object
blobServiceClient = BlobServiceClient.from_connection_string(AZURE_BLOB_CONNECTION_STRING)

# Create a container client
container_client = blobServiceClient.get_container_client(AZURE_PAGE_CONTAINER)
Then, you can upload the extracted content to Azure Blob Storage as a JSON document using the upload_blob method. You need to create a blob client that represents the blob that you want to upload and provide the data as a JSON string. You also need to generate a unique file name for the blob, which can be based on the current date and time.
If you want to upload the content from the previous steps, you can use:
import json
from datetime import datetime

# Create a document with the extracted content
document = {
    "title": title,
    "summary": summary,
    "texts": texts
}

# Convert the document to a JSON string
json_document = json.dumps(document)

# Create a blob client
dt = datetime.now()
fileName = dt.strftime("%Y%m%d_%H%M%S%f") + ".json"
blob = blobServiceClient.get_blob_client(container=AZURE_PAGE_CONTAINER, blob=fileName)

# Upload the content
blob.upload_blob(json_document)
You can also download the content from Azure Blob Storage as a JSON document using the download_blob method. Again, you create a blob client that represents the blob you want to download, providing the file name as a parameter. You can then read the data as a JSON string and parse it into a Python object.
For example, if you want to download the content with a given file name, you can use:
# Create a blob client
blob = blobServiceClient.get_blob_client(container=AZURE_PAGE_CONTAINER, blob=fileName)

# Download the content
data = blob.download_blob().readall()
document = json.loads(data)
print(document)
By following this blog post, you will gain the skills to crawl, scrape, and extract content from websites efficiently and store web content securely in Azure Blob Storage. The code provided utilizes both BeautifulSoup and lxml, giving you a comprehensive understanding of the two widely used libraries. The asynchronous approach enhances performance, making it suitable for large-scale web scraping tasks.
Web scraping is not only about data extraction but also about making that data usable. In this blog post, we’ve explored the intricacies of crawling, scraping, and storing web content. Stay tuned for the next part, where we step into utilizing Azure Blob Data and storing it in ACS along with vectors.
JSON stands for JavaScript Object Notation. It is a Data Format that supports a variety of data kinds, including Strings, Booleans, Lists, Numbers, and Objects. It is one of the most common, simple, and lightweight formats for service interaction. In this blog post, I’ll go through 4 JSON Tools that will help you parse, prepare, and visualize JSON in a more efficient and effective manner.
Online JSON Formatter / Beautifier and JSON Validator formats JSON data and helps you validate it, convert JSON to XML or CSV, and save and share JSON.
JSONVue is a Chrome extension featuring JSON support, syntax highlighting, collapsible trees with indent guides, clickable URLs, and the ability to toggle between raw and parsed JSON. JSON documents can be formatted and highlighted, and arrays and objects can be collapsed with the JSONVue plugin. Rather than prompting for a download or presenting JSON as text, it pretty-prints it. Array and object portions are collapsible for convenient navigation. Copy-paste is still a viable option.
We'll use the converter, together with additional libraries like Jackson's ObjectMapper, to parse our JSON input into Java objects; a sketch follows the steps below.
1. Copy and paste your JSON in the first code editor and click “Convert”
2. Click on “Copy to Clipboard” when the JAVA object classes appear in the second window
3. Import Jackson libraries
4. Create POJO classes to map your JSON string
5. Create ObjectMapper class and deserialize into a Root class
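A minimal sketch of steps 3 through 5, assuming the converter generated a POJO named Root with id and name fields (both the class and the sample JSON are illustrative):

import com.fasterxml.jackson.databind.ObjectMapper;

// hypothetical POJO as the converter might generate it (step 4)
class Root {
    public int id;
    public String name;
}

public class JsonToPojoDemo {
    public static void main(String[] args) throws Exception {
        String json = "{\"id\":1001,\"name\":\"Neil Hanks\"}";
        ObjectMapper mapper = new ObjectMapper(); // step 5
        Root root = mapper.readValue(json, Root.class);
        System.out.println(root.name); // Neil Hanks
    }
}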
Quicktype converts JSON to POJOs in any programming language: it produces types and helper code for reading JSON in C#, Swift, JavaScript, Flow, Python, TypeScript, Go, Rust, Objective-C, Kotlin, C++, and other languages. Quicktype is another excellent online tool for converting JSON to classes/structs in any of these languages.
I'm hopeful that the JSON tools described above will assist you in your development work. You can then explore a JSON hosting platform to host your application's data.
REST stands for REpresentational State Transfer. REST Assured supports Behavior-Driven Development (BDD) syntax, the Given, When, and Then notations, and integrates with testing frameworks like JUnit and TestNG.
Code | Explanation
Given() | 'Given' lets you set a background; here you pass the request headers, query and path params, body, and cookies.
When() | 'When' marks the premise of your scenario; for example, 'when' you get/post/put something, do something else.
Method() | Substitute this with any of the CRUD operations (get/post/put/delete).
Then() | Your assert and matcher conditions go here.
Let’s start by importing the package so that we can use its methods.
Let's store the base URI before using it; use the following line to store the baseURI.
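A sketch of that line for current REST Assured versions (the URL is a placeholder for your Jira instance):

import io.restassured.RestAssured;

// store the base URI once; later requests can then use relative paths
RestAssured.baseURI = "https://your-jira-instance.example.com";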
Note: Now that we are all set, let's look at the scenarios.
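Here is a sketch of such a login scenario, with placeholder credentials and the standard Jira session endpoint; the keywords it uses are explained next:

import static io.restassured.RestAssured.given;
import io.restassured.filter.session.SessionFilter;

SessionFilter session = new SessionFilter();

String response = given()
        .relaxedHTTPSValidation()
        .header("Content-Type", "application/json")
        .body("{\"username\": \"myuser\", \"password\": \"mypassword\"}")
        .filter(session) // keeps track of the session cookie for us
        .log().all()
    .when()
        .post("/rest/auth/1/session")
    .then()
        .log().all()
        .extract().response().asString();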
In the above example we used SessionFilter, which keeps track of the current session; we just have to use the keyword filter(session) and the rest is taken care of automatically.
As we are passing the username and password as JSON data in the body to log into the application, we have to set the Content-Type header to "application/json".
Using the keyword relaxedHTTPSValidation() means that you'll trust all hosts, regardless of whether the SSL certificate is valid.
The keyword log().all() logs the entire request and response for better understanding. It is not necessary, but it is considered good coding practice.
The chain extract().response().asString() extracts the response as a string, which we might need for later operations.
Click on the link to learn more about the Cookie Authentication
Conclusion: Congratulations, you have successfully logged in to the application and created a session using the login API.
Use a path parameter when you want the information for the key with ID 10101 and not for all IDs; in such situations, we can use pathParam.
We can also send dynamic data into the JSON body by passing data stored in a string called expectedMessage.
We also use assertThat().statusCode(201) to assert that the status code should be 201, or fail stating that the expected status code was 201 but the actual was XXX.
We can parse a JSON response with REST Assured by using the JsonPath class.
Hence, we pass the response string to JsonPath; the individual data is then accessible through the created object, which is js in our case.
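A short sketch of that parsing step (the response string here is a made-up sample):

import io.restassured.path.json.JsonPath;

String response = "{\"id\":\"10101\",\"fields\":{\"comment\":\"First comment\"}}";
JsonPath js = new JsonPath(response);
// individual values are now reachable by path
String commentText = js.getString("fields.comment");
System.out.println(commentText); // First comment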
Conclusion: Congratulations, you have successfully added your first comment to the bug you just created with the help of the REST API.
In the documentation, add attachment uses the cURL command:
curl -D- -u admin:admin -X POST -H "X-Atlassian-Token: no-check" -F "file=@myfile.txt" http://myhost/rest/api/2/issue/TEST-123/attachments
Decoding the above cURL command: -D- dumps the response headers, -u passes the username and password, -X sets the HTTP method (POST, DELETE, etc.), -H sets a header (key: value), and -F supplies the file for the attachment.
With the help of the above data, we can easily build our code using given(), header(), and post().
Here we are uploading a file and not sending raw data as the body; hence we use the headers "X-Atlassian-Token: no-check" and "Content-Type: multipart/form-data".
Multipart requests combine one or more sets of data into a single body, separated by boundaries.
You typically use these requests for file uploads and for transferring data of several types in a single request (for example, a file along with a JSON object), as sketched below.
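Translated to REST Assured, the attachment upload could look like the following sketch (credentials, file, and issue key are placeholders):

import static io.restassured.RestAssured.given;
import java.io.File;

given()
    .auth().preemptive().basic("admin", "admin")
    .header("X-Atlassian-Token", "no-check")
    .multiPart("file", new File("myfile.txt")) // sent as multipart/form-data
.when()
    .post("/rest/api/2/issue/TEST-123/attachments")
.then()
    .statusCode(200);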
Note: There’s a cool JSON Online Editor available which you should check out for sure.
Conclusion: Congratulations, you just added your first attachment to the existing bug you created with the help of the REST API.
There may be Scenarios where you need to fetch only a few or a single record. In such cases, query string parameters play an important role.
In our example we are interested only in the single field comment, which we can request with the help of queryParam, as sketched below.
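A sketch of such a call (the issue key is a placeholder; Jira's fields query parameter restricts the response to the named fields):

import static io.restassured.RestAssured.given;

given()
    .queryParam("fields", "comment") // ask only for the comment field
.when()
    .get("/rest/api/2/issue/TEST-123")
.then()
    .statusCode(200)
    .log().all();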
Conclusion: Congratulations, you have successfully added and verified a comment with the help of the REST API.
In this blog, you explored the Jira APIs. You created your first issue in Jira using the Jira Cloud REST APIs and performed various operations on the issue you created. You can now use the REST API to build add-ons for JIRA and develop integrations between JIRA and other applications.
Happy Coding!
Project automation in Jira is necessary because it saves time, increases productivity, and improves team cooperation. In this blog, I'll teach you how to automate various activities in Jira, reducing your workload and providing you with a detailed report, which will also allow you to focus on other important aspects.
“Focus on the important things. Let automation do the rest.”
JIRA is a software testing/bug-tracking tool developed by the Australian company Atlassian. Jira has risen in popularity over time, with more than 180,000 users in 190 countries. It's bug-tracking software that also enables agile project management.
Jira REST APIs are used to remotely connect to Jira Server Applications. The Jira Server platform provides a REST API for basic capabilities like issues and workflows, or you may develop any other type of interface.
Projects: Used to organize issues so that defects can be managed effectively.
Issue: It is used to track and manage the defects/issues.
Workflow: Processes the Issue/Defect life cycle.
Search: Find with ease. Through Jira, we can know what happened in the earlier versions and how many defects occurred in the earlier projects.
Dashboards: A dashboard is a display you see when you log in to the Jira to keep track of the assignments and issues you are working on.
Automation frees you up to focus on important work by eliminating manual, repetitive chores and ensuring that tight standards are followed. It checks for bugs, faults, and any other issues that may arise during the creation of a product.
Because it works swiftly and efficiently, it can drastically reduce the time it takes to evaluate products. Developers and production managers will have more time on their hands, which they may devote to other project elements. As a result, it can significantly increase productivity.
Using Jira APIs such as the REST APIs and Java APIs, you can extend Jira's capability to meet your business needs. So, are you ready to dive deeper into Jira APIs? If that's the case, let's get started. This blog will teach you more about Jira and show you how to use its fantastic capabilities.
You can use the REST API documentation to build add-ons for JIRA, develop integrations between JIRA and other applications, or script interactions with JIRA.
Also, we must make sure that we are using the JIRA server, so we must rely on the JIRA Server API and not the Cloud API. Hence, refer to the link below, because we will be interacting with our JIRA server from a local machine. Once connected, you can play around by hitting the endpoints and seeing the responses populated from the localhost instance.
JIRA’s REST APIs provide access to resources (data entities) via URI paths. To use a REST API, your application will make an HTTP request and parse the response. The JIRA REST API uses JSON as its communication format and the standard HTTP methods like GET, PUT, POST, and DELETE.
http://host:port/context/rest/api-name/api-version/resource-name
For example, in http://localhost:8000/rest/api/2/issue, the api-name is api, the api-version is 2, and the resource-name is issue.
After hitting every API, you can check the response by asserting the response code that comes back. A few widely used status codes are 200 (OK), 201 (Created), 204 (No Content), 400 (Bad Request), 401 (Unauthorized), and 404 (Not Found).
JSON Response Structure
A successful response:
{
  "success": true,
  "payload": {
    /* Application-specific data would go here. */
  }
}
An error response:
{
  "success": false,
  "payload": {
    /* Application-specific data would go here. */
  },
  "error": {
    "code": 123,
    "message": "An error occurred!"
  }
}
Step 1:
As we are using the JIRA server, open Postman and hit the URL:
http://localhost:8000/rest/auth/1/session
Step 2:
Go under the Body section and select “raw” and select JSON (application/JSON)
Under this, enter your user credentials in the following format
{"username": "myuser", "password": "mypassword"}
Step 3:
Verify the creation of a new session and return the requested session information, which will look like the following:
{
  "session": {
    "name": "example.cookie.name",
    "value": "6E3487971234567896704A9EB4AE501F"
  }
}
Conclusion: Congratulations, you successfully logged in to the application and created a session as well.
And now, we are all set to perform operations on the account with the help of the session cookie. Let’s try some of the essential operations we can use on JIRA with the help of REST APIs.
Creates an issue or a sub-task from a JSON representation. Please look at the link containing all the data and steps to create a new issue. The data is provided to us by the Atlassian Documentation.
We have to use a POST request to create an issue. And the link we must use is /rest/api/2/issue.
Step 1:
As we are using the JIRA server, open Postman and enter the URL:
http://localhost:8000/rest/api/2/issue
Step 2:
Select the “POST” method
Step 3:
Go under the Body section, select “raw” and select JSON(application/JSON) in place of Text
{
  "fields": {
    "project": {
      "key": "RES"
    },
    "summary": "Occurred Defect",
    "description": "This is my first bug",
    "issuetype": {
      "name": "Bug"
    }
  }
}
Replace RES with your project key.
Step 4:
Go to the Headers tab and enter the key as "Cookie" and the value as "JSESSIONID=yourSessionKey".
Step 5:
Verify that you get the response in such a manner:
{
  "id": "1001",
  "key": "RES-1",
  "self": "http://localhost:8000/rest/api/2/issue/1001"
}
Conclusion: Congratulations, you have successfully created your first bug with the help of the REST API.
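For reference, the same request can also be issued outside Postman with cURL; a sketch with a placeholder session ID:

curl -X POST "http://localhost:8000/rest/api/2/issue" \
  -H "Content-Type: application/json" \
  -H "Cookie: JSESSIONID=yourSessionKey" \
  -d '{"fields":{"project":{"key":"RES"},"summary":"Occurred Defect","description":"This is my first bug","issuetype":{"name":"Bug"}}}'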
Delete API is used to delete an existing issue.
Step 1:
Open Postman and enter the URL:
http://localhost:8000/rest/api/2/issue/{issueIdOrKey}
Step 2:
Select the “DELETE” method
Step 3:
Go to the Headers tab and enter the key as "Cookie" and the value as "JSESSIONID=yourSessionKey".
Step 4:
Verify that the Status code we got is 204
Conclusion: Congratulations, you just deleted an issue with the help of the REST API.
Note: As you noticed, we barely touched the JIRA dashboard, and our API requests did the work for us.
The Add Comment API adds a new comment to an issue.
Step 1:
Open Postman and enter the URL:
http://localhost:8000/rest/api/2/issue/{issueIdOrKey}/comment
Step 2:
Select the “POST” method
Step 3:
Go under the Body section, select “raw” and select JSON(application/JSON) in place of Text
Step 4:
Enter the below value
{
  "body": "Hello, this is my first comment from REST API",
  "visibility": {
    "type": "role",
    "value": "Administrators"
  }
}
Step 5:
Go to the Headers tab, enter the key as "Cookie", and the value as "JSESSIONID=yourSessionKey".
Step 6:
Verify that the Status code we received is “201”. This means that the comment was added successfully.
Conclusion: Congratulations, you have successfully added a comment with the help of the REST API.
Updates an existing comment using its JSON representation:
Step 1:
Open Postman and enter the URL:
http://localhost:8000/rest/api/2/issue/{issueIdOrKey}/comment/{id}
Note: You will get the comment id when it is generated from the received JSON response.
Step 2:
Select the “PUT” method
Step 3:
Go under the Body section, select “raw” and select JSON(application/JSON) in place of Text
Step 4:
Enter the below value
{
  "body": "Hello, this is my updated comment from REST API",
  "visibility": {
    "type": "role",
    "value": "Administrators"
  }
}
Step 5:
Go to the Headers tab and enter the key as "Cookie" and the value as "JSESSIONID=yourSessionKey".
Step 6:
Verify that the Status code we received is “200”. This means that the comment was updated successfully.
Conclusion: Congratulations, you have successfully updated your existing comment with the help of the REST API.
Conclusion: We are now familiar with JIRA and with automating it using REST APIs to perform POST, PUT, DELETE, and GET operations, as well as the basics of the JSON response structure. More specifically, you learned the key features of Jira that make it so popular among developers, and you walked through the detailed steps to work with the Jira APIs.
Happy Coding!
cURL is frequently used by developers working with REST APIs to send and receive data using JSON notation. This has been a common pattern for years, but it has never been seamless. There have been a number of times when I've been trying to get a JSON payload to work against an endpoint for a quick test, but I can't get the quotes correct. Daniel Stenberg, founder and lead developer of cURL, is now saying it's time for a change.
cURL is a command line tool used to transfer data to and from servers. It can be used to download files, upload files, or simply query a server for information. cURL is often used in conjunction with scripts or applications that need to communicate with a server.
A REST API is a web-based interface that allows you to communicate with a server, commonly via JSON data transfers. A REST API exposes resources (such as user profiles, files, or comments) that may be accessed using HTTP operations (such as GET, POST, PUT, and DELETE). This lets you easily send and receive data between a server and a client.
This is a common example of sending a JSON payload with curl:
curl -H "Content-Type: application/json" -d '{"name":"Bruce Wayne","occupation":"Batman"}' https://jobhire.com/
This will send the data contained in the JSON string {"name":"Bruce Wayne","occupation":"Batman"} to the server at https://jobhire.com/. The simplicity of this example is about as reasonable as the idea that Bruce Wayne would need to post to a job board. He's rich.
The proposed idea is to add a new flag, --jp, which stands for JSON part. You can add multiple parts to build the body on the same command line. You can see how this composition would work even for complex types like lists or groupings in the examples provided in the newly updated wiki for this proposed feature.
Even the simple example is useful, because you can already see how you don't need to deal with quotes.
Input:
--jp a=b --jp c=d --jp e=2 --jp f=false
Body:
{
"a": "b",
"c": "d",
"e": 2,
"f": false
}
There are already examples for more complex structures, such as lists and maps.
Input:
--jp ":list Monday, Tuesday, Wednesday, Thursday"
Body:
[
"Monday",
"Tuesday",
"Wednesday",
"Thursday"
]
--jp map=europe --jp prime[]=13 --jp prime[]=17 --jp target[x]=-10 --jp target[y]=32
{
"map": "europe",
"prime": [
13,
17
],
"target": {
"x": -10,
"y": 32
}
}
This is going to be a nice addition for cURL. Lack of direct JSON support was never a showstopper for me, but I’m looking forward to the new syntax. I appreciate the fact that Daniel Stenberg is taking it on, particularly since this apparently isn’t a use case he deals with frequently.
]]>One frequent use case most of the Adobe Experience Manager (AEM) Full Stack Developers would have come across is migrating content from different applications into AEM. Data from source applications can come in various formats like JSON, XML, CSV, etc. When the source file format is JSON, in order to transform the source data to target structure, we need to write complex programs to read and parse it. In this blog, I will show you an efficient and configurable solution to this problem.
I recently faced this challenge with one of our clients. They have a Solr service, in which the most recent information about their shops around the world is stored. I wanted to avoid calling that service every time we present that information in AEM. For this purpose, I decided to implement a daily batch process to get the data from the service and turn it into JCR nodes and properties.
The problem was that the information is stored in a format that does not directly match the structure required for shop pages in our current AEM implementation. Therefore, I needed to perform a transformation of that data before uploading it to the system.
My first thought was to write an elaborate transformation algorithm. Nevertheless, I realized that this probably was a pretty common problem and decided to look for a library to do the job. The best option I found was Jolt. It is an open-source library with zero external dependencies, tested in several other open-source projects like Apache Camel and Apache NiFi, and highly discussed in software development forums.
Jolt is an open-source JSON to JSON transformation library written in Java.
Demo: http://jolt-demo.appspot.com/#inception
GitHub repo: https://github.com/bazaarvoice/jolt
Jolt supports the following transformations.
Further information and examples of these transformations can be found on Jolt’s GitHub page.
First, we need to add the Maven dependency. Apache ServiceMix offers an OSGi wrapper for Jolt, so we do not need to embed it in our bundle. Remember to install the jar via the Adobe Experience Manager Web Console Bundles.
<dependency>
  <groupId>org.apache.servicemix.bundles</groupId>
  <artifactId>org.apache.servicemix.bundles.bazaarvoice-jolt</artifactId>
  <version>0.1.1_1</version>
</dependency>
Now, we can use Jolt to do all the heavy lifting work for us.
For demonstration proposes, imagine that you have this JSON document that comes from an external system:
{
  "store_id": "1234",
  "address": {
    "street_addresses": ["742 Evergreen Terrace"],
    "city": "Springfield"
  }
}
And we want to load it into AEM like this:
{
  "storeId": "1234",
  "location": {
    "address": ["742 Evergreen Terrace"],
    "city": "Springfield"
  }
}
Jolt can chain different transformations to process a record. So, our transformation specification file must consist of an array of operations. In our case, we only need one.
To do our transformation, we will use the shift operation. It copies properties from the input JSON to the desired location in the output file. In the spec property, we tell Jolt the initial and final locations for a given field.
Let’s see how to move street_addresses to address.
First, we start with a copy of the input:
"spec": { "address": { "street_addresses": } }
Then we define where we want to move the property:
"spec": { "address": { "street_addresses": "location" } }
Finally, we specify the new name for it:
"spec": { "address": { "street_addresses": "location.address" } }
With this simple specification, we have moved address.street_addresses to location.address.
The remaining transformations can be done with the following spec file:
[
  {
    "operation": "shift",
    "spec": {
      "store_id": "storeId",
      "address": {
        "street_addresses": "location.address",
        "city": "location.city"
      }
    }
  }
]
As you can see, the specifications file is not overcomplicated.
Now that we know how to do transformations using Jolt, it is time to use them in Java.
To use Jolt, we need to instantiate Chainr. Here is example code that executes the discussed transformation:
import com.bazaarvoice.jolt.Chainr;
import com.bazaarvoice.jolt.JsonUtils;

public class JoltTest {

    public static void transformJsonJolt() {
        final Chainr chainr = Chainr.fromSpec(JsonUtils.classpathToList("/path/to/specFile.json"));
        final Object jsonInput = JsonUtils.classpathToObject("/path/to/jsonInput.json");
        final Object jsonOutput = chainr.transform(jsonInput);
        System.out.println(jsonOutput);
    }
}
The output:
{storeId=1234, location={address=[742 Evergreen Terrace], city=Springfield}}
Jolt is a powerful tool that helps you to perform complex JSON transformations without reinventing the wheel. It is simple, open-source, and dependency-free.
Now that you know how to transform your JSON data, check out this blog from one of my colleagues to learn how to import it into AEM using the ContentImporter component.
Let’s talk about extract, transform, and load, also known as ETL. If you are an AEM professional, this is something you have previously dealt with. It could be something along the lines of products, user bios, or store locations.
The extract and transform parts may differ depending on your source and requirements. The loading part is almost always going to be into AEM. While there may be a few ways to do that, let us talk about what is there for you out-of-the-box.
As an AEM developer, the Sling Post Servlet is something you should be familiar with. In particular, there is an import operation. This allows us to do the following:
curl -L https://www.boredapi.com/api/activity | \
curl -u admin:admin \
  -F":contentFile=@-" \
  -F":nameHint=activity" \
  -F":operation=import" \
  -F":contentType=json" \
  http://localhost:4502/content/mysite/us/en/jcr:content/root/container
You can run this many times. You will get activity_* nodes under /content/mysite/us/en/jcr:content/root/container. This assumes that the source is already in the format you desire, meaning you have already done the transform part.
And the import operation can deal with more complex JSON structures, even XML. Here is a possible output that could be provided by a transform:
{
  "jcr:primaryType": "cq:Page",
  "jcr:content": {
    "jcr:primaryType": "cq:PageContent",
    "jcr:title": "My Page",
    "sling:resourceType": "mysite/components/page",
    "cq:template": "/conf/mysite/settings/wcm/templates/page-content",
    "root": {
      "jcr:primaryType": "nt:unstructured",
      "sling:resourceType": "mysite/components/container",
      "layout": "responsiveGrid",
      "container": {
        "jcr:primaryType": "nt:unstructured",
        "sling:resourceType": "mysite/components/container"
      }
    }
  }
}
Save this to a file named mypage.json and run the following curl command.
curl -u admin:admin \
  -F":name=my-page" \
  -F":contentFile=@mypage.json" \
  -F":operation=import" \
  -F":contentType=json" \
  -F":replace=true" \
  -F":replaceProperties=true" \
  http://localhost:4502/content/mysite/us/en
And boom! You have an instant page. This time, instead of :nameHint, I used the :name and :replace properties. Running this command again will update the page. The loading part becomes trivial, and you need only worry about extracting and transforming.
While the Sling Post Servlet is well documented, its internal implementation is not. Luckily, it is open source. You won’t have to do any decompiling today! Let’s read the doPost function of the implementation. There are too many goodies we could dive into. Let’s stay focused. We are looking for the import operation. Did you find it?
You should have wound up at the doRun function of ImportOperation.java. This is where all those request parameters from the curl commands above come into play. Go further down and you will find a call to ContentImporter.importContent(Node, String, String, InputStream, ImportOptions, ContentImportListener). Can you find its implementation?
Finally, you should have wound up at the DefaultContentImporter.java implementation, an OSGi component that implements the ContentImporter interface.
Yes! Programmatically doing things. Now that we know that the ContentImporter is available as an OSGi component, all we need is:
@Reference private ContentImporter contentImporter;
And assuming you have your content via an InputStream, we can import the content under any node. As an example, I am using the SimpleServlet generated as part of the AEM Maven Archetype. I'm using Lombok to speed things up a little.
@Component(service = { Servlet.class })
@SlingServletResourceTypes(resourceTypes = "mysite/components/page", methods = HttpConstants.METHOD_GET, extensions = "txt")
@ServiceDescription("Simple Demo Servlet")
@Slf4j
public class SimpleServlet extends SlingSafeMethodsServlet {

    private static final long serialVersionUID = 1L;

    @Reference
    private ContentImporter contentImporter;

    @Override
    protected void doGet(final SlingHttpServletRequest request, final SlingHttpServletResponse response) throws IOException {
        final MyContentImportListener contentImportListener = new MyContentImportListener();
        final Node node = request.getResource().adaptTo(Node.class);
        if (node != null) {
            final MyImportOptions importOptions = MyImportOptions.builder()
                .overwrite(true)
                .propertyOverwrite(true)
                .build();
            try (InputStream inputStream = IOUtils.toInputStream("{\"foo\":\"bar\"}", StandardCharsets.UTF_8)) {
                this.contentImporter.importContent(node, "my-imported-structure", "application/json", inputStream, importOptions, contentImportListener);
            } catch (final RepositoryException e) {
                log.error(e.getMessage(), e);
            }
        }
        response.setContentType("text/plain");
        response.getWriter().println(contentImportListener);
    }

    @Builder
    @Getter
    private static final class MyImportOptions extends ImportOptions {

        private final boolean checkin;
        private final boolean autoCheckout;
        private final boolean overwrite;
        private final boolean propertyOverwrite;

        @Override
        public boolean isIgnoredImportProvider(final String extension) {
            return false;
        }
    }

    @Getter
    @ToString
    private static final class MyContentImportListener implements ContentImportListener {

        private final com.google.common.collect.Multimap<String, String> changes = com.google.common.collect.ArrayListMultimap.create();

        @Override
        public void onReorder(final String orderedPath, final String beforeSibbling) {
            this.changes.put("onReorder", String.format("%s, %s", orderedPath, beforeSibbling));
        }

        @Override
        public void onMove(final String srcPath, final String destPath) {
            this.changes.put("onMove", String.format("%s, %s", srcPath, destPath));
        }

        @Override
        public void onModify(final String srcPath) {
            this.changes.put("onModify", srcPath);
        }

        @Override
        public void onDelete(final String srcPath) {
            this.changes.put("onDelete", srcPath);
        }

        @Override
        public void onCreate(final String srcPath) {
            this.changes.put("onCreate", srcPath);
        }

        @Override
        public void onCopy(final String srcPath, final String destPath) {
            this.changes.put("onCopy", String.format("%s, %s", srcPath, destPath));
        }

        @Override
        public void onCheckin(final String srcPath) {
            this.changes.put("onCheckin", srcPath);
        }

        @Override
        public void onCheckout(final String srcPath) {
            this.changes.put("onCheckout", srcPath);
        }
    }
}
Oftentimes, clients want to empower their customers to export their eCommerce transactions, such as order history, approval lists, and saved orders, and other data, like wish lists and cart lists, by themselves without admin involvement. These features remove the dependency on Insite admins, subsequently improving the user experience, efficiency, and overall satisfaction for Insite users.
To facilitate these in-demand functionalities, we created an elegant solution using TypeScript and jQuery that can be implemented on an Insite Commerce website. The solution below works with the Insite Commerce 4.x SDK as well as the cloud version (4.0 to 4.5), and can help reduce the development hours needed to export transaction data. The code below exports data into an Excel spreadsheet, but with modification it can export into other formats like CSV.
The example shown below showcases how to export order history data into an Excel sheet in B2B Commerce Cloud by Insite (formerly Insite Commerce) 4.5 on the cloud. However, you can export any data, including wishlists, order approval lists, and more. To export data, follow the steps below:
XLSX: any;

exportToExcel(data: any) {
    if (typeof (XLSX) === "undefined") {
        $.getScript("/Themes/MyTheme/Scripts/xlsx.full.min.js", () => {
            this.XLSX = XLSX;
        });
    } else {
        this.XLSX = XLSX;
    }
    /* make the worksheet */
    var ws = XLSX.utils.json_to_sheet(data);
    /* add to workbook */
    var wb = this.XLSX.utils.book_new();
    XLSX.utils.book_append_sheet(wb, ws, "Export Orders");
    /* generate an XLSX file */
    XLSX.writeFile(wb, "exportorders.xlsx");
}
<button type="button" id="export" class="btn primary btn-search" ng-click="vm.exportToExcel(vm.orderHistory.orders);"> [% translate 'Export Orders' %]</button>
This helpful addition to B2B Commerce Cloud by Insite will enhance the shopping experience and empower users who want to export their shopping data. For more information on B2B Commerce Cloud by Insite, check out our other blogs.
When Salesforce acquired MuleSoft earlier in 2018, the Perficient team couldn’t help but smile. We not only have an expert Salesforce team, but also a dedicated API practice that specializes in MuleSoft that we’ve brought in for work with clients like this major leading beverage company, OneAmerica, and Ameren.
Here are a few key articles our API practice recommends to learn more about how API platforms like MuleSoft work and how they can impact your business.
Are you ready to get more intelligence out of your data? We’ve helped businesses integrate data systems with tools like MuleSoft, Salesforce Connect, and more that allow access to the insights that truly drive business decisions. What questions do you have for our team? Leave us a note in the comments below.
This blog helps you understand the JSON parser in IIB and how to validate an incoming JSON message.
JavaScript Object Notation is a lightweight plain-text format used for data interchange. It's a collection of name-value pairs.
In IIB, a JSON message is realized as objects (name-value pairs) and arrays. IIB provides a feature called the JSON domain; the JSON parser and serializer process the message body under Data in the JSON domain.
The JSON parser converts the incoming bit stream into a logical tree structure. It validates only the syntax of the incoming JSON message; it does not validate the content or values of the incoming message against any schema (swagger.json), because JSON message modelling is not supported by IIB. The serializer converts the logical tree structure back into a bit stream.
The picture below describes the JSON logical tree structure created by the JSON parser.
If the syntax of the incoming JSON message is wrong, IIB sends a JSON parser error response like the one below:
E.g.: BIP5705E: JSON parsing errors have occurred. : F:\build\S1000_slot1\S1000_P\src\DataFlowEngine\JSON\ImbJSONParser.cpp: 257: ImbJSONParser::parseLastChild: ComIbmWSInputNode: MF_JSON_POC#FCMComposite_1_1
BIP5701E: A JSON parsing error occurred on line 6 column 1. An invalid JSON character (UTF-8: '0x00000022') was found in the input bit stream. The JSON parser was expecting to find one of the following characters or types: '"}", ","'. The internal error code is '0x00000108'. : F:\build\S1000_slot1\S1000_P\src\DataFlowEngine\JSON\ImbJSONDocHandler.cpp: 550: ImbJSONDocHandler::onInvalidCharacter: ComIbmWSInputNode: MF_JSON_POC#FCMComposite_1_1
This section describes how to create a REST API that validates the JSON message using an XSD.
STEP 1: Create a REST API project and specify the API base path.
STEP 2: Define the JSON schema (swagger.json) under Model Definitions. The model definition helps create a JSON schema that defines the structure of the JSON message.
The JSON schema will be created under the OtherResources folder with the default name "swagger.json".
JSON Schema:
{
  "swagger": "2.0",
  "info": {
    "title": "JSONoverHTTP",
    "version": "1.0.0",
    "description": "JSON message custom validation"
  },
  "paths": {
    "/customValidation/xsd": {
      "post": {
        "operationId": "postXsd",
        "responses": {
          "200": {
            "description": "The operation was successful."
          }
        },
        "consumes": ["application/json"],
        "produces": ["application/json"],
        "description": "Insert an xsd",
        "parameters": [
          {
            "name": "body",
            "in": "body",
            "schema": {
              "$ref": "#/definitions/PersonDetail"
            },
            "description": "The request body for the operation",
            "required": true
          }
        ]
      }
    }
  },
  "basePath": "/json_overhttp/v1",
  "definitions": {
    "PersonDetail": {
      "type": "object",
      "properties": {
        "name": {
          "type": "string"
        },
        "age": {
          "type": "number"
        },
        "address": {
          "type": "object",
          "properties": {
            "street": {
              "type": "string"
            },
            "city": {
              "type": "string"
            },
            "phoneNumber": {
              "type": "number",
              "format": "length=10"
            }
          }
        },
        "ValidationFlag": {
          "type": "string"
        }
      },
      "required": ["name"]
    }
  }
}
STEP 3: Create a new resource.
STEP 4: Define the resource path and select the operation as post
STEP 5: Click the subflow icon to implement the JSON-to-XML conversion and validation.
The REST API message flow is created after the above steps; postXsd is a subflow where the actual implementation is done.
STEP 6: Create an XSD according to the Swagger document under a shared library and name it PersonDetailSchema.xsd.
STEP 7: Reference the library from the REST API.
Right-click the REST API → Manage Library references → choose the shared library in which you created the PersonDetailSchema.xsd schema.
STEP 8: Add a Mapping node to the subflow (postXsd) to convert the JSON message to an XML message.
STEP 9: Select swagger.json as the input and PersonDetailSchema.xsd as the output format. This will convert the incoming JSON message to XML.
STEP 10: After the Mapping node, add a Validation node and configure its properties: Domain as XMLNSC and Validation as 'Content and value'.
Note: If the XSD is created under the library, there is no need to reference it explicitly in a message model. At run time, the broker will pick the right schema to validate the incoming message.
Below is the postXsd subflow.
In the test below, we got an error response back from the REST API: "The value "1234567B91" is not a valid value for the "phonetype" datatype".
The phone number is defined with type number in the JSON schema and, correspondingly, as an integer in the XSD schema definition.
After the message is converted from JSON to XML, the Validation node validates it against the schema definition. Because the phoneNumber field carries the string value "1234567B91" in the incoming message, the Validation node throws the error.
IIB doesn't support a JSON message model, but it does let you access and manipulate the JSON message, so validation can also be achieved with customized code, as sketched below.
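For example, a minimal ESQL sketch of such a custom check in a Compute node (the field name and the error values are illustrative):

CREATE COMPUTE MODULE ValidateJson
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    -- the JSON domain exposes the parsed message under InputRoot.JSON.Data
    SET OutputRoot = InputRoot;
    -- reject the message if the mandatory name field is missing
    IF InputRoot.JSON.Data.name IS NULL THEN
      THROW USER EXCEPTION MESSAGE 2951 VALUES('name is a required field');
    END IF;
    RETURN TRUE;
  END;
END MODULE;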