Migrating from MVP to Jetpack Compose: A Step-by-Step Guide for Android Developers

Jetpack Compose is Android’s modern toolkit for building native UI. It simplifies and accelerates UI development by using a declarative approach, which is a significant shift from the traditional imperative XML-based layouts. If you have an existing Android app written in Kotlin using the MVP (Model-View-Presenter) pattern with XML layouts, fragments, and activities, migrating to Jetpack Compose can bring numerous benefits, including improved developer productivity, reduced boilerplate code, and a more modern UI architecture.

In this article, we’ll walk through the steps to migrate an Android app from MVP with XML layouts to Jetpack Compose. We’ll use a basic News App to explain in detail how to migrate all layers of the app. The app has two screens:

  1. A News List Fragment to display a list of news items.
  2. A News Detail Fragment to show the details of a selected news item.

We’ll start by showing the original MVP implementation, including the Presenters, and then migrate the app to Jetpack Compose step by step. We’ll also add error handling, loading states, and use Kotlin Flow instead of LiveData for a more modern and reactive approach.

1. Understand the Key Differences

Before diving into the migration, it’s essential to understand the key differences between the two approaches:

  • Imperative vs. Declarative UI: XML layouts are imperative, meaning you define the UI structure and then manipulate it programmatically. Jetpack Compose is declarative, meaning you describe what the UI should look like for any given state, and Compose handles the rendering (see the sketch after this list).
  • MVP vs. Compose Architecture: MVP separates the UI logic into Presenters and Views. Jetpack Compose encourages a more reactive and state-driven architecture, often using ViewModel and State Hoisting.
  • Fragments and Activities: In traditional Android development, Fragments and Activities are used to manage UI components. In Jetpack Compose, you can replace most Fragments and Activities with composable functions.
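
To make the contrast concrete, consider this minimal sketch. The activity, layout id, and greeting text are invented for illustration: in the View system you locate a widget and mutate it in place, while in Compose you describe the UI as a function of its inputs.

// Imperative (View system): find the widget, then mutate its state
class GreetingActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_greeting)
        findViewById<TextView>(R.id.greeting).text = "Hello, Compose"
    }
}

// Declarative (Compose): describe what the UI is for a given state
@Composable
fun Greeting(name: String) {
    Text(text = "Hello, $name")
}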

2. Plan the Migration

Migrating an entire app to Jetpack Compose can be a significant undertaking. Here’s a suggested approach:

  1. Start Small: Begin by migrating a single screen or component to Jetpack Compose. This will help you understand the process and identify potential challenges.
  2. Incremental Migration: Jetpack Compose is designed to work alongside traditional Views, so you can migrate your app incrementally. Use ComposeView in XML layouts or AndroidView in Compose to bridge the gap.
  3. Refactor MVP to MVVM: Jetpack Compose works well with the MVVM (Model-View-ViewModel) pattern. Consider refactoring your Presenters into ViewModels.
  4. Replace Fragments with Composable Functions: Fragments can be replaced with composable functions, simplifying navigation and UI management.
  5. Add Error Handling and Loading States: Ensure your app handles errors gracefully and displays loading states during data fetching.
  6. Use Kotlin Flow: Replace LiveData with Kotlin Flow for a more modern and reactive approach.

3. Set Up Jetpack Compose

Before starting the migration, ensure your project is set up for Jetpack Compose:

  1. Update Gradle Dependencies:
    Add the necessary Compose dependencies to your build.gradle file:

    android {
        ...
        buildFeatures {
            compose true
        }
        composeOptions {
            kotlinCompilerExtensionVersion '1.5.3'
        }
    }
    
    dependencies {
        implementation 'androidx.activity:activity-compose:1.8.0'
        implementation 'androidx.compose.ui:ui:1.5.4'
        implementation 'androidx.compose.material:material:1.5.4'
        implementation 'androidx.compose.ui:ui-tooling-preview:1.5.4'
        implementation 'androidx.lifecycle:lifecycle-viewmodel-compose:2.6.2'
        implementation 'androidx.navigation:navigation-compose:2.7.4' // For navigation
        implementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.6.2' // For Flow
        implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-core:1.7.3' // For Flow
    }
  2. Enable Compose in Your Project:
    Ensure your project is using the correct Kotlin and Android Gradle plugin versions.

4. Original MVP Implementation

a. News List Fragment and Presenter

The NewsListFragment displays a list of news items. The NewsListPresenter fetches the data and updates the view.

NewsListFragment.kt

class NewsListFragment : Fragment(), NewsListView {

    private lateinit var presenter: NewsListPresenter
    private lateinit var adapter: NewsListAdapter

    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View? {
        val view = inflater.inflate(R.layout.fragment_news_list, container, false)
        val recyclerView = view.findViewById<RecyclerView>(R.id.recyclerView)
        adapter = NewsListAdapter { newsItem -> presenter.onNewsItemClicked(newsItem) }
        recyclerView.adapter = adapter
        recyclerView.layoutManager = LinearLayoutManager(context)
        presenter = NewsListPresenter(this)
        presenter.loadNews()
        return view
    }

    override fun showNews(news: List<NewsItem>) {
        adapter.submitList(news)
    }

    override fun showLoading() {
        // Show loading indicator
    }

    override fun showError(error: String) {
        // Show error message
    }

    override fun navigateToNewsDetail(newsItem: NewsItem) {
        // The fragment owns the Context, so navigation is performed here
        val intent = Intent(requireContext(), NewsDetailActivity::class.java).apply {
            putExtra("newsId", newsItem.id)
        }
        startActivity(intent)
    }
}

NewsListPresenter.kt

class NewsListPresenter(private val view: NewsListView) {

    fun loadNews() {
        view.showLoading()
        // Simulate fetching news from a data source (e.g., API or local database)
        try {
            val newsList = listOf(
                NewsItem(id = 1, title = "News 1", summary = "Summary 1"),
                NewsItem(id = 2, title = "News 2", summary = "Summary 2")
            )
            view.showNews(newsList)
        } catch (e: Exception) {
            view.showError(e.message ?: "An error occurred")
        }
    }

    fun onNewsItemClicked(newsItem: NewsItem) {
        // A presenter has no Context of its own, so navigation is delegated to the view
        view.navigateToNewsDetail(newsItem)
    }
}

NewsListView.kt

interface NewsListView {
    fun showNews(news: List<NewsItem>)
    fun showLoading()
    fun showError(error: String)
    fun navigateToNewsDetail(newsItem: NewsItem)
}

b. News Detail Fragment and Presenter

The NewsDetailFragment displays the details of a selected news item. The NewsDetailPresenter fetches the details and updates the view.

NewsDetailFragment.kt

class NewsDetailFragment : Fragment(), NewsDetailView {

    private lateinit var presenter: NewsDetailPresenter

    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View? {
        return inflater.inflate(R.layout.fragment_news_detail, container, false)
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        presenter = NewsDetailPresenter(this)
        // Load details only after the view hierarchy exists, so findViewById in showNewsDetail works
        val newsId = arguments?.getInt("newsId") ?: 0
        presenter.loadNewsDetail(newsId)
    }

    override fun showNewsDetail(newsItem: NewsItem) {
        view?.findViewById<TextView>(R.id.title)?.text = newsItem.title
        view?.findViewById<TextView>(R.id.summary)?.text = newsItem.summary
    }

    override fun showLoading() {
        // Show loading indicator
    }

    override fun showError(error: String) {
        // Show error message
    }
}

NewsDetailPresenter.kt

class NewsDetailPresenter(private val view: NewsDetailView) {

    fun loadNewsDetail(newsId: Int) {
        view.showLoading()
        // Simulate fetching news detail from a data source (e.g., API or local database)
        try {
            val newsItem = NewsItem(id = newsId, title = "News $newsId", summary = "Summary $newsId")
            view.showNewsDetail(newsItem)
        } catch (e: Exception) {
            view.showError(e.message ?: "An error occurred")
        }
    }
}

NewsDetailView.kt

interface NewsDetailView {
    fun showNewsDetail(newsItem: NewsItem)
    fun showLoading()
    fun showError(error: String)
}

5. Migrate to Jetpack Compose

a. Migrate the News List Fragment

Replace the NewsListFragment with a composable function. The NewsListPresenter will be refactored into a NewsListViewModel.

NewsListScreen.kt

@Composable
fun NewsListScreen(viewModel: NewsListViewModel, onItemClick: (NewsItem) -> Unit) {
    val newsState by viewModel.newsState.collectAsState()

    // Capturing the state in a local val lets Kotlin smart-cast each branch
    when (val state = newsState) {
        is NewsState.Loading -> {
            // Show loading indicator
            CircularProgressIndicator()
        }
        is NewsState.Success -> {
            LazyColumn {
                items(state.news) { newsItem ->
                    NewsListItem(newsItem = newsItem, onClick = { onItemClick(newsItem) })
                }
            }
        }
        is NewsState.Error -> {
            // Show error message
            Text(text = state.error, color = Color.Red)
        }
    }
}

@Composable
fun NewsListItem(newsItem: NewsItem, onClick: () -> Unit) {
    Card(
        modifier = Modifier
            .fillMaxWidth()
            .padding(8.dp)
            .clickable { onClick() }
    ) {
        Column(modifier = Modifier.padding(16.dp)) {
            Text(text = newsItem.title, style = MaterialTheme.typography.h6)
            Text(text = newsItem.summary, style = MaterialTheme.typography.body1)
        }
    }
}

NewsListViewModel.kt

class NewsListViewModel : ViewModel() {

    private val _newsState = MutableStateFlow<NewsState>(NewsState.Loading)
    val newsState: StateFlow<NewsState> get() = _newsState

    init {
        loadNews()
    }

    private fun loadNews() {
        viewModelScope.launch {
            _newsState.value = NewsState.Loading
            try {
                // Simulate fetching news from a data source (e.g., API or local database)
                val newsList = listOf(
                    NewsItem(id = 1, title = "News 1", summary = "Summary 1"),
                    NewsItem(id = 2, title = "News 2", summary = "Summary 2")
                )
                _newsState.value = NewsState.Success(newsList)
            } catch (e: Exception) {
                _newsState.value = NewsState.Error(e.message ?: "An error occurred")
            }
        }
    }
}

sealed class NewsState {
    object Loading : NewsState()
    data class Success(val news: List<NewsItem>) : NewsState()
    data class Error(val error: String) : NewsState()
}

b. Migrate the News Detail Fragment

Replace the NewsDetailFragment with a composable function. The NewsDetailPresenter will be refactored into a NewsDetailViewModel.

NewsDetailScreen.kt

@Composable
fun NewsDetailScreen(viewModel: NewsDetailViewModel) {
    val newsState by viewModel.newsState.collectAsState()

    when (val state = newsState) {
        is NewsDetailState.Loading -> {
            // Show loading indicator
            CircularProgressIndicator()
        }
        is NewsDetailState.Success -> {
            Column(modifier = Modifier.padding(16.dp)) {
                Text(text = state.news.title, style = MaterialTheme.typography.h4)
                Text(text = state.news.summary, style = MaterialTheme.typography.body1)
            }
        }
        is NewsDetailState.Error -> {
            // Show error message
            Text(text = state.error, color = Color.Red)
        }
    }
}

NewsDetailViewModel.kt

class NewsDetailViewModel : ViewModel() {

    private val _newsState = MutableStateFlow<NewsDetailState>(NewsDetailState.Loading)
    val newsState: StateFlow<NewsDetailState> get() = _newsState

    fun loadNewsDetail(newsId: Int) {
        viewModelScope.launch {
            _newsState.value = NewsDetailState.Loading
            try {
                // Simulate fetching news detail from a data source (e.g., API or local database)
                val newsItem = NewsItem(id = newsId, title = "News $newsId", summary = "Summary $newsId")
                _newsState.value = NewsDetailState.Success(newsItem)
            } catch (e: Exception) {
                _newsState.value = NewsDetailState.Error(e.message ?: "An error occurred")
            }
        }
    }
}

// Named NewsDetailState so it does not clash with the list screen's NewsState
sealed class NewsDetailState {
    object Loading : NewsDetailState()
    data class Success(val news: NewsItem) : NewsDetailState()
    data class Error(val error: String) : NewsDetailState()
}

6. Set Up Navigation

Replace Fragment-based navigation with Compose navigation:

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            NewsApp()
        }
    }
}

@Composable
fun NewsApp() {
    val navController = rememberNavController()
    NavHost(navController = navController, startDestination = "newsList") {
        composable("newsList") {
            val viewModel: NewsListViewModel = viewModel()
            NewsListScreen(viewModel = viewModel) { newsItem ->
                navController.navigate("newsDetail/${newsItem.id}")
            }
        }
        composable("newsDetail/{newsId}") { backStackEntry ->
            val viewModel: NewsDetailViewModel = viewModel()
            val newsId = backStackEntry.arguments?.getString("newsId")?.toIntOrNull() ?: 0
            // LaunchedEffect runs the load once per newsId instead of on every recomposition
            LaunchedEffect(newsId) {
                viewModel.loadNewsDetail(newsId)
            }
            NewsDetailScreen(viewModel = viewModel)
        }
    }
}

7. Test and Iterate

After migrating the screens, thoroughly test the app to ensure it behaves as expected. Use Compose’s preview functionality to visualize your UI:

@Preview(showBackground = true)
@Composable
fun PreviewNewsListScreen() {
    NewsListScreen(viewModel = NewsListViewModel(), onItemClick = {})
}

@Preview(showBackground = true)
@Composable
fun PreviewNewsDetailScreen() {
    NewsDetailScreen(viewModel = NewsDetailViewModel())
}

8. Gradually Migrate the Entire App

Once you’re comfortable with the migration process, continue migrating the rest of your app incrementally. Use ComposeView and AndroidView to integrate Compose with existing XML layouts while the transition is underway.
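
As a sketch of that interop (the fragment and layout names here are hypothetical), a ComposeView declared in an XML layout lets you render composables inside an otherwise View-based screen:

class HybridFragment : Fragment() {
    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View {
        // Assumes fragment_hybrid.xml contains an androidx.compose.ui.platform.ComposeView
        val view = inflater.inflate(R.layout.fragment_hybrid, container, false)
        view.findViewById<ComposeView>(R.id.compose_view).setContent {
            Text(text = "Rendered by Compose inside an XML layout")
        }
        return view
    }
}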

Newman Tool and Performance Testing in Postman

Postman is an application programming interface (API) testing tool for designing, testing, and modifying existing APIs. Almost every capability a developer might need to test an API is included in Postman.

Postman simplifies the testing process for both REST APIs and SOAP web services with its robust features and intuitive interface. Whether you’re developing a new API or testing an existing one, Postman provides the tools you need to ensure your services are functioning as intended.

  • Using Postman to test the APIs offers a wide range of benefits that eventually help in the overall testing of the application. Postman’s interface is very user-friendly, which allows users to easily create and manage requests without extensive coding knowledge, making it accessible to both developers and testers.
  • Postman supports multiple protocols such as HTTP, SOAP, GraphQL, and WebSocket APIs, which ensures a versatile testing set-up for a wide range of services.
  • To automate the process of validating the API Responses under various scenarios, users can write tests in JavaScript to ensure that the API behavior is as expected.
  • Postman offers an environment management feature that enables the user to set up different environments with environment-specific variables, which makes switching between development, staging, and production settings possible without changing requests manually.
  • Postman provides options for creating collections and organizing requests, which makes it easier to manage requests, group tests, and maintain documentation.
  • Postman supports team collaboration, which allows multiple users to work on the same collections, share requests, and provide feedback in real-time.

Newman In Postman

Newman is a command-line collection runner for Postman. It can run the requests in a Postman Collection from the terminal, serving as an alternative to the Collection Runner built into the Postman app.

Newman works well with GitHub and the npm registry, and it can be linked to Jenkins and other continuous integration tools. If every request completes successfully, Newman exits with code 0; if any error occurs, it exits with code 1, which makes it easy for CI pipelines to detect failed runs. Newman is distributed through npm, the package manager built on the Node.js platform.

How to install Newman

Step 1: Ensure that your system has Node.js downloaded and installed. If not, then download and install Node.js.

Step 2: Run the following command in your cli: npm install -g newman

How to use Newman: 

Step 1: Export the Postman collection and save it to your local device.

Step 2: Click on the eye icon in the top right corner of the Postman application.

Step 3: The “MANAGE ENVIRONMENTS” window will open. Enter a variable name (for example, url) in the VARIABLE field and its value in INITIAL VALUE. Click on the Download as JSON button, then choose a location and save.

Step 4: Export the Environment to the same path where the Collection is available.

Step 5: In the command line, change from the current directory to the directory where the Collection and Environment have been saved.

Step 6: Run the command − newman run <“name of file”>. Please note that the name of the file should be in quotes.
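
For example, assuming the exported files are named as below (the file names are placeholders), a run that includes the environment and repeats the collection five times looks like this:

newman run "MyCollection.postman_collection.json" -e "MyEnvironment.postman_environment.json" -n 5

The -e flag points Newman at the exported environment file and -n sets the iteration count; both are described in the option list below.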

Helpful CLI Commands to Use Newman

-h, --help: Gives information about the options available.
-v, --version: To check the version.
-e, --environment [file or URL]: Specify the file path or URL of environment variables.
-g, --globals [file or URL]: Specify the file path or URL of global variables.
-d, --iteration-data [file]: Specify the file path or URL of a data file (JSON or CSV) to use for iteration data.
-n, --iteration-count [number]: Specify the number of times for the collection to run. Use with the iteration data file.
--folder [folder name]: Specify a folder to run requests from. You can specify more than one folder by using this option multiple times, specifying one folder for each time the option is used.
--working-dir [path]: Set the path of the working directory to use while reading files with relative paths. Defaults to the current directory.
--no-insecure-file-read: Prevents reading of files located outside of the working directory.
--export-environment [path]: The path to the file where Newman will output the final environment variables file before completing a run.
--export-globals [path]: The path to the file where Newman will output the final global variables file before completing a run.
--export-collection [path]: The path to the file where Newman will output the final collection file before completing a run.
--postman-api-key [api-key]: The Postman API Key used to load resources using the Postman API.
--delay-request [number]: Specify a delay (in milliseconds) between requests.
--timeout [number]: Specify the time (in milliseconds) to wait for the entire collection run to complete execution.
--timeout-request [number]: Specify the time (in milliseconds) to wait for requests to return a response.
--timeout-script [number]: Specify the time (in milliseconds) to wait for scripts to complete execution.
--ssl-client-cert [path]: The path to the public client certificate file. Use this option to make authenticated requests.
-k, --insecure: Turn off SSL verification checks and allow self-signed SSL certificates.
--ssl-extra-ca-certs [path]: Specify additionally trusted CA certificates (PEM).


Performance Testing in Postman

API performance testing involves mimicking actual traffic and watching how your API behaves. It is a procedure that evaluates how well the API performs regarding availability, throughput, and response time under the simulated load.

Testing the performance of APIs can help us in:

  • Test that the API can manage the anticipated load and observe how it reacts to load variations.
  • To ensure a better user experience, optimize and enhance the API’s performance.
  • Performance testing also aids in identifying the system’s scalability and fixing bottlenecks, delays, and failures.

How to Use Postman for API Performance Testing

Step 1: Select the Postman Collection for Performance testing.

Step 2: Click on the 3 dots beside the Collection.

Step 3:  Click on the “Run Collection” option.

Step 4:  Click on the “Performance” option

Step 5: Set up the Performance test (Load Profile, Virtual User, Test Duration).

Step 6: Click on the Run button.

After the run completes, we can also download a report in .pdf format that describes how our collection ran.

A strong and adaptable method for ensuring your APIs fulfill functionality and performance requirements is to use Newman with Postman alongside performance testing. You may automate your tests and provide comprehensive reports that offer insightful information about the functionality of your API by utilizing Newman’s command-line features.

This combination facilitates faster detection and resolution of performance issues by streamlining the testing process and improving team collaboration. Using Newman with Postman will enhance your testing procedures and raise the general quality of your applications as you continue improving your API testing techniques.

Use these resources to develop dependable, strong APIs that can handle the demands of practical use, ensuring a flawless user experience.

Unlock the Future of Integration with IBM ACE

Have you ever wondered about integration in API development or how to become familiar with the concept?

In this blog, we will discuss one of the integration technologies that is very easy and fun to learn, IBM ACE.

What is IBM ACE?

IBM ACE stands for IBM App Connect Enterprise. It is an integration platform that allows businesses to connect various applications, systems, and services, enabling smooth data flow and communication across diverse environments. IBM ACE supports the creation of Integrations using different patterns, helping organizations streamline their processes and improve overall efficiency in handling data and business workflows.

Through a collection of connectors to various data sources, including packaged applications, files, mobile devices, messaging systems, and databases, IBM ACE delivers the capabilities needed to design integration processes that support different integration requirements.

One advantage of adopting IBM ACE is that it allows current applications to be configured for Web Services without costly legacy application rewrites. By linking any application or service to numerous protocols, including SOAP, HTTP, and JMS, IBM ACE minimizes the point-to-point pressure on development resources.

Modern secure authentication technologies, including LDAP, X-AUTH, O-AUTH, and two-way SSL, are supported through MQ, HTTP, and SOAP nodes, including the ability to perform activities on behalf of masquerading or delegated users.

How to Get Started

Refer to Getting Started with IBM ACE: https://www.ibm.com/docs/en/app-connect/12.0?topic=enterprise-get-started-app-connect

For installation on Windows, follow the document link below. Change the IBM App Connect version to 12.0 and follow along: https://www.ibm.com/docs/en/app-connect/11.0.0?topic=software-installing-windows

IBM ACE Toolkit Interface


This is what the IBM ACE Toolkit interface looks like. You can see all the applications/APIs and libraries you created during application development. In the Palette, you can see all the nodes and connectors needed for application development.

Learn more about nodes and connectors: https://www.ibm.com/docs/en/app-connect/12.0?topic=development-built-in-nodes

IBM ACE provides the flexibility to create Integration Servers and an Integration Node, where you can deploy and test your developed code and applications, using the mqsi commands.

How to Create a New Application

  • To create a new application, click on File -> New -> Application.


  • Give the Application a name and click finish.


 

  • To add a message flow, click on New under Application, then Message Flow.


  • Give the message flow a name and click finish.


  • Once your flow is created, double-click on its name. The message flow will open, and you can implement the process.
  • Drag the required node and connectors to the canvas for your development.


How to Create an Integration Node and Integration Server

  • Open your command window for your current installation.


  • To create an Integration server, run the following command in the command shell and specify the parameter for the integration server you want to create: mqsicreateexecutiongroup IBNODE -e IServer_2
  • To create an Integration node, run the following command in the command shell and specify the parameter for the integration node you want to create.
    • For example, if you want to create an Integration node with queue manager ACEMQ, use the following command: mqsicreatebroker MYNODE -i wbrkuid -a wbrkpw -q ACEMQ
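
Putting the commands together, a minimal command-line session might look like the following sketch (the node, server, and queue manager names are examples, and on Windows you may also need the service credentials passed via -i and -a as shown above):

mqsicreatebroker MYNODE -q ACEMQ
mqsistart MYNODE
mqsicreateexecutiongroup MYNODE -e IServer_2
mqsilist MYNODE

The final mqsilist command confirms that the integration server was created under the node.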

 How to Deploy the Application

  • Right-click on the application, then click on Deploy.


  • Then click on the Integration node and Finish.


Advantages of IBM ACE

  • ACE offers powerful integration capabilities, allowing for smooth communication between different applications, systems, and data sources.
  • It supports a variety of message patterns and data formats, allowing it to handle a wide range of integration scenarios.
  • It meets industry standards, ensuring compatibility and interoperability with many technologies and protocols.
  • ACE has complete administration and monitoring features, allowing administrators to track integration processes’ performance and health.
  • The platform encourages the production of reusable integration components, which decreases development time and effort for comparable integration tasks.
  • ACE offers comprehensive security measures that secure data during transmission and storage while adhering to enterprise-level security standards.
  • ACE offers a user-friendly development environment and tools to design, test, and deploy integration solutions effectively.

Conclusion

In this introductory blog, we have explored IBM ACE and how to create a basic application to learn about this integration technology.

Here at Perficient, we develop complex, scalable, robust, and cost-effective solutions using IBM ACE. This empowers our clients to improve efficiency and reduce manual work, ensuring seamless communication and data flow across their organization.

Contact us today to explore more options for elevating your business.

How to Upgrade MuleSoft APIs to Java 17: A Comprehensive Guide

The Evolution of Java and Its Significance in Enterprise Applications

Java has been the go-to language for enterprise software development for decades, offering a solid and reliable platform for building scalable applications. Over the years, it has evolved with each new version.

Security Enhancements of Java 17

Long-Term Support

Java 17, being a Long-Term Support (LTS) release, is a strategic choice for enterprises using MuleSoft. The LTS status ensures that Java 17 will receive extended support, including critical security updates and patches, over the years.

This extended support is crucial for maintaining the security and stability of MuleSoft applications, often at the core of enterprise integrations and digital transformations.

By upgrading to Java 17, MuleSoft developers can ensure that their APIs and integrations are protected against newly discovered vulnerabilities, reducing the risk of security breaches that could compromise sensitive data.

The Importance of Long-Term Support

  1. Stay Secure: Java 17 is an LTS release with long-term security updates and patches. Upgrading ensures your MuleSoft applications are protected against the latest vulnerabilities, keeping your data safe.
  2. Better Performance: With Java 17, you get a more optimized runtime to make your MuleSoft application run faster. This means quicker response times and a smoother experience for you.
  3. Industry Standards Compliance: Staying on an LTS version like Java 17 helps meet industry standards and compliance requirements. It shows that your applications are built on a stable, well-supported platform.

Getting Started with Java 17 and Anypoint Studio

Before you start upgrading your MuleSoft APIs to Java 17, it’s important to make sure your development environment is set up properly. Here are the key prerequisites to help you transition smoothly.

Install and Set Up Java 17

  • Download Java 17: Get Java 17 from the Oracle Java SE or Eclipse Adoptium Downloads page, or use OpenJDK for your OS.
  • Install Java 17: Run the installer and set JAVA_HOME to the Java 17 installation directory.
  • Verify the Installation: Confirm Java 17 is installed by typing java -version into the terminal or command prompt.
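
On macOS or Linux, the two steps above typically reduce to a sketch like the following (the installation path is a placeholder for wherever your JDK 17 lives):

export JAVA_HOME=/path/to/jdk-17
export PATH="$JAVA_HOME/bin:$PATH"
java -version

On Windows, set JAVA_HOME and extend Path through System Properties > Environment Variables instead.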

Download and Install Anypoint Studio 7.1x Version

Upgrading to Java 17 and Anypoint Studio

As we begin upgrading our MuleSoft Application to Java 17, we have undertaken several initial setup steps in our local and developed environments. These steps are outlined below:

Step 1

  • Update the Anypoint Studio to the latest version, 7.17.0.
  • Please Note: If Anypoint Studio isn’t working after the update, make sure to follow Step 2 and Step 4 for troubleshooting.

Step 2

  • Downloaded and installed Java 17 JDK in the local system.


Step 3

  • In Anypoint Studio, we must download the latest Mule runtime, 4.6.x. For that, click on ‘Install New Software…’ under the Help section.


  • Click on Mule runtimes, then select and install the 4.6.x version.


Step 4

  • Now, close Anypoint Studio.
  • Navigate to the Studio configuration files in Anypoint Studio and open the AnypointStudio.ini file.
  • Update the -vm entry in the AnypointStudio.ini file to point to your Java 17 installation, as sketched below.
  • Restart Anypoint Studio.
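
The original post showed this as a screenshot; in an Eclipse-based .ini file the entry typically looks like the following sketch (the JDK path is a placeholder for your installation directory):

-vm
C:\Program Files\Eclipse Adoptium\jdk-17.0.11.9-hotspot\bin

The -vm option and its path must be on separate lines and must appear before the -vmargs section of the file.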

Step 5

  • In Anypoint Studio, navigate to the Run section at the top and select Run Configurations.
  • Go to the JRE section and select the Runtime JRE: Project JRE (jdk-17.0.11-9-hotspot).
  • Go to Preferences, select Tooling, and set the Java VM for Studio Services to the Project JRE (jdk-17.0.11-9-hotspot).

 

So, our setup is complete after following all the above steps, and you can deploy your MuleSoft application on Java 17!

Conclusion

Upgrading to Java 17 is essential for enhancing the security, performance, and stability of your MuleSoft APIs. As a Long-Term Support (LTS) release, Java 17 provides extended support, modern features, and critical security updates, ensuring your applications stay robust and efficient. By installing Java 17 and configuring Anypoint Studio accordingly, you position your MuleSoft integrations for improved performance.

From Code to Cloud: AWS Lambda CI/CD with GitHub Actions

Introduction:

Integrating GitHub Actions for Continuous Integration and Continuous Deployment (CI/CD) in AWS Lambda deployments is a modern approach to automating the software development lifecycle. GitHub Actions provides a platform for automating workflows directly from your GitHub repository, making it a powerful tool for managing AWS Lambda functions.

Understanding GitHub Actions CI/CD Using Lambda

Integrating GitHub Actions for CI/CD with AWS Lambda streamlines the deployment process, enhances code quality, and reduces the time from development to production. By automating the testing and deployment of Lambda functions, teams can focus on building features and improving the application rather than managing infrastructure and deployment logistics. This integration is essential to modern DevOps practices, promoting agility and efficiency in software development.

Prerequisites:

  • A GitHub account and a repository for your code
  • An AWS account
  • AWS IAM credentials with permission to update the Lambda function

DEMO:

First, we will create a folder structure like below & open it in Visual Studio.


After this, open AWS Lambda and create a function using Python with the default settings. Once created, we will see the default Python script. Ensure that the file name in AWS Lambda matches the one we created under the src folder.


Now, we will create a GitHub repository with the same name as our folder, LearnLambdaCICD. Once created, it will prompt us to configure the repository. We will follow the steps mentioned in the GitHub Repository section to initialize and sync the repository.


Next, create a folder named .github/workflows under the main folder. Inside the workflows folder, create a file named deploy_cicd.yaml with a script along the lines of the sketch below.

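The original post showed the workflow as a screenshot. A representative version is sketched below; the function name, region, and source path are assumptions based on this demo, so adjust them to your setup:

name: Deploy Lambda

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-south-1

      - name: Package and deploy the function
        run: |
          cd src
          zip -r ../lambda.zip .
          aws lambda update-function-code \
            --function-name LearnLambdaCICD \
            --zip-file fileb://../lambda.zip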

As per this YAML, we need to set up the AWS_DEFAULT_REGION according to the region we are using. In our case, we are using ap-south-1. We will also need the ARN number from the AWS Lambda page, and we will use that same value in our YAML file.

We then need to configure the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. To do this, navigate to the AWS IAM role and create a new access key.

Once created, we will use the same access key and secret access key in our YAML file. Next, we will map these access keys in our GitHub repository by navigating to Settings > Secrets and variables > Actions and configuring the keys.

Updates:

We will update the default code in the lambda_function.py file in Visual Studio so that, once the pipeline builds successfully, we can see the change reflected in AWS Lambda. The original post showed the modified file as a screenshot; a representative edit follows.

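A minimal sketch, assuming the default Python handler template:

import json

def lambda_handler(event, context):
    # Changed message so the deployment is visible in the Lambda console
    return {
        "statusCode": 200,
        "body": json.dumps("Hello from Lambda, deployed via GitHub Actions!")
    }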

Our next step will be to push the code to the Git repository using the following commands:

  • git add .
  • git commit -m "Last commit"
  • git push

Once the push is successful, navigate to GitHub Actions from your repository. You will see the pipeline deploying and eventually completing, as shown below. We can further examine the deployment process by expanding the deploy section. This will allow us to observe the steps that occurred during the deployment.


Now, when we navigate to AWS Lambda to check the code, we can see that the changes we deployed have been applied.


We can also see the directory changes in the left pane of AWS Lambda.

Conclusion:

As we can see, integrating GitHub Actions for CI/CD with AWS Lambda automates and streamlines the deployment process, allowing developers to focus on building features rather than managing deployments. This integration enhances efficiency and reliability, ensuring rapid and consistent updates to serverless applications. By leveraging GitHub’s powerful workflows and AWS Lambda’s scalability, teams can effectively implement modern DevOps practices, resulting in faster and more agile software delivery.

Building GitLab CI/CD Pipelines with AWS Integration

GitLab CI/CD (Continuous Integration/Continuous Deployment) is a powerful, integrated toolset within GitLab that automates the software development lifecycle (SDLC). It simplifies the process of building, testing, and deploying code, enabling teams to deliver high-quality software faster and more efficiently.

Understanding GitLab CI/CD

Getting started with GitLab CI/CD is simple. Start by creating a GitLab account and setting up a project for your application if you don’t already have one. Then install and configure a GitLab Runner, the tool responsible for executing the tasks defined in your .gitlab-ci.yml file. The runner handles building, testing, and deploying your code, ensuring the pipeline works as intended. This setup streamlines your development process and helps automate workflows efficiently.

What is a GitLab Pipeline?

A pipeline automates the process of building, testing, and deploying applications. CI (Continuous Integration) means regularly merging code changes into a shared repository. CD (Continuous Deployment/Delivery) automates releasing the application to its target environment.


Code: In this step, you push your local code changes to the remote repository and commit any updates or modifications.

CI Pipeline: Once your code changes are committed and merged, you can run the build and test jobs defined in your pipeline. After completing these jobs, the code is ready to be deployed to staging and production environments.

Important Terms in GitLab CI/CD

1. .gitlab-ci.yaml file

A .gitlab-ci.yml file in a GitLab repository is used to define the Continuous Integration/Continuous Deployment (CI/CD) pipeline configuration. This file contains instructions on building, testing, and deploying your project.

2. Gitlab-Runner

In GitLab CI/CD, a “runner” refers to the agent that executes the jobs defined in the .gitlab-ci.yml pipeline configuration. Runners can be either shared or specific to the project.

Here’s how runners work:

  1. Shared Runners: GitLab provides shared runners available to all projects within a GitLab instance. These runners are managed by GitLab administrators and can be used by any project. Shared runners are convenient if we don’t want to set up and manage our own runners.
  2. Specific Runners: We can also set up our own runners that are dedicated to our project. These runners can be deployed on our infrastructure (e.g., on-premises servers, cloud instances) or using a variety of methods like Docker, Kubernetes, shell, or Docker Machine. Specific runners offer more control over the execution environment and can be customized to meet the specific needs of our project.

3. Pipeline:

Pipelines are made up of jobs and stages:

  • Jobs define what you want to do. For example, test code changes, or deploy to a dev environment.
  • Jobs are grouped into stages. Each stage contains at least one job. Common stages include build, test, and deploy.
  • You can run the pipeline either automatically on commit or from a pipeline schedule job.

The first way is automatic: when you commit or merge changes into the code, the pipeline is triggered directly.

The second way uses rules and requires you to create a scheduled job.

 


 

 4. Schedule Job:

We use scheduled jobs to automate pipeline execution. To create a scheduled job, follow these steps:

  1. Navigate to Schedule Settings: Go to Build, select Pipeline Schedules, and click Create New Schedule.
  2. Configure Schedule Details:
    1. Description: Enter a name for the scheduled job.
    2. Cron Timezone: Set the timezone according to your requirements.
    3. Interval Pattern: Define the cron schedule to determine when the pipeline should run. If you prefer to run it manually by clicking the play button when needed, uncheck the Activate button at the end.
    4. Target Branch: Specify the branch where the cron job will run.
  3. Add Variables: Include any variables mentioned in the rules section of your .gitlab-ci.yml file to ensure the pipeline runs correctly.
    1. Input variable key = SCHEDULE_TASK_NAME
    2. Input variable value = prft-deployment


Demo

Prerequisites for GitLab CI/CD 

  • GitLab Account and Project: You need an active GitLab account and a project repository to store your source code and set up CI/CD workflows.
  • Server Environment: You should have access to a server environment, such as an AWS EC2 instance, where you can install the GitLab Runner.
  • Version Control: Using a version control system like Git is essential for managing your source code effectively. With Git and a GitLab repository, you can easily track changes, collaborate with your team, and revert to previous versions whenever necessary.

Configure Gitlab-Runner

  • Launch an AWS EC2 instance with any operating system of your choice. Here, I used Ubuntu. Configure the instance with basic settings according to your requirements.
  • SSH into the EC2 instance and follow the steps below to install GitLab Runner on Ubuntu.
  1. sudo apt install -y curl
  2. curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
  3. sudo apt install gitlab-runner

After installing GitLab Runner, proceed to register it. Navigate to GitLab, go to Settings, then CI/CD, and under Runners, click on the three dots to access the registration options.

Then copy and paste the registration command shown there; a representative version is sketched below.

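sudo gitlab-runner register --url https://gitlab.com/ --token <YOUR_RUNNER_TOKEN>

The URL and token values here are placeholders; use the ones GitLab displays for your project.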

Run the following command on your EC2 instance and provide the necessary details for configuring the runner based on your requirements:

  1. URL: Press enter to keep it as the default.
  2. Token: Use the default token and press enter.
  3. Description: Add a brief description for the runner.
  4. Tags: This is critical; the tag names define your GitLab Runner and are referenced in your .gitlab-ci.yml file.
  5. Notes: Add any additional notes if required.
  6. Executor: Choose shell as the executor.


Check GitLab-runner status and active status using the below cmd:

  • gitlab-runner verify
  • gitlab-runner list


Verify that the runner is also shown as active in GitLab:

Navigate to GitLab, then go to Settings and select GitLab Runners.

 


Configure the .gitlab-ci.yml File

The configuration defines two stages, build and deploy:

stages:
  - build
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the code..."
    - echo "Compile complete."
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
  tags:
    - prft-test-runner

deploy-job:
  # Executes only after all jobs in the build stage (and a test stage, if added) complete successfully
  stage: deploy
  script:
    - echo "Deploying application..."
    - echo "Application successfully deployed."
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $SCHEDULE_TASK_NAME == "prft-deployment"'
  tags:
    - prft-test-runner

Note: If needed, you can add a test job similar to the BUILD and DEPLOY jobs.

Run Pipeline

Since the cron job is already configured in the schedule, simply click the Play button to trigger your pipeline.


To check the pipeline status, go to Build and then Pipelines. Once the Build job has completed successfully, the Test job (if you added one) will start, and once it completes, the Deploy job will start.

Output

We successfully completed the BUILD & DEPLOY jobs.

Conclusion

As we can see, the BUILD & DEPLOY jobs pipeline has successfully passed.

We’ve provided a brief overview of GitLab CI/CD pipelines and a practical demonstration of how its components work together. Hopefully, everything is running smoothly on your end!

 

CCaaS Migration Best Practices: Tips for moving your customer care platform to the cloud

Migrating to a cloud-delivered Contact Center as a Service (CCaaS) solution can revolutionize how your organization delivers customer service. However, this transition requires careful planning and execution to avoid disruptions. Assuming you have selected a CCaaS platform that aligns with your organizational needs, the following best practices outline key considerations for a seamless migration.

A successful migration to CCaaS not only enhances operational efficiency and scalability but also ensures a significant improvement in service delivery, directly impacting customer satisfaction and retention. Organizations should consider the risks of not embracing modern cloud-based customer care solutions, which can include diminished customer service capabilities and potential costs due to outdated or inflexible systems. Moreover, organizations that delay this shift risk falling behind competitors who can adapt more quickly to market demands and customer needs. Thus, embarking on a well-planned migration journey is imperative for companies aiming to optimize their customer care operations and secure a competitive advantage in their respective markets.

 

  1. Physical Infrastructure Migration

Understanding your current environment is critical for a successful transition. Start with a thorough site review to document the infrastructure and identify unique user requirements. Engage with call center managers, team leaders, and power users to uncover specific needs and configured features such as whisper settings, omnichannel components, call management, etc.

Factors such as bandwidth and latency are paramount for seamless operations. Evaluate your facility’s connectivity for both on-site and remote users, ensuring it aligns with the CCaaS product requirements. Fortunately, modern CCaaS solutions such as Amazon Connect, Twilio Flex and Five9 supply agent connectivity tools to verify that workers have sufficient resources to provide good customer service over various channels.

Additionally, document call treatments and station-specific configurations like call coverage paths. Legacy components requiring continued functionality should be cataloged to prepare for integration.

 

  2. Change Management Planning

Change management is essential to mitigate risks and maximize adoption. A staged cutover strategy is recommended over a single-event migration, allowing for gradual testing and adjustments.

Develop a robust testing strategy to validate the platform’s performance under real-world conditions. Complement this with an organizational enablement strategy to train users and ensure they are comfortable with the new system. Adoption by your business units and users is one of the most critical factors which will determine the success of your CCaaS migration.

 

  3. Operational Considerations

Operational continuity is vital during migration. Start by understanding the reporting requirements for business managers to ensure no loss of visibility into critical metrics. Additionally, review monitoring processes to maintain visibility into system performance post-migration.

 

  4. Integration Planning

Integrating legacy infrastructure with the new CCaaS platform can present significant challenges. Document existing components, including FXO/FXS interfaces, Workforce Management solutions, FAX systems, wallboards, and specialty dialers. Verify that integrations comply with any regulatory requirements, such as HIPAA or FINRA.

Interactive Voice Response (IVR) systems often require specific integrations with local data sources or enterprise middleware. Assess these integrations to ensure call flows function as intended. For specialized applications, verify that they meet operational needs within the new environment.

 

  5. Fault Tolerance and Disaster Recovery

Testing fault tolerance and disaster recovery capabilities is a critical step in any CCaaS migration. Develop and execute a failsafe testing plan to ensure resilience against both premise-level and carrier-level failures. It is important to align with your IT organization’s standards for recovery time objective (RTO) and business uptime expectations. Disaster recovery plans must reflect these measures and be tested to protect against potential downtime.

 

  6. Scalability and Compliance

CCaaS solutions must scale with your business. Validate scalability by conducting load tests and documenting performance metrics. Compliance is equally important—ensure your migration adheres to industry standards like HIPAA, FedRAMP, or FINRA through thorough compliance testing and documentation.

 

Conclusion

A successful CCaaS migration hinges on meticulous planning, comprehensive testing, and strong change management. By following these best practices, you can minimize risks, ensure operational continuity, and set your organization up for long-term success with its new contact center platform. The result? An enhanced customer experience and a contact center infrastructure that grows with your business.

 

 

Legacy Systems Explained: Why Upgrading Them is Crucial for Your Business

What are Legacy Systems? Why is Upgrading those Systems Required?

Upgrading means more than just making practical improvements to keep things running smoothly. It addresses immediate needs rather than chasing a perfect but impractical solution. The situation could spiral out of control if things don’t function properly in real-time.

One such incident happened on January 4, 2024, when South Africa’s Department of Home Affairs was taken offline nationwide due to a mainframe failure. In simple terms, mainframe failures in such contexts are high-stakes issues because they impact the core infrastructure that supports vital public services. In South Africa, where the Department of Home Affairs handles a range of essential services such as passports, identity documents, and immigration control, a mainframe failure can have widespread repercussions, leading to backlogs, delays, and potential administrative chaos. The department is a clear example of a critical legacy system facing significant risks due to its outdated technology and operational challenges.

Addressing these issues through modernization and digital transformation is crucial for improving service delivery and ensuring the system’s continued effectiveness and security. One cannot work on migrating the legacy system in one go, as the business and functional side of testing is a must. A planned and systematic approach is needed while upgrading the legacy system.

 

Question: What is the solution to avoid such a case?
Answer: Modernization of Legacy code.

Legacy code modernization is improving and updating outdated software systems to make them more maintainable, scalable, and compatible with modern technologies. Let’s understand this using Apigee (an API Management tool).

1. Scalability

Legacy system: Legacy systems were designed to solve the tasks they were built for, but they offered little scalability; capacity was constrained by the underlying infrastructure, limiting business growth.
Apigee: Due to its easy scalability, centralized monitoring, and integration capabilities, Apigee helps organizations plan their approach to business improvements.

2. Security

Legacy system: One of the simplest methods for authenticating users in legacy systems was “Basic Authentication,” where the client sends a username and password in every HTTP request. This method is vulnerable to man-in-the-middle (MITM) attacks if not combined with HTTPS, and credentials are exposed on each request.

Apigee: Using Apigee, the organization can quickly implement modern security features like OAuth, API key validation, rate limiting, and threat protection (e.g., bot detection) without changing the core logic of the APIs, as sketched below.
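
For instance, a minimal sketch of an API key check as an Apigee policy (the policy name and query parameter are illustrative) shows how the security concern moves out of the API's own code:

<VerifyAPIKey name="Verify-API-Key">
    <!-- Rejects any request that does not present a valid key in the apikey query parameter -->
    <APIKey ref="request.queryparam.apikey"/>
</VerifyAPIKey>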

3. User and Developer Experience

Legacy system: The legacy API lacks good documentation, making it harder for external developers to integrate with it. Most systems tend to have a SOAP-based communication format.
Apigee: Apigee provides a built-in API portal, automatic API documentation, and testing tools, improving the overall developer experience and adoption of the APIs so that integration with other tools can be easy and seamless with modern standards.


There are now multiple ways to migrate data from legacy to modern systems, which are listed below.

1. Big Bang Migration
2. Phased Migration
3. Parallel Migration
4. Pilot Migration
5. Hybrid Migration
and more…

Although these things are known to the legacy system owners, they are very selective and picky when finalizing a migration plan. They are only aware of the short-term goal, i.e., to get the code up and running in production. Because when we are speaking of legacy systems, all there is left is code and a sigh of relief that it is still up and running.  For most systems, there is no documentation, code history, revisions, etc., and that’s why it could fail on a large scale if something goes wrong.

I have found some points that need to be ensured before finalizing the process of migrating from legacy systems to modern systems.

1. Research and Analysis

We need to understand the motives behind the development of the Legacy system since there is no or insufficient documentation.

In the study, we can plan to gather historical data to understand the system’s behavior. We need to dig deeper to find something that could help us understand the system better.

2. Team Management

After studying the system, we can estimate the team size and resource management. Such systems are way older when it comes to the tech on which they are running. So, it is hard to gather resources with such outdated skills. In that case, management can cross-skill existing resources into such technologies.

I believe adding the respective numbers of junior engineers would be best, as they would get exposure to challenges, which can help them improve their skills.

3. Tool to Capture Raw Logs

Analyzing the raw logs can tell us more about the system, because they record the communication performed for each task the system handles. By breaking the data down into plain language, identifying from timestamps when request volume peaks, and examining what the request parameters contain, we can characterize the system’s behavior and plan properly.

4. Presentation of the Logs

Sometimes we need to present the case study to senior management before proceeding with the plan. To simplify the presentation, tools like Datadog and Splunk can render the data in tabular or graphical form so that other team members can understand it.

5. Replicate the Architecture with Proper Functionality

This is the most important part. End-to-end development is the only path to a smooth migration. We need to uphold standards here: preserving core functionality, managing risk, communicating data-pattern changes to associated clients, and safeguarding user access and business processes. The research from point 1 helps us understand the system's behavior and decide which modern technology the migration should land on.

From there, we can plan and implement using one of the migration methods listed earlier in this post.

6. End-to-end Testing

Once the legacy system has been replicated on modern technology, we need a User Acceptance Testing (UAT) environment in which to perform system testing. This can be challenging if the legacy system never had a testing environment; we may need to stand up mock backend URLs to imitate the behavior of its services.

7. Do Pre-production Testing Properly Before Moving to Production

Only after successful UAT can we vouch for the functionality and consider moving changes to production. Even then, a few things must be verified, such as adherence to standards and up-to-date documentation. On the standards side, we need to confirm that no known risk could cause service failures on the modern platform and that all components are properly compatible.

On the documentation side, we need to ensure that all service flows are properly documented and that testing traces back to the gathered requirements.

Legacy systems and their inner workings are among the most complex and time-consuming topics in IT, but investing the effort up front is what makes the job manageable.

Unit Testing in Android Apps: A Deep Dive into MVVM https://blogs.perficient.com/2024/11/26/unit-testing-in-android-apps-a-deep-dive-into-mvvm/ https://blogs.perficient.com/2024/11/26/unit-testing-in-android-apps-a-deep-dive-into-mvvm/#respond Tue, 26 Nov 2024 19:56:40 +0000 https://blogs.perficient.com/?p=372567

Understanding Unit Testing

Unit testing is a crucial aspect of software development, especially in complex applications like Android apps. It involves testing individual units of code, such as methods or classes, in isolation. This ensures the correctness of each component, leading to a more robust and reliable application.

Why Unit Testing in MVVM?

The Model-View-ViewModel (MVVM) architectural pattern is widely adopted in Android app development. It separates the application into three distinct layers:

  • Model: Handles data logic and interacts with data sources.
  • View: Responsible for the UI and user interactions.
  • ViewModel: Acts as a bridge between the View and Model, providing data and handling UI logic.

Unit testing each layer in an MVVM architecture offers numerous benefits:

  • Early Bug Detection: Identify and fix issues before they propagate to other parts of the app.
  • Improved Code Quality: Write cleaner, more concise, and maintainable code.
  • Accelerated Development: Refactor code and add new features with confidence.
  • Enhanced Collaboration: Maintain consistent code quality across the team.

Setting Up the Environment

  1. Android Studio: Ensure you have the latest version installed.
  2. Testing Framework: Add the necessary testing framework to your app/build.gradle file:

    testImplementation 'junit:junit:4.13.2'
    androidTestImplementation 'androidx.test.ext:junit:1.1.5'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.5.1'
  3. Testing Library: Consider using a testing library like Mockito or MockK to create mock objects for testing dependencies.
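
If you go with MockK and coroutine- or Flow-based ViewModels, the following additional test dependencies are a reasonable starting point (the artifact versions shown are illustrative; check for the current releases):

    testImplementation 'io.mockk:mockk:1.13.5'
    testImplementation 'org.jetbrains.kotlinx:kotlinx-coroutines-test:1.7.3'
    testImplementation 'androidx.arch.core:core-testing:2.2.0' // InstantTaskExecutorRule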

Unit Testing ViewModels

  1. Create a Test Class: Create a separate test class for each ViewModel you want to test.
  2. Set Up Test Dependencies: Use dependency injection frameworks like Dagger Hilt or Koin to inject dependencies into your ViewModel. For testing, use mock objects to simulate the behavior of these dependencies.
  3. Write Test Cases: Write comprehensive test cases covering various scenarios:
  • Input Validation: Test how the ViewModel handles invalid input.
  • Data Transformation: Test how the ViewModel transforms data from the Model.
  • UI Updates: Test how the ViewModel updates the UI through LiveData or StateFlow.
  • Error Handling: Test how the ViewModel handles errors and exceptions.

Example:

class MyViewModelTest {

    // Executes LiveData updates synchronously on the test thread
    // (InstantTaskExecutorRule comes from androidx.arch.core:core-testing).
    @get:Rule
    val instantExecutorRule = InstantTaskExecutorRule()

    private val mockRepository = mockk<MyRepository>()

    @Test
    fun `should update the UI when data is fetched successfully`() {
        // Arrange: stub the repository with known data
        val expectedData = listOf("news-1", "news-2")
        every { mockRepository.fetchData() } returns expectedData
        val viewModel = MyViewModel(mockRepository)

        // Act
        viewModel.fetchData()

        // Assert on the latest state rather than inside an observer callback,
        // which might never fire and would silently hide failed assertions.
        val uiState = viewModel.uiState.value!!
        assertThat(uiState.isLoading).isFalse()
        assertThat(uiState.error).isNull()
        assertThat(uiState.data).isEqualTo(expectedData)
    }
}

Unit Testing Repositories

  1. Create Test Classes: Create separate test classes for each Repository class.
  2. Set Up Test Dependencies: Use dependency injection to inject dependencies into your Repository. For testing, use mock objects to simulate the behavior of data sources like databases or network APIs.
  3. Write Test Cases: Write test cases to cover:
  • Data Fetching: Test how the Repository fetches data from remote or local sources.
  • Data Storage: Test how the Repository stores and retrieves data.
  • Data Manipulation: Test how the Repository processes and transforms data.
  • Error Handling: Test how the Repository handles errors and exceptions.

Example:

class MyRepositoryTest {

    @Test
    fun `should fetch data from remote source successfully`() {
        // Arrange: mock the remote API dependency (Mockito)
        val mockApi = mock(MyApi::class.java)
        val expectedData = listOf("news-1", "news-2")
        `when`(mockApi.fetchData()).thenReturn(expectedData)
        val repository = MyRepository(mockApi)

        // Act
        val result = repository.fetchData()

        // Assert: the repository delegates to the API and returns its data
        verify(mockApi).fetchData()
        assertThat(result).isEqualTo(expectedData)
    }
}

Implementing SonarQube

SonarQube is a powerful tool for code quality and security analysis. Here’s a detailed guide on how to integrate SonarQube with your Android project:

  1. Set Up SonarQube Server:
  • Install SonarQube Server: Download and install the SonarQube server on your machine or a server.
  • Configure SonarQube: Configure the server with database settings, user authentication, and other necessary parameters.
  • Start SonarQube Server: Start the SonarQube server.
  2. Configure SonarQube Scanner:
  • Install SonarQube Scanner: Download and install the SonarQube Scanner.
  • Configure Scanner Properties: Create a sonar-project.properties file in your project’s root directory and configure the following properties:

    sonar.host.url=http://localhost:9000
    sonar.login=your_sonar_login
    sonar.password=your_sonar_password
    sonar.projectKey=my-android-project
    sonar.projectName=My Android Project
    sonar.sources=src/main/java
    sonar.java.binaries=build/intermediates/javac/release/classes
  3. Integrate SonarQube with Your Build Process:
  • Gradle: Add the SonarQube Gradle plugin to your build.gradle file:

    plugins {
        id 'org.sonarqube' version '3.3'
    }

    Configure the plugin with your SonarQube server URL and authentication token.

  • Maven: Add the SonarQube Maven plugin to your pom.xml file. Configure the plugin with your SonarQube server URL and authentication token.
  4. Run SonarQube Analysis:
  • Execute the SonarQube analysis using the SonarQube Scanner; run it manually or integrate it into your CI/CD pipeline (see the command sketch after this list).
  5. Analyze the Results:
  • Once the analysis is complete, you can view the results on the SonarQube dashboard. The dashboard provides insights into code quality, security vulnerabilities, and potential improvements.
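
For step 4, here is a minimal invocation sketch. It assumes the SonarScanner CLI is on your PATH (or that you applied the Gradle plugin) and that the properties file above is in place; adjust names to your project:

    # Run from the project root; the scanner picks up sonar-project.properties
    sonar-scanner

    # Or, if you integrated via the Gradle plugin (task name in the 3.x series):
    ./gradlew sonarqube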

Implementing Test Coverage with Bitrise

Test coverage measures the percentage of your code that is covered by tests. It’s a crucial metric to assess the quality of your test suite. Here’s how to measure test coverage with Bitrise:

  1. Configure Code Coverage Tool: Add a code coverage tool like JaCoCo to your project. Configure it to generate coverage reports in a suitable format (e.g., XML); a command-line sketch follows this list.
  2. Add Code Coverage Step to Bitrise Workflow: Add a step to your Bitrise Workflow to generate the code coverage report. This step should execute your tests and generate the report.
  3. Upload Coverage Report to SonarQube: Add a step to upload the generated code coverage report to SonarQube. This will allow SonarQube to analyze the report and display the coverage metrics.
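
As a rough sketch of steps 1 and 3 above, the commands below assume a module with the JaCoCo plugin applied and a jacocoTestReport task configured to emit XML; the task name and report path are illustrative and vary by project setup:

    # Run unit tests and generate the JaCoCo coverage report
    ./gradlew testDebugUnitTest jacocoTestReport

    # Then point SonarQube at the XML report, e.g. in sonar-project.properties:
    # sonar.coverage.jacoco.xmlReportPaths=build/reports/jacoco/jacocoTestReport/jacocoTestReport.xml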

Best Practices for Unit Testing

  • Write Clear and Concise Tests: Use descriptive names for test methods and variables.
  • Test Edge Cases: Consider testing scenarios with invalid input, empty data, or network errors.
  • Use a Testing Framework: A testing framework like JUnit provides a structured way to write and run tests.
  • Leverage Mocking: Use mocking frameworks like Mockito or MockK to isolate units of code and control their behavior.
  • Automate Testing: Integrate unit tests into your CI/CD pipeline to ensure code quality.
  • Review and Refactor Tests: Regularly review and refactor your tests to keep them up-to-date and maintainable.

By following these guidelines and incorporating unit testing into your development process, you can significantly improve the quality and reliability of your Android apps.

A Comprehensive Guide to IDMC Metadata Extraction in Table Format https://blogs.perficient.com/2024/11/16/a-comprehensive-guide-to-idmc-metadata-extraction-in-table-format/ https://blogs.perficient.com/2024/11/16/a-comprehensive-guide-to-idmc-metadata-extraction-in-table-format/#respond Sun, 17 Nov 2024 00:00:27 +0000 https://blogs.perficient.com/?p=372086

Metadata Extraction: IDMC vs. PowerCenter

When we talk about metadata extraction, IDMC (Intelligent Data Management Cloud) can be trickier than PowerCenter. Let’s see why.
In PowerCenter, all metadata is stored in a local database. This setup lets us use SQL queries to get data quickly and easily. It’s simple and efficient.
In contrast, IDMC relies on the IICS Cloud Repository for metadata storage. This means we have to use APIs to get the data we need. While this method works well, it can be more complicated. The data comes back in JSON format. JSON is flexible, but it can be hard to read at first glance.
To make it easier to understand, we convert the JSON data into a table format. We use a tool called jq to help with this. jq allows us to change JSON data into CSV or table formats. This makes the data clearer and easier to analyze.

In this section, we will explore jq. jq is a command-line tool that helps you work with JSON data easily. It lets you parse, filter, and change JSON in a simple and clear way. With jq, you can quickly access specific parts of a JSON file, making it easier to work with large datasets. This tool is particularly useful for developers and data analysts who need to process JSON data from APIs or other sources, as it simplifies complex data structures into manageable formats.
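
As a quick illustration of the jq pattern used throughout this post, the one-liner below converts a tiny, made-up JSON array (not real IICS output) into a CSV row:

echo '[{"assetName":"tf_orders","startTime":"2024-11-01T10:00:00Z","endTime":"2024-11-01T10:05:00Z"}]' | jq -r '.[] | [.assetName, .startTime, .endTime] | @csv'
# Output: "tf_orders","2024-11-01T10:00:00Z","2024-11-01T10:05:00Z"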

For instance, if the requirement is to gather Succeeded Taskflow details, this involves two main processes. First, you’ll run the IICS APIs to gather the necessary data. Once you have that data, the next step is to execute a jq query to pull out the specific results. Let’s explore two methods in detail.

Extracting Metadata via Postman and jq:-

Step 1:
To begin, utilize the IICS APIs to extract the necessary data from the cloud repository. After successfully retrieving the data, ensure that you save the file in JSON format, which is ideal for structured data representation.
[Screenshot: Postman output for the IICS API call]

[Screenshot: saving the response as a JSON file]

Step 2:
Construct a jq query to extract the specific details from the JSON file. This will allow you to filter and manipulate the data effectively.

Windows:-
(echo Taskflow_Name,Start_Time,End_Time & jq -r ".[] | [.assetName, .startTime, .endTime] | @csv" C:\Users\christon.rameshjason\Documents\Reference_Documents\POC.json) > C:\Users\christon.rameshjason\Documents\Reference_Documents\Final_results.csv

Linux:-
jq -r '["Taskflow_Name","Start_Time","End_Time"],(.[] | [.assetName, .startTime, .endTime]) | @csv' /opt/informatica/test/POC.json > /opt/informatica/test/Final_results.csv

Step 3:
To proceed, run the jq query in the Command Prompt or Terminal. Upon successful execution, the results will be saved in CSV file format, providing a structured way to analyze the data.

[Screenshot: executing the jq query at the command prompt]

[Screenshot: the generated CSV file]

Extracting Metadata via Command Prompt and jq:-

Step 1:
Formulate a cURL command that utilizes IICS APIs to access metadata from the IICS Cloud repository. This command will allow you to access essential information stored in the cloud.

Windows and Linux:-
curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json"

Step 2:
Develop a jq query along with cURL to extract the required details from the JSON file. This query will help you isolate the specific data points necessary for your project.

Windows:
(echo Taskflow_Name,Start_Time,End_Time & curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json" | jq -r ".[] | [.assetName, .startTime, .endTime] | @csv") > C:\Users\christon.rameshjason\Documents\Reference_Documents\Final_results.csv

Linux:
curl -s -L -X GET -u USER_NAME:PASSWORD "https://<BASE_URL>/active-bpel/services/tf/status?runStatus=Success" -H "Accept: application/json" | jq -r '["Taskflow_Name","Start_Time","End_Time"],(.[] | [.assetName, .startTime, .endTime]) | @csv' > /opt/informatica/test/Final_results.csv

Step 3:
Launch the Command Prompt and run the cURL command that includes the jq query. Upon running the query, the results will be saved in CSV format, which is widely used for data handling and can be easily imported into various applications for analysis.

[Screenshot: running the cURL command with jq at the command prompt]

Conclusion
To wrap up, the methods outlined for extracting workflow metadata from IDMC are designed to streamline your workflow, minimizing manual tasks and maximizing productivity. By automating these processes, you can dedicate more energy to strategic analysis rather than tedious data collection. If you need further details about IDMC APIs or jq queries, feel free to drop a comment below!

Reference Links:-

IICS Data Integration REST API – Monitoring taskflow status with the status resource API

jq Download Link – Jq_Download

A Step-by-Step Guide to Extracting Workflow Details for PC-IDMC Migration Without a PC Database https://blogs.perficient.com/2024/11/08/a-step-by-step-guide-to-extracting-workflow-details-for-pc-idmc-migration-without-a-pc-database/ https://blogs.perficient.com/2024/11/08/a-step-by-step-guide-to-extracting-workflow-details-for-pc-idmc-migration-without-a-pc-database/#respond Fri, 08 Nov 2024 06:29:05 +0000 https://blogs.perficient.com/?p=371403

In the PC-IDMC conversion process, it can be challenging to gather detailed information about workflows. Specifically, we often need to determine:

  • The number of transformations used in each mapping.
  • The number of sessions utilized within the workflow.
  • Whether any parameters or variables are being employed in the mappings.
  • The count of reusable versus non-reusable sessions used in the workflow, and so on.

To obtain these details, we currently have to open each workflow individually, which is time-consuming. Alternatively, we could use complex queries to extract this information from the PowerCenter metadata in the database tables.

This section focuses on XQuery, a versatile language designed for querying and extracting information from XML files. When workflows are exported from the PowerCenter repository or Workflow Manager, the data is generated in XML format. By employing XQuery, we can effectively retrieve the specific details and data associated with the workflow from this XML file.

Step-by-Step Guide to Extracting Workflow Details Using XQuery:-

For instance, if the requirement is to retrieve all reusable and non-reusable sessions for a particular workflow or a set of workflows, we can utilize XQuery to extract this data efficiently.

Step 1:
Begin by exporting the workflows from either the PowerCenter Repository Manager or the Workflow Manager. You have the option to export multiple workflows together as one XML file, or you can export a single workflow and save it as an individual XML file.

[Screenshot: exported PowerCenter workflow XML files]

Step 2:-
Develop the XQuery based on our specific requirements. In this case, we need to fetch all the reusable and non-reusable sessions from the workflows.

let $header := "Folder_Name,Workflow_Name,Session_Name,Mapping_Name"
let $dt := (let $data := 
    ((for $f in POWERMART/REPOSITORY/FOLDER
    let $fn:= data($f/@NAME)
    return
        for $w in $f/WORKFLOW
        let $wn:= data($w/@NAME)
        return
            for $s in $w/SESSION
            let $sn:= data($s/@NAME)
            let $mn:= data($s/@MAPPINGNAME)
            return
                <Names>
                    {
                        $fn ,
                        "," ,
                        $wn ,
                        "," ,
                        $sn ,
                        "," ,
                        $mn
                    }
                </Names>)
    |           
    (for $f in POWERMART/REPOSITORY/FOLDER
    let $fn:= data($f/@NAME)
    return          
        for $s in $f/SESSION
        let $sn:= data($s/@NAME)
        let $mn:= data($s/@MAPPINGNAME)
        return
            for $w in $f/WORKFLOW
            let $wn:= data($w/@NAME)
            let $wtn:= data($w/TASKINSTANCE/@TASKNAME)
            where $sn = $wtn
            return
                <Names>
                    {
                        $fn ,
                        "," ,
                        $wn ,
                        "," ,
                        $sn ,
                        "," ,
                        $mn
                    }
                </Names>))
       for $test in $data
          return
            replace($test/text()," ",""))
      return
 string-join(($header,$dt), "
")

Step 3:
Select a third-party tool to execute the XQuery, or opt for an online tool if preferred. For example, you can use BaseX, Altova XMLSpy, and others. In this instance, we are using BaseX, an open-source tool.

Create a database in BaseX to run the XQuery.

[Screenshot: creating a BaseX database]

Step 4: Enter the created XQuery into the third-party tool or online tool to run it and retrieve the results.
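
If you prefer the command line to the BaseX GUI, the standalone basex runner can evaluate the query directly; the file names here are illustrative (-i binds the exported XML as the query's context):

basex -i exported_workflows.xml get_sessions.xq > sessions.csv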

[Screenshot: executing the XQuery in BaseX]

Step 5:
Export the results in the required file format.

[Screenshot: exporting the output]

Conclusion:
These simple techniques allow you to extract workflow details effectively, aiding both planning and the early identification of workflows that will require complex manual conversion. Many queries exist to fetch different kinds of data. If you need more XQueries, just leave a comment below!

3 Key Insurance Takeaways From InsureTech Connect 2024 https://blogs.perficient.com/2024/10/29/3-key-insurance-takeaways-from-insuretech-connect-2024/ https://blogs.perficient.com/2024/10/29/3-key-insurance-takeaways-from-insuretech-connect-2024/#respond Tue, 29 Oct 2024 16:49:00 +0000 https://blogs.perficient.com/?p=371156

The 2024 InsureTech Connect (ITC) conference was truly exhilarating, with key takeaways impacting the insurance industry. Each year, it continues to improve, offering more relevant content, valuable industry connections, and opportunities to delve into emerging technologies.

This year’s event was no exception, showcasing the importance of personalization for the customer, tech-driven relationship management, and AI-driven underwriting processes. The industry is constantly evolving, and ITC shows the whole insurance industry aligning around the same purpose.

The Road Ahead: Transformative Trends

As I reflect on ITC and my experience there, the industry's progress is remarkable. Here are a few key takeaways that, from my perspective, will shape our industry roadmap:

1. Personalization at Scale

We’ve spoken for many years about the need to drive greater personalization across our industry’s interactions. We know that customers engage with companies that demonstrate authentic knowledge of their relationship. This year, we saw great examples of companies treating personalization not as an incremental initiative but as something embedded at key moments in the insurance experience, particularly underwriting and claims.

For example, New York Life highlighted how personalization is driving generational loyalty. We’ve been working with industry leading insurers to help drive personalization across the distribution network: carriers to agents and the final policyholder.

Success In Action: Our client wanted to integrate better contact center technology to improve internal processes and allow for personalized, proactive messaging to clients. We implemented Twilio Flex and leveraged its outbound notification capabilities to support customized messaging while also integrating their cloud-based outbound dialer and workforce management suite. The insurer now has optimized agent productivity and agent-customer communication, as well as newfound access to real-time application data across the entire contact center.

2. Holistic, Well-Connected Distribution Network

Insurance has always had a complex distribution network across platforms, partnerships, carriers, agents, producers, and more. Leveraging technology to manage these relationships opens opportunities to gain real-time insights and implement effective strategies, fostering holistic solutions and moving away from point solutions. Managing this complexity and maximizing the value of this network requires a good business and digital transformation strategy.

Our proprietary Envision process has been leading the way to help carriers navigate this complex system with proprietary strategy tools, historical industry data, and best practices.

3. Artificial Intelligence (AI) for Process Automation

Not surprisingly, AI permeated many of the presentations and demos across the sessions. AI offers insurers unique decisioning throughout the value chain to create differentiation. It was evident that while we often talk about AI as an overarching technology, the use cases shown were mostly point solutions across the insurance value chain. Moreover, AI is not here to replace humans but to assist them: by automating mundane process activities, mindshare and human capital can be redirected toward higher-value work and the critical problems that improve customer experience. And because these point solutions spring up across many disparate groups, organizational mandates must demand the safe and ethical use of AI models.

Our PACE framework provides a holistic approach to responsibly operationalize AI across an organization. It empowers organizations to unlock the benefits of AI while proactively addressing risks.

Our industry continues to evolve in delivering its noble purpose – to protect individuals’ and businesses’ property, liability, and financial obligations. Technology is certainly an enabler of this purpose, but transformation must be managed to be effective.

Perficient Is Driving Success and Innovation in Insurance

Want to know the now, new, and next of digital transformation in insurance? Contact us and let us help you meet the challenges of today and seize the opportunities of tomorrow in the insurance industry.
