Over The Air Updates for React Native Apps

Mobile app development is growing rapidly, and so are expectations for robust support. "Mobile first" is the established paradigm for many application development teams. Unlike a web deployment, an app release has to go through a review process via App Store Connect and Google Play. Minor and major releases follow the same app review process, which can take 1-4 days. Hot fixes and critical security patches are also bound by this review cycle, which can lead to service disruptions and negative app and customer reviews.

Let's say that the latest version of an app is 1.2, but a critical bug was identified in version 1.1. The app developers may release version 1.3, but it may take a while for the new version to reach users (unless a forced update mechanism is implemented in the app). Another challenge is that there is no guarantee the user has auto updates turned on.

Luckily, "Over The Air" updates come to the rescue in such situations.

The Over The Air (OTA) deployment process for mobile apps allows developers to push updates without going through the traditional review process, enabling faster delivery of any hot fix or patch.

While this is very exciting, it does come with a few limitations:

  • This feature is not intended for major updates or large feature launches.
  • OTA primarily works with JavaScript bundlers so native feature changes cannot be deployed via OTA deployment.

Mobile OTA Deployment

React Native consists of JavaScript and native code. When the app is compiled, it creates the JS bundles for the Android and iOS apps along with the native builds. OTA relies on these JavaScript bundles, which makes React Native apps great candidates for OTA update technology.

One of our clients' apps had an OTA deployment process implemented using App Center. However, Microsoft decided to retire App Center as of March 31, 2025, so we started exploring alternatives. One option on the table was the replacement path suggested by App Center itself, and the other was to find a similar PaaS solution from another provider. Since our back-end stack was on AWS, we chose to go with EAS Update.

EAS Update

EAS Update is a hosted service that serves updates for projects using the expo-updates library. Once EAS Update is configured correctly, the app will listen for updates targeted at its version on the EAS cloud server. Expo provides great documentation on setup and configuration.

How Does It Work?

In a nutshell:

  1. Integrate “EAS Updates” in the app project.
  2. The user has the app installed on their device.
  3. The development team makes a bug fix/patch, generates a JS bundle for the targeted app version, and uploads it to the Expo.dev cloud server.
  4. The next time the user opens the app (the check frequency is configurable; for example, on app resume or start), the app checks whether a new bundle is available. If an update is available, the newer version from Expo is installed on the user's device.
Over The Air Update process flow

OTA deployment process

Additional details can be found at https://docs.expo.dev/eas-update/how-it-works/.
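
For illustration, here is a minimal sketch of what that check-and-install flow can look like with the expo-updates JavaScript API (the timing, error handling, and UX around it are up to your app):

import * as Updates from 'expo-updates';

// Check for a published update on app start/resume and apply it if found.
async function checkForOtaUpdate() {
  try {
    const update = await Updates.checkForUpdateAsync();
    if (update.isAvailable) {
      // Download the new JS bundle from the update server...
      await Updates.fetchUpdateAsync();
      // ...then restart the app's JS content to apply it.
      await Updates.reloadAsync();
    }
  } catch (error) {
    // Don't block the user if the update check fails (e.g., device is offline).
    console.warn(`OTA update check failed: ${error}`);
  }
}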

Implementation Details:

If you are new to React Native app development, this article may help: Ramp Up On React/React Native In Less Than a Month. And if you are transitioning from React to React Native, you may find React Native – A Web Developer's Perspective on Pivoting to Mobile useful.

I am using my existing React Native 0.73.7 app; however, you can start with a fresh React Native app for your own test.

Project configuration requires us to set up expo-modules. The Expo installation guide provides an installer which handles configuration. Our project needed the SDK 50 version of the installer.

  • Using npx install-expo-modules@0.8.1, I installed Expo SDK 50, in alignment with our current React Native version 0.73.7, which added the following dependencies:
"@expo/vector-icons": "^14.0.0",
"expo-asset": "~9.0.2",
"expo-file-system": "~16.0.9",
"expo-font": "~11.10.3",
"expo-keep-awake": "~12.8.2",
"expo-modules-autolinking": "1.10.3",
"expo-modules-core": "1.11.14",
"fbemitter": "^3.0.0",
"whatwg-url-without-unicode": "8.0.0-3"
  • Installed the expo-updates v0.24.14 package, which added the following dependencies:
"@expo/code-signing-certificates": "0.0.5",
"@expo/config": "~8.5.0",
"@expo/config-plugins": "~7.9.0",
"arg": "4.1.0",
"chalk": "^4.1.2",
"expo-eas-client": "~0.11.0",
"expo-manifests": "~0.13.0",
"expo-structured-headers": "~3.7.0",
"expo-updates-interface": "~0.15.1",
"fbemitter": "^3.0.0",
"resolve-from": "^5.0.0"
  • Created an Expo account at https://expo.dev/signup
  • To set up the account, executed eas configure, which generated the project ID and other account details
  • Created the following channels: staging, uat, and production
  • Added the relevant project values to app.json, added Expo.plist, and updated the same values in AndroidManifest.xml
  • Updated the scripts block of package.json to use npx expo to launch the app
  • Refactored AppDelegate.swift as part of the change
  • Removed App Center and CodePush assets and references
  • Created a custom component to display a modal prompt when a new update is found (a simplified sketch follows below)
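
As a rough sketch, the prompt can be as simple as wiring the expo-updates calls to a native alert (our production component is a custom modal; the copy and structure here are illustrative):

import { Alert } from 'react-native';
import * as Updates from 'expo-updates';

// Ask the user before applying an available update.
async function promptForUpdate() {
  const update = await Updates.checkForUpdateAsync();
  if (!update.isAvailable) {
    return;
  }
  Alert.alert('Update available', 'A new update is available.', [
    { text: 'Later', style: 'cancel' },
    {
      text: 'OK',
      onPress: async () => {
        await Updates.fetchUpdateAsync();
        await Updates.reloadAsync(); // restarts the JS content with the new bundle
      },
    },
  ]);
}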

OTA Deployment:

  • Execute the command via terminal:
EAS_CHANNEL=staging RUNTIME_VERSION="7.13" eas update --message "build:[QA] - 7.13.841 - 25.5.9.4 - OTA Test2 commit"
  • Once the package is published, I can see my update available in expo.dev as shown in the image below.
EAS update OTA deployment

EAS update screen once OTA deployment is successful.

Test:

  1. Unlike App Center, Expo provides the same package for both iOS and Android targets.
  2. The targeted version package is available on the Expo server.
  3. An app restart or resume will display the popup (custom implementation) informing the user that "A new update is available."
  4. When the user taps the "OK" button in the popup, the update is installed and the content within the app restarts.
  5. If the app successfully restarts, the update was installed successfully.

Considerations:

  • In metro.config.js, the @rnx-kit/metro-serializer had to be commented out due to a compatibility issue with the EAS Update bundle process (see the sketch below).
  • The @expo/vector-icons package causes the Android release build to crash on app startup. The package can be removed, but if package-lock.json is removed, it will be reinstalled as an Expo dependency and again cause the app to crash. The issue is described in the comments here: https://github.com/expo/expo/issues/26521. There is no solution available at the moment; the Expo vector icons files aren't being handled correctly during the build process. In our case it was caused by the react-native-elements package: when that package is removed, the font files are no longer added to app.manifest and the app builds and runs as expected.
  • Somehow the font require statements in node_modules/react-native-elements/dist/helpers/getIconType.js are picked up during the expo-updates generation of app.manifest even though the files are not used in our app. The current workaround is to go ahead and include the fonts in the package, but this is not optimal; a better solution would be to filter those fonts out of the expo-updates process.
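
For reference, here is roughly what that serializer change looks like in metro.config.js (a simplified sketch for a React Native 0.73 project; your Metro config will differ):

const { getDefaultConfig, mergeConfig } = require('@react-native/metro-config');
// Commented out due to a compatibility issue with the EAS Update bundle process:
// const { MetroSerializer } = require('@rnx-kit/metro-serializer');

module.exports = mergeConfig(getDefaultConfig(__dirname), {
  // serializer: {
  //   customSerializer: MetroSerializer(),
  // },
});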

Deployment Troubleshooting:

  • Error fetching latest Expo update: Error: “channel-name” is not allowed to be empty.

The headers “expo-runtime-version”, “expo-channel-name”, and “expo-platform” are required. They can also be set with the query parameters “runtime-version”, “channel-name”, and “platform”. Learn more: https://github.com/expo/fyi/blob/main/eas-update-missing-headers.md

The configuration values for the iOS app are maintained in Supporting/Expo.plist. The above error indicates that the EXUpdatesRequestHeaders block in the plist might be missing, as illustrated below.
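
For illustration, the block in Expo.plist looks something like this (a sketch assuming a "staging" channel; substitute your own channel name):

<key>EXUpdatesRequestHeaders</key>
<dict>
    <key>expo-channel-name</key>
    <string>staging</string>
</dict>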

OTA deployment is very useful when a large number of customers are using the app and an urgent hot fix or patch needs to be released. You can set this up for your lower environments as well as production.

In my experience, it is very reliable, and the Expo team is doing a great job maintaining it.

So take advantage of this amazing service, and happy coding!

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Ramp Up On React/React Native In Less Than a Month

I’ve had plenty of opportunities to guide developers new to the React and React Native frameworks. While everyone is different, I wanted to provide a structured guide to help bring a fresh developer into the React fold.

Prerequisites

This introduction to React is intended for a developer who has at least some experience with JavaScript, HTML, and basic coding practices.

Ideally, this person has coded at least one project using JavaScript and HTML. That experience will aid in understanding the syntax of components, but any aspiring developer can learn from this guide as well.

Tiers

There are several tiers of beginner-level programmers who would like to learn React and are looking for someone like you to help them get up to speed.

Beginner with little knowledge of JavaScript and/or HTML

For a developer like this, I would recommend building introductory JavaScript and HTML knowledge (maybe a simple programming exercise or online instruction) before introducing them to React. You can compare JavaScript to a language they are familiar with and cover core concepts. A basic online guide should be sufficient to get them up and running with HTML.

Junior/Intermediate with some knowledge of JavaScript and/or HTML

I would go over the basics of JavaScript and HTML to make sure they have enough to grasp the syntax and terminology used in React. A supplementary course or online guide might be a good refresher before introducing them to modern concepts.

Seasoned developer who hasn't used React

Even if they haven't used JavaScript or HTML much, they should be able to ramp up quickly. Reading through the React documentation should be enough to jumpstart the learning process.

Tips and Guidelines

You can begin their React and React Native journey with the following guidelines:

React Documentation

The React developer documentation is a great place to start if the developer has absolutely no experience or is just starting out. It provides meaningful context in the differences between standard JavaScript and HTML and how React handles them. It also provides a valuable reference on available features and what you can do within the framework.

Pro tip: I recommend starting them right off with functional components. They are more widely used and often have better performance, especially with hooks. I personally find them easier to work with as well.

Class component:

class MyButton extends React.Component {
    render() {
        return (
            <button>I'm a button</button>
        );
    }
}

Functional component:

const MyButton = () => {
    return (
        <button>I'm a button</button>
    )
}

The difference with such a small example isn't very obvious, but it becomes much clearer once you introduce hooks. Hooks allow you to extract functionality into a reusable container, which lets you keep logic separate or import it into other components. There are also several built-in hooks that make life easier. Hooks always start with "use" (useState, useRef, etc.), and you can also create custom hooks for your own logic.
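
As a quick illustration, here is a hypothetical custom hook (the useToggle name and API are mine, not from any library) that extracts a small piece of reusable state logic:

import { useCallback, useState } from 'react';

// Reusable on/off state with a stable toggle callback.
const useToggle = (initialValue = false) => {
    const [value, setValue] = useState(initialValue);
    const toggle = useCallback(() => setValue(v => !v), []);
    return [value, toggle];
};

// Any component can now share this logic:
const ExpandablePanel = () => {
    const [isOpen, toggleOpen] = useToggle();
    return (
        <button onClick={toggleOpen}>
            {isOpen ? 'Collapse' : 'Expand'}
        </button>
    );
};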

Concepts

Once they understand the basic concepts, it's time to focus on advanced React concepts. State management is an important part of React, covering both component and app-wide state. Learning widely used packages can come in handy; I recommend Redux Toolkit, as it's easy to learn but extremely extensible. It is great for both big and small projects and offers simple to complex state management features.

Now might be a great time to point out the key differences between React and React Native. They are very similar, with a few notable adjustments:

|                       | React               | React Native                                       |
| Layout                | Uses HTML tags      | Core components (View instead of div, for example) |
| Styling               | CSS                 | Style objects                                      |
| X/Y coordinate planes | Flex direction: row | Flex direction: column                             |
| Navigation            | URLs                | Routes (react-navigation)                          |
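
To make the first two rows concrete, here is the same trivial component written for the web and for React Native (a hypothetical Label component, purely for illustration):

// Label.jsx (React on the web): a DOM element styled via a CSS class
const Label = ({ text }) => <p className="label">{text}</p>;

// Label.native.jsx (React Native): a core component styled via a style object
import { Text, StyleSheet } from 'react-native';

const styles = StyleSheet.create({
    label: { fontSize: 14, color: 'gray' },
});

const NativeLabel = ({ text }) => <Text style={styles.label}>{text}</Text>;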

Tic-Tac-Toe

I would follow the React concepts with an example project. This lets the developer see how a project is structured and how to code within the framework. Tic-Tac-Toe is a great example project for a new React developer to try, to see if they understand the basic concepts.

Debugging

Debugging in Chrome is extremely useful for things like console logs and other output that helps when working defects. The Style Inspector is another essential tool for React that lets you see how styles are applied to different elements. For React Native, the documentation contains links to helpful tools.

Project Work

Assign the new React developer low-level bugs or feature enhancements to tackle. Closely monitoring their progress via pair programming has been extremely beneficial in my experience: it gives them the opportunity to ask real-time questions, to which the experienced developer can offer guidance. It also provides an opportunity to correct mistakes or bad practices before they become ingrained. Merge requests should be reviewed together before approval to ensure code quality.

In Closing

These tips and tools will give a new React or React Native developer the skills they need to contribute to projects. Obviously, the transition to React Native will be a lot smoother for a developer already familiar with React, but any developer who is comfortable with JavaScript/HTML should be able to pick up both quickly.

Thanks for your time and I wish you the best of luck with onboarding your new developer onto your project!

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Extending the Capabilities of Your Development Team with Visual Studio Code Extensions

Introduction

Visual Studio Code (VS Code) has become a ubiquitous tool in the software development world, prized for its speed, versatility, and extensive customization options. At its heart, VS Code is a lightweight, open-source code editor that supports a vast ecosystem of extensions. These extensions are the key to unlocking the true potential of VS Code, transforming it from a simple editor into a powerful, tailored IDE (Integrated Development Environment).

This blog post will explore the world of VS Code extensions, focusing on how they can enhance your development team’s productivity, code quality, and overall efficiency. We’ll cover everything from selecting the right extensions to managing them effectively and even creating your own custom extensions to meet specific needs.

What are Visual Studio Code Extensions?

Extensions are essentially plugins that add new features and capabilities to VS Code. They can range from simple syntax highlighting and code completion tools to more complex features like debuggers, linters, and integration with external services. The Visual Studio Code Marketplace hosts thousands of extensions, catering to virtually every programming language, framework, and development workflow imaginable.

Popular examples include Prettier for automatic code formatting, ESLint for identifying and fixing code errors, and Live Share for real-time collaborative coding.

Why Use Visual Studio Code Extensions?

The benefits of using VS Code extensions are numerous and can significantly impact your development team’s performance.

  1. Improve Code Quality: Extensions like ESLint and JSHint help enforce coding standards and identify potential errors early in the development process. This leads to more robust, maintainable, and bug-free code.
  2. Boost Productivity: Extensions like Auto Close Tag and IntelliCode automate repetitive tasks, provide intelligent code completion, and streamline your workflow. This allows developers to focus on solving complex problems rather than getting bogged down in tedious tasks.
  3. Enhance Collaboration: Extensions like Live Share enable real-time collaboration, making it easier for team members to review code, pair program, and troubleshoot issues together, regardless of their physical location.
  4. Customize Your Workflow: VS Code’s flexibility allows you to tailor your development environment to your specific needs and preferences. Extensions like Bracket Pair Colorizer and custom themes can enhance readability and create a more comfortable and efficient working environment.
  5. Stay Current: Extensions provide support for the latest technologies and frameworks, ensuring that your team can quickly adapt to new developments in the industry and leverage the best tools for the job.
  6. Save Time: By automating common tasks and providing intelligent assistance, extensions like Path Intellisense can significantly reduce the amount of time spent on mundane tasks, freeing up more time for creative problem-solving and innovation.
  7. Ensure Consistency: Extensions like EditorConfig help enforce coding standards and best practices across your team, ensuring that everyone is following the same guidelines and producing consistent, maintainable code.
  8. Enhance Debugging: Powerful debugging extensions like Debugger for Java provide advanced debugging capabilities, making it easier to identify and resolve issues quickly and efficiently.

Managing IDE Tools for Mature Software Development Teams

As software development teams grow and projects become more complex, managing IDE tools effectively becomes crucial. A well-managed IDE environment can significantly impact a team’s ability to deliver high-quality software on time and within budget.

  1. Standardization: Ensuring that all team members use the same tools and configurations reduces discrepancies, improves collaboration, and simplifies onboarding for new team members. Standardized extensions help maintain code quality and consistency, especially in larger teams where diverse setups can lead to confusion and inefficiencies.
  2. Efficiency: Streamlining the setup process for new team members allows them to get up to speed quickly. Automated setup scripts can install all necessary extensions and configurations in one go, saving time and reducing the risk of errors.
  3. Quality Control: Enforcing coding standards and best practices across the team is essential for maintaining code quality. Extensions like SonarLint can continuously analyze code quality, catching issues early and preventing bugs from making their way into production.
  4. Scalability: As your team evolves and adopts new technologies, managing IDE tools effectively facilitates the integration of new languages, frameworks, and tools. This ensures that your team can quickly adapt to new developments and leverage the best tools for the job.
  5. Security: Keeping all tools and extensions up-to-date and secure is paramount, especially for teams working on sensitive or high-stakes projects. Regularly updating extensions prevents security issues and ensures access to the latest features and security patches.

Best Practices for Managing VS Code Extensions in a Team

Effectively managing VS Code extensions within a team requires a strategic approach. Here are some best practices to consider:

  1. Establish an Approved Extension List: Create and maintain a list of extensions that are approved for use by the team. This ensures that everyone is using the same core tools and configurations, reducing inconsistencies and improving collaboration. Consider using a shared document or a dedicated tool to manage this list (see the example after this list).
  2. Automate Installation and Configuration: Use tools like Visual Studio Code Settings Sync or custom scripts to automate the installation and configuration of extensions and settings for all team members. This ensures that everyone has the same setup without manual intervention, saving time and reducing the risk of errors.
  3. Implement Regular Audits and Updates: Regularly review and update the list of approved extensions to add new tools, remove outdated ones, and ensure that all extensions are up-to-date with the latest security patches. This helps keep your team current with the latest developments and minimizes security risks.
  4. Provide Training and Documentation: Offer training and documentation on the approved extensions and best practices for using them. This helps ensure that all team members are proficient in using the tools and can leverage them effectively.
  5. Encourage Feedback and Collaboration: Encourage team members to provide feedback on the approved extensions and suggest new tools that could benefit the team. This fosters a culture of continuous improvement and ensures that the team is always using the best tools for the job.
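
One lightweight way to implement an approved extension list is VS Code's built-in workspace recommendations: commit a .vscode/extensions.json file to the repository, and VS Code will prompt team members to install the listed extensions when they open the project. A minimal example (the extension IDs shown are illustrative picks, not a mandated list):

{
    "recommendations": [
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode",
        "editorconfig.editorconfig"
    ],
    "unwantedRecommendations": [
        "hookyqr.beautify"
    ]
}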

Security Considerations for VS Code Extensions

While VS Code extensions offer numerous benefits, they can also introduce security risks if not managed properly. It’s crucial to be aware of these risks and take steps to mitigate them.

  1. Verify the Source: Only install extensions from trusted sources, such as the Visual Studio Code Marketplace. Avoid downloading extensions from unknown or unverified sources, as they may contain malware or other malicious code.
  2. Review Permissions: Carefully review the permissions requested by extensions before installing them. Be cautious of extensions that request excessive permissions or access to sensitive data, as they may be attempting to compromise your security.
  3. Keep Extensions Updated: Regularly update your extensions to ensure that you have the latest security patches and bug fixes. Outdated extensions can be vulnerable to security exploits, so it’s important to keep them up-to-date.
  4. Use Security Scanning Tools: Consider using security scanning tools to automatically identify and assess potential security vulnerabilities in your VS Code extensions. These tools can help you proactively identify and address security risks before they can be exploited.

Creating Custom Visual Studio Code Extensions

In some cases, existing extensions may not fully meet your team’s specific needs. Creating custom VS Code extensions can be a powerful way to add proprietary capabilities to your IDE and tailor it to your unique workflow. One exciting area is integrating AI Chatbots directly into VS Code for code generation, documentation, and more.

  1. Identify the Need: Start by identifying the specific functionality that your team requires. This could be anything from custom code snippets and templates to integrations with internal tools and services. For this example, we’ll create an extension that allows you to highlight code, right-click, and generate documentation using a custom prompt sent to an AI Chatbot.

  2. Learn the Basics: Familiarize yourself with the Visual Studio Code Extension API and the tools required to develop extensions. The API documentation provides comprehensive guides and examples to help you get started.

  3. Set Up Your Development Environment: Install the necessary tools, such as Node.js and Yeoman, to create and test your extensions. The Yeoman generator for Visual Studio Code extensions can help you quickly scaffold a new project.

  4. Develop Your Extension: Write the code for your extension, leveraging the Visual Studio Code Extension API to add the desired functionality. Be sure to follow best practices for coding and testing to ensure that your extension is reliable, maintainable, and secure.

  5. Test Thoroughly: Test your extension in various scenarios to ensure that it works as expected and doesn’t introduce any new issues. This includes testing with different configurations, environments, and user roles.

  6. Distribute Your Extension: Once your extension is ready, you can distribute it to your team. You can either publish it to the Visual Studio Code Marketplace or share it privately within your organization. Consider using a private extension registry to manage and distribute your custom extensions securely.

Best Practices for Extension Development

Developing robust and efficient VS Code extensions requires careful attention to best practices. Here are some key considerations:

  • Resource Management:

    • Dispose of Resources: Properly dispose of any resources your extension creates, such as disposables, subscriptions, and timers. Use the context.subscriptions.push() method to register disposables, which will be automatically disposed of when the extension is deactivated.
    • Avoid Memory Leaks: Be mindful of memory usage, especially when dealing with large files or data sets. Use techniques like streaming and pagination to process data in smaller chunks.
    • Clean Up on Deactivation: Implement the deactivate() function to clean up any resources that need to be explicitly released when the extension is deactivated.
  • Asynchronous Operations:

    • Use Async/Await: Use async/await to handle asynchronous operations in a clean and readable way. This makes your code easier to understand and maintain.
    • Handle Errors: Properly handle errors in asynchronous operations using try/catch blocks. Log errors and provide informative messages to the user.
    • Avoid Blocking the UI: Ensure that long-running operations are performed in the background to avoid blocking the VS Code UI. Use vscode.window.withProgress to provide feedback to the user during long operations (a sketch follows after these lists).
  • Security:

    • Validate User Input: Sanitize and validate any user input to prevent security vulnerabilities like code injection and cross-site scripting (XSS).
    • Secure API Keys: Store API keys and other sensitive information securely. Use VS Code’s secret storage API to encrypt and protect sensitive data.
    • Limit Permissions: Request only the necessary permissions for your extension. Avoid requesting excessive permissions that could compromise user security.
  • Performance:

    • Optimize Code: Optimize your code for performance. Use efficient algorithms and data structures to minimize execution time.
    • Lazy Load Resources: Load resources only when they are needed. This can improve the startup time of your extension.
    • Cache Data: Cache frequently accessed data to reduce the number of API calls and improve performance.
  • Code Quality:

    • Follow Coding Standards: Adhere to established coding standards and best practices. This makes your code more readable, maintainable, and less prone to errors.
    • Write Unit Tests: Write unit tests to ensure that your code is working correctly. This helps you catch bugs early and prevent regressions.
    • Use a Linter: Use a linter to automatically identify and fix code style issues. This helps you maintain a consistent code style across your project.
  • User Experience:

    • Provide Clear Feedback: Provide clear and informative feedback to the user. Use status bar messages, progress bars, and error messages to keep the user informed about what’s happening.
    • Respect User Settings: Respect user settings and preferences. Allow users to customize the behavior of your extension to suit their needs.
    • Keep it Simple: Keep your extension simple and easy to use. Avoid adding unnecessary features that could clutter the UI and confuse the user.

By following these best practices, you can develop robust, efficient, and user-friendly VS Code extensions that enhance the development experience for yourself and others.
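
As one concrete illustration of the asynchronous-operations guidance above, here is a small sketch of running a long operation without blocking the UI via vscode.window.withProgress (the doWork parameter is a stand-in for your own logic):

import * as vscode from 'vscode';

async function runLongTask(doWork: () => Promise<void>) {
    await vscode.window.withProgress(
        {
            location: vscode.ProgressLocation.Notification,
            title: 'Processing files...',
            cancellable: false,
        },
        async (progress) => {
            progress.report({ message: 'Starting' });
            try {
                // Awaiting keeps the extension host responsive while work runs.
                await doWork();
                progress.report({ message: 'Done' });
            } catch (error) {
                vscode.window.showErrorMessage(`Task failed: ${error}`);
            }
        }
    );
}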

Example: Creating an AI Chatbot Integration for Documentation Generation

Let’s walk through creating a custom VS Code extension that integrates with an AI Chatbot to generate documentation for selected code. This example assumes you have access to an AI Chatbot API (like OpenAI’s GPT models). You’ll need an API key. Remember to handle your API key securely and do not commit it to your repository.

1. Scaffold the Extension:

First, use the Yeoman generator to create a new extension project:

yo code

2. Modify the Extension Code:

Open the generated src/extension.ts file and add the following code to create a command that sends selected code to the AI Chatbot and displays the generated documentation:

import * as vscode from 'vscode';
import axios from 'axios';

export function activate(context: vscode.ExtensionContext) {
 let disposable = vscode.commands.registerCommand('extension.generateDocs', async () => {
  const editor = vscode.window.activeTextEditor;
  if (editor) {
   const selection = editor.selection;
   const selectedText = editor.document.getText(selection);

   const apiKey = 'YOUR_API_KEY'; // Replace with your actual API key
   const apiUrl = 'https://api.openai.com/v1/engines/davinci-codex/completions';

   try {
    const response = await axios.post(
     apiUrl,
     {
      prompt: `Generate documentation for the following code:\n\n${selectedText}`,
      max_tokens: 150,
      n: 1,
      stop: null,
      temperature: 0.5,
     },
     {
      headers: {
       'Content-Type': 'application/json',
       Authorization: `Bearer ${apiKey}`,
      },
     }
    );

    const generatedDocs = response.data.choices[0].text;
    vscode.window.showInformationMessage('Generated Documentation:\n' + generatedDocs);
   } catch (error) {
    vscode.window.showErrorMessage('Error generating documentation: ' + error.message);
   }
  }
 });

 context.subscriptions.push(disposable);
}

export function deactivate() {}

3. Update package.json:

Add the following command configuration to the contributes section of your package.json file:

"contributes": {
    "commands": [
        {
            "command": "extension.generateDocs",
            "title": "Generate Documentation"
        }
    ]
}

4. Run and Test the Extension:

Press F5 to open a new VS Code window with your extension loaded. Highlight some code, right-click, and select “Generate Documentation” to see the AI-generated documentation.

Packaging and Distributing Your Custom Extension

Once you’ve developed and tested your custom VS Code extension, you’ll likely want to share it with your team or the wider community. Here’s how to package and distribute your extension, including options for local and private distribution:

1. Package the Extension:

VS Code uses the vsce (Visual Studio Code Extensions) tool to package extensions. If you don’t have it installed globally, install it using npm:

npm install -g vsce

Navigate to your extension’s root directory and run the following command to package your extension:

vsce package

This will create a .vsix file, which is the packaged extension.

2. Publish to the Visual Studio Code Marketplace:

To publish your extension to the Visual Studio Code Marketplace, you’ll need to create a publisher account and obtain a Personal Access Token (PAT). Follow the instructions on the Visual Studio Code Marketplace to set up your publisher account and generate a PAT.

Once you have your PAT, run the following command to publish your extension:

vsce publish

You’ll be prompted to enter your publisher name and PAT. After successful authentication, your extension will be published to the marketplace.

3. Share Privately:

If you prefer to share your extension privately within your organization, you can distribute the .vsix file directly to your team members. They can install the extension by running the following command in VS Code:

code --install-extension your-extension.vsix

Alternatively, you can set up a private extension registry using tools like Azure DevOps Artifacts or npm Enterprise to manage and distribute your custom extensions securely.

Conclusion

Visual Studio Code extensions are a powerful tool for enhancing the capabilities of your development environment and improving your team’s productivity, code quality, and overall efficiency. By carefully selecting, managing, and securing your extensions, you can create a tailored IDE that meets your specific needs and helps your team deliver high-quality software on time and within budget. Whether you’re using existing extensions from the marketplace or creating your own custom solutions, the possibilities are endless. Embrace the power of VS Code extensions and unlock the full potential of your development team.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

How Copilot Vastly Improved My React Development

I am always looking to write better, more performant, and cleaner code. GitHub Copilot checks all the boxes and makes my life easier. I have been using it since the 2021 public beta, and the hype is real!

According to the GitHub Copilot website, it is:

“The world’s most widely adopted AI developer tool.”  

While that sounds impressive, the proof is in the features that help the average developer produce higher quality code, faster. It doesn't replace a human developer, but that is not the point. The name says it all: it's a tool designed to work alongside developers.

When we look at the stats, we see some very impressive numbers:

  • 75% of developers report more satisfaction with their jobs
  • 90% of Fortune 100 companies use Copilot
  • 55% of developers prefer Copilot
  • Developers report a 25% increase in speed

Day in the Life

I primarily use Copilot for code completion and test cases for ReactJS and JavaScript code.

When typing predictable text such as "document" in a JavaScript file, Copilot will review the current file and public repositories to provide a context-correct completion. This is helpful when I create new code or update existing code. Code suggestion via Copilot Chat enables me to ask for possible solutions to a problem: "How do I type the output of this function in TypeScript?"

Additionally, it can explain existing code, “Explain lines 29-54.” Any developer out there should be able to see the value there. An example of this power comes from one of my colleagues: 

“Copilot’s getting better all the time. When I first started using it, maybe 10% of the time I’d be unable to use its suggestions because it didn’t make sense at all. The other day I had it refactor two classes by moving the static functions and some common logic into a static third class that the other two used, and it was pretty much correct, down to style. Took me maybe thirty seconds to figure out how to tell Copilot what to do and another thirty seconds for it to do the work.” 

Generally, developers dislike writing comments. Worry not: Copilot can do that! In fact, I use it to write the first draft of every comment in my code. Copilot goes a step further and writes unit tests from the context of a file: "Write Jest tests for this file."

One of my favorite tools is /fix, which attempts to resolve any errors in the code. This is not limited to errors visible in the IDE. Occasionally after compilation, there will be one or more errors; asking Copilot to fix these is often successful, even though the error(s) may not be visible. The enterprise version will even create commented pull requests!

Although these features are amazing, there are methods to get the most out of it. You must be as specific as possible. This is most important when using code suggestions.

If I ask, "I need this code to solve the problem created by the other functions," I am not likely to get a helpful solution. However, if I ask, "Using lines 10-150, and the following functions (a, b, and c) from file two, give me a solution that will solve the problem," I am far more likely to get something useful.

It is key, whenever possible, to break up the requests into small tasks.

Copilot Wave 2 

The future of Copilot is exciting, indeed. While I have been talking about GitHub Copilot, the entire Microsoft universe is getting the "Copilot" treatment. In what Microsoft calls Copilot Wave 2, it has been added to Microsoft 365.

Wave 2 features include: 

  • Python for Excel 
  • Email prioritization in Outlook 
  • Team Copilot 
  • Better transcripts with the ability to ask Copilot a simple question as we would a co-worker, “What did I miss?”  

The most exciting new Copilot feature is Copilot Agents.  

“Agents are AI assistants designed to automate and execute business processes, working with or for humans. They range in capability from simple, prompt-and-response agents to agents that replace repetitive tasks to more advanced, fully autonomous agents.” 

With this functionality, the entire Microsoft ecosystem will benefit. Using agents, it is possible to find information quickly in SharePoint across all sites and other content areas. Agents can function autonomously and are not like chatbots: chatbots work from a script, whereas agents function with the full knowledge of an LLM. For example, a service agent could provide documentation on the fly based on an English description of a problem, or answer questions from a human with very human responses based on technical data or specifications.

There is also a new Copilot Studio, a low-code solution that gives more people the ability to create agents.

GitHub Copilot is continually updated as well. Since May, there has been a private beta for Copilot Extensions, which allow third-party vendors to utilize the natural language processing power of Copilot inside of GitHub through plugins and extensions that expand its functionality. There has also been a major enhancement moving Copilot to GPT-4o.

Conclusion

Using these features of Copilot, I save between 15-25% of my day writing code, freeing me up for other tasks. I'm excited to see how Copilot Agents will evolve into new tools that increase developer productivity.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Set Your API Performance on Fire With BlazeMeter

BlazeMeter, a continuous testing platform, is a perfect solution for your performance needs. Built on open-source tools, BlazeMeter supports web, mobile, and API implementations. You can perform large-scale load and performance testing with the ability to tweak parameters to suit your needs.

We will walk through the process of using BlazeMeter for API testing, step by step.

Register for BlazeMeter

Enter your information on the BlazeMeter site to register and get started

Configure Your First Scenario

The first time you log in, you will be taken to the default view of BlazeMeter with a default workspace and project. Let us start configuring a new scenario.

Create a New Project

  1. Select Projects -> Create new project
  2. Name the project
  3. Select Create Test
  4. Select Performance Test
  5. You are now taken to the configuration tab

Update Your Scenario

  1. The left section here contains your test specifications
  2. Tap on the Edit link and update your project name; let it be "FirstLoadTest"
  3. You can define the scenario and test data in the Scenario Definition section
  4. For this demo we will configure an API endpoint; tap on Enter URL/API calls (see picture below)
  5. In Scenario Definition enter "https://api.demoblaze.com/entries". We are load testing this endpoint with a GET call
  6. Let's name this scenario "DemoWithoutParameters"
  7. Tap on the three dots next to the scenario definition and duplicate the scenario
  8. Name this one "DemoWithParameters"

Test Specifications

Create TestData

Create New Csvfile

  1. Next to Scenario Definition we have the TestData section; tap on it
  2. You can choose from the options available; for this demo we will go with "Create New Data Entity"
  3. Let's name it "DemoTestData" and add it
  4. Tap on the + icon next to the created entity for parameterization options
  5. In this example we will select New CSV File
  6. You will be taken to a data table. Rename "variableName1" to "Parameter1" and "variableName2" to "Parameter2" (our variable names are "Parameter1" and "Parameter2")
  7. Enter the values "Value1" and "Value2" and Save
  8. Configure these parameters in the Query Parameters section (see picture below)
  9. Now we have successfully built a scenario with two endpoints; you can configure one or more endpoints in a scenario

Scenariodefinition

Configure Your First Test Run

  1. Scroll down the scenario definition window to see the Load Configuration section
  2. Enter Total Users, Duration, and Ramp up Time. For now we can just test with 2 users, a duration of 1 minute, and a ramp-up time of 0
  3. Once you update these details, observe the graphical representation of how your load test is going to run in the graph displayed in this section
  4. We can also limit Requests Per Second (RPS) by enabling the "Limit RPS" toggle and selecting the number of requests to allow per second
  5. We can also change the number of users at run time, but this is available only with the Enterprise plan
  6. Let's configure load distribution now in the "Load Distribution" section, which is right below the "Load Configuration" section
  7. Select the location from which you need the requests to trigger
  8. We can select multiple locations and distribute load across them, but again, this feature is available only with the Enterprise plan
  9. For now, let's proceed by selecting one location

Load Configuration

Failure Criteria

  1. Failure criteria are the best way to immediately understand your load test results
  2. Do you have failure criteria defined? If yes, you can configure them in this section. This is optional; you can skip it if you don't have failure criteria defined
  3. You can configure multiple failure criteria as well
  4. Enable "1-min slide window eval" to evaluate your failure criteria over a one-minute sliding window during execution
  5. Select the "Stop Test?" checkbox if you want to stop the execution in case of failure
  6. Select "Ignore failure criteria during rampup" to ignore failures during ramp-up
  7. You can add one or more failure criteria and select this option individually for each criterion
  8. Select the option "Enable 1-min slide window eval for all" at the top right of this section to enable it for all provided failure criteria

Failure Criteria

Test Your Scenario

  1. Run your scenario by clicking on "Run Test"
  2. Wait for the Launch Test window to load completely
  3. Now click on the "Launch Servers" button
  4. Click on "Abort Test" to abort your execution at any time
  5. Observe your execution go through the different stages (Pending, Booting, Downloading, and Ready)
  6. Once it reaches Ready you can see your execution progress
  7. Once the execution is done you can view the summary with a passed/failed status

Blaze Executionstatus

Analyze Your LoadTest Results

  1. The most important part of a performance test is analyzing your KPIs
  2. You can see different KPIs in the test results summary
  3. To dig deeper, navigate to the "Timeline Report" section; at the bottom left you will see the "KPI Panel". This panel contains the different KPIs, which can be analyzed as required
  4. By default it provides a generalized view; you can select a single endpoint to analyze KPIs for that particular endpoint

Blazemeter Analyze Results

Schedule Your Load Tests

  1. BlazeMeter supports continuous integration; you can schedule your executions and view the results when required
  2. Select your test from the Tests menu on top
  3. To the left of the project description window you will find the SCHEDULE section
  4. Tap on the Add button next to Schedule to see the schedule window
  5. Configure the scheduler with the required timings and save it
  6. The new scheduler will be added to your project
  7. Delete it by tapping on the Delete icon
  8. You can add multiple schedulers
  9. Toggle on/off to activate/deactivate the schedulers

Schedule Section

BlazeMeter Pros/Cons

| Pros                                               | Cons                                                                |
| Open source                                        | Requires a license for additional features and support              |
| Provides scriptless performance testing            | Test results analysis requires expertise                            |
| Integration with Selenium, JMeter, Gatling, Locust | Need to integrate with Selenium/JMeter to test functional scenarios |
| User-friendly UI                                   |                                                                     |
| Report monitoring from any geographic location     |                                                                     |
| Integrates with CI/CD pipelines                    |                                                                     |

If you are looking for a tool that serves your performance needs, BlazeMeter is a great option. You can generate scripts with its scriptless UI, simulate load, and run your tests, with servers spun up, scripts executed, and results generated within seconds.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Level Up Your Map with the ArcGIS SDK

In today's tech-driven world, the ability to visualize data spatially is vital for various industries. Enter ArcGIS, a Geographic Information System (GIS) developed by ESRI, which is here to help us solve our clients' needs. Let's chart our way into the world of ArcGIS and see how it empowers businesses to harness the full capabilities of today's mapping software.

Overview

At its core, ArcGIS is a comprehensive mapping solution that enables you to deliver a high quality experience for your users. It integrates various geographic data sets, allows users to overlay layers, analyze spatial relationships and extract meaningful insights. The user-friendly features and wide array of capabilities differentiates ArcGIS from competitors.

Standard Features

ArcGIS offers a plethora of map features designed to level up your users' experience. Basic features such as customizable basemap tiles, real-time display of the user's location, and intuitive pan and zoom functions all make map navigation a smooth and familiar experience.

However, the true power of ArcGIS lies in its ability to visualize and interact with objects on a map. Custom-styled map markers, with the same look and feel as pre-existing symbols, enable users to identify and track objects just as they're used to seeing them. And if you have many objects in close proximity to one another? Group them together with "clusters" that break apart or regroup at specific zoom levels (see the sketch below).
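
As a taste of how little code some of this takes, here is a minimal sketch using the ArcGIS Maps SDK for JavaScript (the layer URL is a placeholder, and the cluster settings are illustrative):

import Map from '@arcgis/core/Map';
import MapView from '@arcgis/core/views/MapView';
import FeatureLayer from '@arcgis/core/layers/FeatureLayer';

const assetsLayer = new FeatureLayer({
  url: 'https://example.com/arcgis/rest/services/Assets/FeatureServer/0', // placeholder
  // Group nearby features into clusters that split apart as the user zooms in.
  featureReduction: {
    type: 'cluster',
    clusterRadius: '80px',
    clusterMinSize: '24px',
  },
});

const view = new MapView({
  container: 'viewDiv', // id of the DOM element to render into
  map: new Map({ basemap: 'streets-vector', layers: [assetsLayer] }),
  center: [-97.7, 30.3], // longitude, latitude
  zoom: 10,
});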

Advanced Features

By providing methods to display object details or toggle visibility based on predefined groups, ArcGIS gives businesses the power to streamline asset management. And that just scratches the surface of the advanced features available!

With ArcGIS, you can draw on the map to indicate an area, or even let your users draw on the map themselves. You can apply a “highlight” styling on visible objects that meet a criteria. You can search for objects with a multitude of filters, such as object type, any custom attributes (defined and set by your organization’s data management team), or even search for objects within a defined geographical boundary.

The limit of its applications is your imagination.

Offline Maps

But what happens when you’re off the grid? Won’t we lose all of these convenient features? Fear not, as ArcGIS enables continued productivity even in offline environments.

By downloading map sections for offline use, users can still access critical data and functionalities without internet connectivity, a feature especially useful for your on-the-go users.

If storage space is a concern, you can decide which data points for objects are downloaded. So if your users just need to see the symbols on the map, you can omit the attribute data to cut down on payload sizes.

In conclusion, ArcGIS stands as one of the leaders in mapping technology, empowering businesses to unlock new opportunities. From basic map features to advanced asset management capabilities, ArcGIS is more than just a mapping solution—it’s a gateway to spatial intelligence. So, embrace the power of ArcGIS and chart your path to success in the digital age!

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

React Native – A Web Developer's Perspective on Pivoting to Mobile

Making the Switch

I’ve been working with React Web for the last 6 years of my dev career. I’m most familiar and comfortable in this space and enjoy working with React. However, I was presented with an opportunity to move into mobile development towards the end of 2023. Having zero professional mobile development experience on past projects, I knew I needed to ramp up fast if I was going to be able to seize this opportunity. I was excited to take on the challenge!

I have plenty to learn, but I wanted to share some of the interesting things that I have learned along the way. I also wanted to share this with a perspective since I’m already comfortable with React. Just how much is there to learn in order to be a successful contributor on a React Native project?

Existing React Knowledge I Leveraged

Components! It’s still React.

You have functional components that return stuff. These components have access to the same hooks you are familiar with (useState, useEffect, etc.), which means you have the same state management and rendering model. The "stuff" I mentioned above is JSX, a familiar syntax. You can also leverage Redux for global application state. All of the things I mentioned have very thorough and reliable documentation as well. Bringing all of this to the table when you pivot to React Native gets you over 50% of the way there.

The New Bits

There is no DOM. But that's OK, because you were already leveraging JSX instead of raw HTML anyway. The JSX you use for React Native is almost identical, except with no HTML elements.

Example code snippet (Source: https://reactnative.dev/docs/tutorial)

import React, {useState} from 'react';
import {View, Text, Button, StyleSheet} from 'react-native';

const App = () => {
  const [count, setCount] = useState(0);

  return (
    <View style={styles.container}>
      <Text>You clicked {count} times</Text>
      <Button
        onPress={() => setCount(count + 1)}
        title="Click me!"
      />
    </View>
  );
};

// React Native Styles
const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
  },
});

There are only 2 things in the example above that differ from web:

  1. Unique React Native Core Components
  2. Styles are created with a different syntax

Additionally, there is no baked-in browser support (no console or network tab), so debugging your app is a bit more complex by default. Fortunately, there are tools out there to bridge the gap. Flipper will help with seeing your console logs, similar to what Chrome would do on web. For inspecting UI elements, you can hit a hotkey from your simulator (command + control + z) and see a helpful menu with Show Element Inspector.

Additional Considerations

  • There are components referred to as Core Components. Surprisingly, there aren't a ton, and you can accomplish a lot by learning only a handful. These will be your primary components, used in place of the HTML-looking JSX from web.
  • There is no CSS. You can set up styles in a similar fashion via a styling API, which is passed into individual JSX elements via a style prop that looks similar to web. Your styles do not cascade like they would with CSS by default, but there are ways around this too (see the sketch after this list).
  • You have access to physical hardware on the phone (the camera). You can leverage location services as well as share content via the native OS share prompts.
  • The biggest shock when switching to React Native, and mobile in general: application deployment is more complicated. Instead of deploying your built code to a web server, you now must play ball with Apple and Google for them to host your app within their stores. This means instead of one deployment to a web server, you deploy twice for mobile: once to App Store Connect and again to Google Play.
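
For example, one common way around the lack of cascading is composing styles explicitly: the style prop accepts an array, and later entries override earlier ones. A small sketch (the component and style names are mine):

import React from 'react';
import { Text, StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  baseText: { fontSize: 16, color: '#333' },
  warningText: { color: '#b45309', fontWeight: 'bold' },
});

// Later styles in the array win, mimicking the "shared base + override"
// pattern you would otherwise get from CSS cascading.
const WarningLabel = ({ children }) => (
  <Text style={[styles.baseText, styles.warningText]}>{children}</Text>
);

export default WarningLabel;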

Final Thoughts

I covered the details I encountered on my journey from web to mobile. It’s important to spend time learning what the React Native API offers for you in place of the DOM elements you are already familiar with. I hope this helps anyone planning to get into mobile development.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Make Your Flutter Apps Soar with Pigeon Platform Channels

Flutter is a great framework for cross-platform development. It allows you to make pixel-perfect apps that are compiled into native code, but what happens if you need to use existing code in iOS or Android directly? For situations like these, Flutter lets you use platform channels.

Platform channels give you access to platform-specific APIs in a language that works directly with those APIs. Platform channels are available for Kotlin or Java on Android, Swift or Objective-C on iOS and macOS, C++ on Windows and C on Linux.

More information can be found here: https://docs.flutter.dev/platform-integration/platform-channels

The platform channel APIs provided by Flutter work as intended, but the whole process is a bit cumbersome to set up. Pigeon lets us use type safety and code generation to make this process a whole lot simpler.

Create a Pigeon Plugin

We will go ahead and create a simple example API.

Let's start by creating a new plugin called pigeon_example:

flutter create --org com.example --template=plugin --platforms=android,ios,linux,macos,windows -i swift pigeon_example
flutter pub add pigeon
flutter pub get

Platform Channel Types in Swift

Below is a list of supported types in Dart and their Swift equivalents. We will use some of the most common types in our example.

| Dart type                  | Swift type                              |
| null                       | nil                                     |
| bool                       | NSNumber(value: Bool)                   |
| int                        | NSNumber(value: Int32)                  |
| int, if 32 bits not enough | NSNumber(value: Int)                    |
| double                     | NSNumber(value: Double)                 |
| String                     | String                                  |
| Uint8List                  | FlutterStandardTypedData(bytes: Data)   |
| Int32List                  | FlutterStandardTypedData(int32: Data)   |
| Int64List                  | FlutterStandardTypedData(int64: Data)   |
| Float32List                | FlutterStandardTypedData(float32: Data) |
| Float64List                | FlutterStandardTypedData(float64: Data) |
| List                       | Array                                   |
| Map                        | Dictionary                              |

Define Our API

To let Pigeon know which methods we are exposing, we define our API and its methods in an abstract Dart class annotated with the @HostApi() decorator.

Let’s define our Pigeon Example API in a new file, pigeons/example_api.dart.

import 'package:pigeon/pigeon.dart';

@HostApi()
abstract class ExampleApi {
  bool getBool();
  String getString();
  void toggleValue();
}

Generate Pigeon Platform Code

Now we can let the Pigeon package do its magic and generate some code:

dart run pigeon \
--input pigeons/example_api.dart \
--dart_out lib/example_api.dart \
--experimental_swift_out ios/Classes/ExampleApi.swift \
--kotlin_out ./android/app/src/main/kotlin/com/example/ExampleApi.kt \
--java_package "io.flutter.plugins"

Be sure that the paths to all of the files are correct or the next steps won’t work. Generate the code with outputs for the platforms you need. This example is going to focus on Swift.

Add Method Implementation to the Runner

Next, we need to write the native implementation of our methods. When doing this, we need to add our files to the Runner target in Xcode to ensure that they run properly.

class ExampleApiImpl: ExampleApi {
    var value = true

    func getBool() -> Bool {
        return value
    }

    func toggleValue() {
        value = !value
    }

    func getString() -> String {
        return "THIS IS AN EXAMPLE"
    }
}

Add Pigeon Platform Channel to AppDelegate

You will also need to add this code to your AppDelegate.swift file:

@UIApplicationMain
@objc class AppDelegate: FlutterAppDelegate {
    override func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
    ) -> Bool {
        GeneratedPluginRegistrant.register(with: self)

        let exampleApi = ExampleApiImpl()
        let controller: FlutterViewController = window?.rootViewController as! FlutterViewController
        ExampleApiSetup.setUp(binaryMessenger: controller.binaryMessenger, api: exampleApi)

        return super.application(application, didFinishLaunchingWithOptions: launchOptions)
    }
}

 

Now you should be able to use your API in Dart code:

import 'package:flutter/material.dart';
import 'package:pigeon_example/example_api.dart';
import 'dart:async';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatefulWidget {
  const MyApp({super.key});

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  final exampleApi = ExampleApi();
  bool value = false;
  @override
  void initState() {
    super.initState();
    initPlatformState();
  }

  // Platform messages are asynchronous, so we initialize in an async method.
  Future<void> initPlatformState() async {
    // Platform messages may fail, so we use a try/catch PlatformException.
    // We also handle the message potentially returning null.
    // If the widget was removed from the tree while the asynchronous platform
    // message was in flight, we want to discard the reply rather than calling
    // setState to update our non-existent appearance.
    if (!mounted) return;
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
        home: Scaffold(
            appBar: AppBar(
              title: const Text('Plugin example app'),
            ),
            body:
                Column(mainAxisAlignment: MainAxisAlignment.center, children: [
              DefaultTextStyle(
                  style: Theme.of(context).textTheme.displayMedium!,
                  textAlign: TextAlign.center,
                  child: FutureBuilder<String>(
                    future: exampleApi
                        .getString(), // a previously-obtained Future<String> or null
                    builder:
                        (BuildContext context, AsyncSnapshot<String> snapshot) {
                      List<Widget> children = [];
                      if (snapshot.hasData && snapshot.data!.isNotEmpty) {
                        children = <Widget>[
                          Text(snapshot.data ?? ''),
                        ];
                      }
                      return Center(
                        child: Column(
                          mainAxisAlignment: MainAxisAlignment.center,
                          children: children,
                        ),
                      );
                    },
                  )),
              Center(
                child: ElevatedButton(
                  child: const Text('Toggle Value'),
                  onPressed: () async {
                    await exampleApi.toggleValue();
                    var val = await exampleApi.getBool();
                    setState(() {
                      value = val;
                    });
                  },
                ),
              ),
              DefaultTextStyle(
                  style: Theme.of(context).textTheme.displayMedium!,
                  textAlign: TextAlign.center,
                  child: FutureBuilder<bool>(
                    future: exampleApi
                        .getBool(), // a previously-obtained Future<bool> or null
                    builder:
                        (BuildContext context, AsyncSnapshot<bool> snapshot) {
                      List<Widget> children;
                      if (snapshot.data == true) {
                        children = <Widget>[
                          const Icon(
                            Icons.check_circle_outline,
                            color: Colors.green,
                            size: 60,
                          ),
                          Padding(
                            padding: const EdgeInsets.only(top: 16),
                            child: Text('Result: ${snapshot.data}'),
                          ),
                        ];
                      } else if (snapshot.data == false) {
                        children = <Widget>[
                          const Icon(
                            Icons.error_outline,
                            color: Colors.red,
                            size: 60,
                          ),
                          Padding(
                            padding: const EdgeInsets.only(top: 16),
                            child: Text('Result: ${snapshot.data}'),
                          ),
                        ];
                      } else {
                        children = const <Widget>[
                          SizedBox(
                            width: 60,
                            height: 60,
                            child: CircularProgressIndicator(),
                          ),
                          Padding(
                            padding: EdgeInsets.only(top: 16),
                            child: Text('Awaiting result...'),
                          ),
                        ];
                      }
                      return Center(
                        child: Column(
                          mainAxisAlignment: MainAxisAlignment.center,
                          children: children,
                        ),
                      );
                    },
                  ))
            ])));
  }
}

 

Now we can see the values from our example API in our Flutter UI. Button toggles will change our boolean value.


This same pattern can be used for any type of data supported by Pigeon.

Pigeon simplifies the process of creating platform channels. It also speeds up the process when multiple channels are needed. This becomes very valuable when you need a package that doesn’t have an implementation in Flutter. It’s a bit tricky to set up the first time, but once your scripts are written, modifying existing channels and creating new ones is a breeze.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

 

Parameterize Your Automated QA Test Scenarios With Cucumber https://blogs.perficient.com/2024/02/27/parameterize-your-automated-qa-test-scenarios-with-cucumber/ https://blogs.perficient.com/2024/02/27/parameterize-your-automated-qa-test-scenarios-with-cucumber/#comments Tue, 27 Feb 2024 16:21:29 +0000 https://blogs.perficient.com/?p=355275

Creating automated QA test scripts in Cucumber provides a low barrier to entry for your QA team. What is Cucumber? Cucumber is a Behavior-Driven Development (BDD) tool that uses Gherkin for scenario syntax. Gherkin is a simple, non-technical language designed for easy maintenance and readability. Gherkin integrates easily with open-source tools like Selenium and Appium for QA automation.

We can extend Gherkin syntax in Cucumber test scripts even further with parameters. Parameterization lets you run your scenarios with different test data, write less code, and reuse it in multiple locations. In this article, we will learn how to use parameters to create robust test scenarios.

Before digging deep into parameterization, let’s learn about a few keywords!

Useful Keywords:

Scenario: Group of steps that contains user actions and validations.

Test Step: Represents a single user action or validation defined in simple language and starts with keywords like Given, When, Then, And and But.

Step Definition: A method linked to each test step in a scenario. One step definition can also be linked to multiple test steps using parameterization techniques.

Scenario Outline: Keyword used for scenarios that contain parameters with defined values. Use Scenario Outline instead of the keyword Scenario to enable parameters. Parameters are defined as variables inside <>. The variables are defined via the Examples keyword.

Examples: Keyword to define variables for a Scenario Outline, e.g., login credentials for test accounts.

Parameterization Scenarios:

Scenario 1:

Scenario: Verify user can login to Login Page1

Given I am in LoginPage1

When I enter username and password

Then I verify that user is logged into HomePage

And I verify that “home” tab is displayed

And I verify the page title

Scenario 2:

Scenario: Verify user can login to LoginPage2

Given I am in LoginPage2

When I enter username and password

Then I verify that user is logged into HomePage

And I verify that “user” tab is displayed

And I verify the page title and save the title

 

Parameterize with Pre-defined Values

Parameters that accept only a fixed set of allowed values are considered pre-defined values.

Let’s look at the Given statements from the example above: Scenario 1, “I am in LoginPage1,” and Scenario 2, “I am in LoginPage2.” The steps are the same except for the LoginPage1 and LoginPage2 values. We’re going to create a single step definition for both steps.

 

@Given("^I am in (LoginPage1|LoginPage2)$")
public void iAmInLoginPage(String parameter1) {
    // code here
}

 

 

Note: Parametrized step definitions start with ^ and end with $.

 

Parameterize with Undefined Values

Parameters with undefined values are variables that can have different input values.

We need to test the above scenarios with different login credentials, which can change often. This can be achieved by updating the test data in the scenario, not the entire script linked to the scenario. The test step “When I enter username and password” is the perfect candidate for our use case. Let’s use the Scenario Outline keyword and pass the parametrized values as Examples. We’ll use <> to pass the username and password to our When clause.

 

Scenario Outlines:

Scenario 1:

Scenario Outline: Verify user is able to login to LoginPage1

Given I am in LoginPage1

When I enter <username> and <password>

Then I verify that user is logged into HomePage1

And I verify that “home” tab is displayed

And I verify the page title and save the title

Examples:

|username|password|

|user1|password1|

|user2|password2|

Note: The above scenario will be executed with iterations for user1 and user2.

The step definition should use regular expressions to capture the values from the scenario, as shown below.

 

@When("^I enter (.*) and (.*)$")
public void iEnterUserNameAndPassword(String username, String password) {
    // code here
}

 

Parameterize with Fixed/Default Values:

Default values are parameters defined directly in the test step of a scenario. We are not making use of parameter names; we pass values directly from the test steps instead.

Let’s investigate the test steps from Scenario 1, “And I verify that “home” tab is displayed,” and Scenario 2, “And I verify that “user” tab is displayed.” Both steps are the same except for the values home and user. Even though they aren’t parameter names, they are values used directly in our test steps. However, we can still create a single step definition and link it to both test steps.

 

@And("I verify that {string} tab is displayed")
public void iVerifyGivenTabIsDisplayed(String tabName) {
    // code here
}

 

Parameterize Null Values:

There are times when a particular parameter can or cannot have a value within our scenarios. This is where we would make use of Null Value parameters.

Let’s take the last steps, “And I verify the page title” and “And I verify the page title and save the title.” We can create a parameter that accepts “ and save the title” or null. This is achieved with the regular expression character ?; make sure the optional group has a leading space.

@Then("^I verify the page title( and save the title)?$")
public void iVerifyPageTitle(String saveTitle) {
    // code here
    if (saveTitle != null) {
        // code here
    }
}

 

Test scenario parameters enable reuse of test steps across different scenarios. Your code will be more readable and portable as a result. Parameterization reduces scripting effort by linking step definitions to multiple test steps. Adopting parameters improves code quality, which allows for fewer errors, easier updates, and better overall code.

You will love the code efficiencies that parameters provide! Why not give them a try?

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

 

 

Generative AI Revolution: A Comparative Analysis https://blogs.perficient.com/2024/01/19/generative-ai-revolution-a-comparative-analysis/ https://blogs.perficient.com/2024/01/19/generative-ai-revolution-a-comparative-analysis/#respond Fri, 19 Jan 2024 16:21:50 +0000 https://blogs.perficient.com/?p=353976

In the world of Generative Artificial Intelligence (AI), a new era of large language models has emerged with remarkable capabilities. ChatGPT, Gemini, Bard, and Copilot have made an impact on the way we interact with mobile and web technologies. We will perform a comparative analysis to highlight the capabilities of each tool.

 

Metric          ChatGPT                                        Gemini                           Bard                            Copilot
Training Data   Web                                            Web                              Web                             Web
Accuracy        85%                                            85%                              70%                             80%
Recall          85%                                            95%                              75%                             82%
Precision       89%                                            90%                              75%                             90%
F1 Score        91%                                            92%                              75%                             84%
Multilingual    Yes                                            Yes                              Yes                             Yes
Inputs          GPT-3.5: Text Only; GPT-4.0: Text and Images   Text, Images and Google Drive    Text and Images                 Text and Images
Real Time Data  GPT-3.5: No; GPT-4.0: Yes                      Yes                              Yes                             Yes
Mobile SDK      https://github.com/skydoves/chatgpt-android    API Only                         https://www.gemini.com/mobile   API Only
Cost            GPT-3.5 / GPT-4.0                              Gemini Pro / Gemini Pro Vision   Undisclosed                     Undisclosed

Calculation Metrics:

TP – True Positive

FP – False Positive

TN – True Negative

FN – False Negative

Accuracy = (TP +TN) / (TP + FP + TN + FN)

Recall = TP / (TP + FN)

Precision = TP / (TP + FP)

F1 Score = 2 * (Precision * Recall) / (Precision + Recall)

 

Our sample data set consists of 100 queries against Gemini AI. Applying the formulas above yields the following scores:

Accuracy: (85 + 0) /100 = 85%

Recall: 85/ (85 + 5) = 94.44%

Precision: 85/ (85 + 10) = 89.47%

F1-Score: 2 * (0.894 * 0.944) / (0.894 + 0.944) = 91.8%
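
To make the arithmetic concrete, here is a minimal Kotlin sketch (our own illustration, not part of any evaluation framework) that reproduces the scores above from the raw counts:

fun main() {
    // Counts implied by our 100-query Gemini sample: TP = 85, FP = 10, TN = 0, FN = 5
    val tp = 85.0
    val fp = 10.0
    val tn = 0.0
    val fn = 5.0

    val accuracy = (tp + tn) / (tp + fp + tn + fn)            // 0.85
    val recall = tp / (tp + fn)                               // ~0.9444
    val precision = tp / (tp + fp)                            // ~0.8947
    val f1 = 2 * (precision * recall) / (precision + recall)  // ~0.9189

    println("Accuracy: %.2f%%".format(accuracy * 100))
    println("Recall: %.2f%%".format(recall * 100))
    println("Precision: %.2f%%".format(precision * 100))
    println("F1 Score: %.2f%%".format(f1 * 100))
}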

 

Recommended AI Tool:

I recommend Gemini based on its accuracy and consistency. Its ease of integration with the iOS and Android platforms and its performance stand out among its competitors. We will illustrate how easy it is to integrate Gemini in 10 easy steps.

Let’s Integrate Gemini into an Android Application!

  1. Download the Android Studio preview release Canary build (Jellyfish | 2023.3.1).
  2. Create a new project: File -> New -> New Project
  3. Select Phone and Tablet
    1. Under New Project -> Generate API Starter
    2. Click Next to proceed
  4. Fill in all the necessary details
    1. Enter the Project Name: My Application (or whatever you want to name your project)
    2. Enter the Package Name: (com.companyname.myapplication)
    3. Select the location to save the project
    4. Select the Minimum SDK version: API 26 (Android 8.0 "Oreo")
    5. Select the Build Configuration Language: Kotlin DSL (build.gradle.kts)
    6. Click Finish to proceed
  5. Create a starter app using the Gemini API
  6. To generate the API key, go to Google AI Studio.
  7. Click Get API Key -> Create API Key in New Project or Create API Key in Existing Project in Google AI Studio
  8. Select the API key from the prompt and paste it into Android Studio.
  9. Click Finish to proceed.
  10. Click the Run option in Android Studio.

And you’re up and running with Generative AI in your Android app!

I typed in “Write a hello world code in java” and Gemini responded with a code snippet. You can try out various queries to personalize your newly integrated Generative AI application to your needs.
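
Under the hood, the generated starter calls Google’s Gemini client SDK. As a rough sketch, assuming the com.google.ai.client.generativeai dependency the API starter pulls in (the helper name and "gemini-pro" model are our own illustration; check the generated code for exact names), the call looks like this:

import com.google.ai.client.generativeai.GenerativeModel

// Hypothetical helper: send a prompt to Gemini and return the text response.
// generateContent is a suspend function, so call this from a coroutine.
suspend fun askGemini(prompt: String, apiKey: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-pro",
        apiKey = apiKey
    )
    val response = model.generateContent(prompt)
    return response.text
}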


Alternatively, you can download the sample app from GitHub and add the API key to local.properties to run the app.

It’s essential to recognize the remarkable capabilities of Generative AI tools on the market. Comparison of various AI metrics and architecture can give insight into performance, limitations and suitability for desired tasks. As the AI landscape continues to grow and evolve, we can anticipate even more groundbreaking innovations from AI tools. These innovations will disrupt and transform industries even further as time goes on.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

White Label Your Mobile Apps with Azure https://blogs.perficient.com/2023/12/21/white-label-your-mobile-apps-with-azure/ https://blogs.perficient.com/2023/12/21/white-label-your-mobile-apps-with-azure/#respond Thu, 21 Dec 2023 15:44:28 +0000 https://blogs.perficient.com/?p=338661

Enterprises and organizations that manage products with overlapping feature sets often confront a unique challenge. Their core dilemma involves creating multiple branded mobile applications that share a common codebase while enabling each app to provide a distinct user experience with minimal development overhead. As a leader in custom mobile solutions, Perficient excels in white labeling mobile applications using the power and flexibility of Azure DevOps.

Tackling the White Label Challenge

Consider a scenario where your application has gained popularity, and multiple clients desire a version that reflects their own brand identity. They want their logos, color schemes, and occasionally distinct features, yet they expect the underlying functionality to be consistent. How do you meet these demands without spawning a myriad of codebases that are a nightmare to maintain? This post outlines a strategy and best practices for white labeling applications with Azure DevOps to meet this challenge head-on.

Developing a Strategy for White Label Success

White labeling transcends merely changing logos and color palettes; it requires strategic planning and an architectural approach that incorporates flexibility.

1. Application Theming

White labeling starts with theming. Brands are recognizable through their colors, icons, and fonts, making these elements pivotal in your design. Begin by conducting a thorough audit of your current style elements. Organize these elements into variables and store them centrally, setting the stage for smooth thematic transitions.
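
As a simple illustration, centralizing style elements on Android might look like the hypothetical Kotlin sketch below, where every screen reads from a ThemeConfig rather than hard-coding a brand (all names and values here are made up for the example):

// Hypothetical sketch: brand style elements behind one interface so the
// rest of the app never references a concrete brand directly.
interface ThemeConfig {
    val primaryColorHex: String
    val logoAssetPath: String
    val fontFamily: String
}

object BrandATheme : ThemeConfig {
    override val primaryColorHex = "#0055A5"
    override val logoAssetPath = "img/brand_a_logo.png"
    override val fontFamily = "Roboto"
}

object BrandBTheme : ThemeConfig {
    override val primaryColorHex = "#C8102E"
    override val logoAssetPath = "img/brand_b_logo.png"
    override val fontFamily = "OpenSans"
}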

2. Establishing Your Default Configuration

Choosing a ‘default’ configuration is crucial. It sets the baseline for development and validation. The default can reflect one of your existing branded applications and acts as a unified starting point for addressing issues, whether related to implementation or theming.

3. Embracing Remote/Cloud Configurations

Tools like the Azure App Configuration SDK or Firebase Remote Configuration allow you to modify app settings without altering the code directly. Azure’s Pipeline Library also helps manage build-time settings, supporting flexible brand-specific configurations.

Using remote configurations decouples operational aspects from app logic. This approach not only supports white labeling but also streamlines the development and customization cycle.

Note: You can add your brand configuration (from step 2, “Adding Your ‘Brand’ Configuration to Your Build”) to your build artifacts and reference the correct values for your brand in your remote configurations.
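
For example, with Firebase Remote Config on Android, a brand-specific value can be fetched at runtime rather than compiled into the binary. A minimal sketch (the "welcome_message" key is a made-up example; define your own keys per brand):

import com.google.firebase.ktx.Firebase
import com.google.firebase.remoteconfig.ktx.remoteConfig

// Fetch and activate the latest remote values, then read one by key.
fun loadWelcomeMessage(onLoaded: (String) -> Unit) {
    val remoteConfig = Firebase.remoteConfig
    remoteConfig.fetchAndActivate().addOnCompleteListener { task ->
        if (task.isSuccessful) {
            onLoaded(remoteConfig.getString("welcome_message"))
        }
    }
}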

Coordinating White Labeled Mobile Apps with Azure Pipelines

With your application ready for theming and remote configuration, use Azure Pipelines to automate the build and release of your branded app artifacts. The structure of your build stages and jobs will depend on your particular needs. Here’s a pattern you can follow to organize jobs and stages for clarity and parallelization:

1. Setting Up Your Build Stage by Platforms

Organize your pipeline by platform, not brand, to reduce duplication and simplify the build process. Start with stages for iOS, Android, and other target platforms, ensuring these build successfully with your default configuration before moving to parallel build jobs.

Run unit tests side by side with this stage to catch issues sooner.

2. Adding Your “Brand” Configuration to Your Build

Keep a master list of your brands to spawn related build jobs. This could be part of a YAML template or a file in your repository. Pass the brand value to child jobs with an input variable in your YAML template to make sure the right brand configuration is used across the pipeline.

Here’s an example of fanning out Android build jobs per brand using a YAML matrix strategy:

stages:
    - stage: Build
      jobs:
          - job: BuildAndroid
            strategy:
                matrix:
                    BrandA:
                        BrandName: 'BrandA'
                    BrandB:
                        BrandName: 'BrandB'
            steps:
                - template: templates/build-android.yml
                  parameters:
                      brandName: $(BrandName)

3. Creating a YAML Job to “Re-Brand” the Default Configuration

Replace static files specific to each brand using path-based scripts. Swap out the default logo at src/img/logo.png with the brand-specific logo at src/Configurations/Foo/img/logo.png during the build process for every brand apart from the default.

An example YAML snippet for this step would be:

jobs:
    - job: RebrandAssets
      displayName: 'Rebrand Assets'
      pool:
          vmImage: 'ubuntu-latest'
      steps:
          - script: |
                cp -R src/Configurations/$(BrandName)/img/logo.png src/img/logo.png
            displayName: 'Replacing the logo with a brand-specific one'

4. Publishing Your Branded Artifacts for Distribution

Once the pipeline jobs for each brand are complete, publish the artifacts to Azure Artifacts, app stores, or other channels. Ensure this process is repeatable for any configured brand to lessen the complexity of managing multiple releases.

In Azure, decide whether to categorize your published artifacts by platform or brand based on what suits your team better. Regardless of choice, stay consistent. Here’s how you might use YAML to publish artifacts:

- stage: Publish
  jobs:
      - job: PublishArtifacts
        pool:
            vmImage: 'ubuntu-latest'
        steps:
            - task: PublishBuildArtifacts@1
              inputs:
                  PathtoPublish: '$(Build.ArtifactStagingDirectory)'
                  ArtifactName: 'drop-$(BrandName)'
                  publishLocation: 'Container'

By implementing these steps and harnessing Azure Pipelines, you can skillfully manage and disseminate white-labeled mobile applications from a single codebase, making sure each brand maintains its identity while upholding a high standard of quality and consistency.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!

Forget Okta? Well, It Might Not Forget You : Solving Remember Device Issues https://blogs.perficient.com/2023/12/11/forget-okta-well-it-might-not-forget-you-solving-remember-device-issues/ https://blogs.perficient.com/2023/12/11/forget-okta-well-it-might-not-forget-you-solving-remember-device-issues/#comments Mon, 11 Dec 2023 14:21:08 +0000 https://blogs.perficient.com/?p=335336

Why Okta?

Who can forget Okta? It is basically a household name at many enterprise companies. Even if you don’t know about Okta, you probably recognize the name. Okta is a cloud-based identity and access management platform that provides secure user authentication and authorization services. It enables organizations to manage and control access to various software applications, systems, and data resources across multiple devices and networks.

In my opinion, the best reasons to use Okta are its multi-factor authentication (MFA) and single sign-on (SSO) functionality. It adds an additional layer of security and enables users to access multiple applications and services with a single set of credentials, eliminating the need for them to remember multiple usernames and passwords.

MFA is extremely important for securing the login flow because it adds an extra layer of verification beyond just a password; passwords are stolen and leaked every day through phishing and data breaches. We all love a security sidekick, but those extra authentication steps can feel exhausting and overprotective, especially if the login is happening on the same device with the same credentials.

This is where Remember Device comes in and allows users to conveniently access their accounts without the need for frequent MFA. Okta provides this functionality and has several recommended ways to implement it.

Getting Technical

Okta’s documentation provides implementation insight for the Remember Device feature. They have their own implementation, but it may be beneficial to generate your own token for the devices your users access the application from.

To achieve this, Okta exposes the “deviceToken” property that can be included in the context object of a standard authentication request. This device token serves as a unique identifier for the user’s device. Okta makes it clear that a property override is considered a highly privileged action, requiring an administrator API token. The introduction of the context object in this manner ensures Okta remembers this device token regardless of how the user proceeds through the authentication process.
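
For illustration, a primary authentication request with a custom device token in the context object might look like the following Kotlin sketch against Okta’s Classic Authentication API (the domain, credentials, and token values are placeholders):

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Sketch: primary authentication with a custom device token in the context object.
fun authenticateWithContext(deviceToken: String): String {
    val body = """
        {
          "username": "user@example.com",
          "password": "placeholder-password",
          "context": { "deviceToken": "$deviceToken" }
        }
    """.trimIndent()

    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://your-org.okta.com/api/v1/authn"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()
}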

This is where a problem starts to show up. Even if the user doesn’t explicitly set the “rememberDevice” query parameter as true during the MFA process, the device token will live in Okta for that user for some long unknown amount of time. Okta’s recommended approach saves the device token EVERY SINGLE TIME. Even if a user selects Do Not Remember, anyone who saves or has access to that device token can use it to bypass MFA the next time that account is logged in. This can be especially tricky when multiple users may have accounts on the same device.

In the Real World:

  • 300 users log in daily for 1 week
  • 300 x 7 = 2,100 device tokens stored in Okta per week
  • Okta doesn’t provide any information on its threshold for storing these values
  • All prior tokens are still active and can bypass MFA

Luckily, as with most technical challenges, the company responsible has already discovered a way to solve the problem. Let’s take a look at Okta’s own UI widget they provide for login and MFA.

Okta Uses Cookies 

When authenticating through Okta’s widget, the same issue does not occur. If you take the device token generated on login, don’t select “remember device,” and try to force that same token on a subsequent login, Okta treats it as a new device token. What is Okta’s widget doing that its documentation is hiding from us?

Okta uses the Device Token (DT) cookie as a secure token that is stored on a user’s device after successful authentication. It serves as a form of identification for the device and allows Okta to recognize and trust that device for future login attempts.

Okta’s system stores this token during an MFA call only when the rememberDevice query parameter is true. If that parameter is not true, Okta will NOT save the token. When integrating with Okta through a custom backend service, if you really want to ensure that the device tokens you generate are not retained by Okta, you will need to set the DT cookie value instead of the context object.

Setting the DT cookie is not a privileged action, and it doesn’t persist the device token. Okta even conveniently provides its own device token using that cookie if one is not passed in through the context object or the DT cookie on the initial authentication request.
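
In practice, that means sending your generated token as the DT cookie on the authentication request rather than in the context object. A rough sketch, reusing the placeholder request shape above:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Sketch: pass the device token as the DT cookie instead of context.deviceToken,
// so Okta only persists it when MFA is verified with rememberDevice=true.
fun authenticateWithDtCookie(deviceToken: String): String {
    val body = """
        {
          "username": "user@example.com",
          "password": "placeholder-password"
        }
    """.trimIndent()

    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://your-org.okta.com/api/v1/authn"))
        .header("Content-Type", "application/json")
        .header("Cookie", "DT=$deviceToken") // DT cookie instead of the context object
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()
}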

What’s the Catch?

Unfortunately, this solution only works for Okta integration into a back-end service. Due to the updated Fetch spec, browsers block any JavaScript from accessing the “Set-Cookie” headers.

The only way to approach custom device token generation from a JavaScript front-end is to use the context object and ensure that each generated token is unique per user, per device.

Conclusion

Okta still provides one of the strongest authentication systems in the tech industry and it will likely be around for a long time. If you want to have a more custom experience and generate your own tokens, keep in mind the issues that can arise from using Okta’s recommended approach. With a backend service, we can work around these issues and ensure we create only a single device token per user upon request. If your solution does not require customization, we recommend the Okta widget that already has the proper device token management built in.

This way Okta will only remember you the way you want them to.

For more information about Perficient’s Mobile Solutions expertise, subscribe to our blog or contact our Mobile Solutions team today!
