A rabbit hole in web development

A rabbit hole

Recently, I was learning about some new Adobe software, and came across the line of code import Theme from "@swc-react/theme". This quickly dropped me into the web development education rabbit hole…

  • A quick search shows me that "@swc-react/theme" is React Wrappers for Spectrum Web Components.

  • Another search shows that Spectrum Web Components is a particular implementation of Adobe Spectrum that uses Open Web Components' project generator.

  • What is Open Web Components? Well, whatever it is, it relies on something called Lit.

  • What is Lit? It’s a JavaScript library that relies on Web Components.

  • At the end of the rabbit hole, we learn that Web Components is a collection of modern HTML and JavaScript features that allow implementation of “components”, which are modular, HTML-parameterizable pieces of a webpage that have their own associated HTML, JavaScript, and CSS. Components are typically implemented by more heavyweight frameworks such as React or Angular.

Of course, few of the clarifying details I’ve added in the above bullet points were clear to me during my initial time in the rabbit hole.

The following is an article that presents the relevant content from the rabbit hole with a more foundational, "bottom-up" approach.

Web components

As the MDN documentation puts it, "Web Components is a suite of different technologies [standard to HTML and JavaScript] allowing you to create reusable custom elements – with their functionality encapsulated away from the rest of your code – and utilize them in your web apps."

The “suite of different technologies” are the custom elements JavaScript API, the shadow DOM JavaScript API, and the <template> and <slot> HTML elements.

Custom elements (JavaScript API)

The custom elements JavaScript API allows

  • extension of built-in HTML elements, such as <p>, so that an extended HTML element can be used in HTML with code such as <p is="word-counter">. (The is attribute specifies which extension of <p> is used.) These are called customized built-in elements.

  • creation of new HTML elements that have new tag names such as <custom-element>. These are called autonomous custom elements.

A custom element is implemented as a class which extends either

  • an interface corresponding to an HTML element, in the case of extending an existing HTML element

    or

  • HTMLElement, in the case of creating a new HTML element

The class may implement several optional "lifecycle callback functions", such as connectedCallback(). The class, say Cls, is then passed to window.customElements.define("my-custom-element", Cls).
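For instance, here is a minimal sketch of an autonomous custom element (the element and class names are hypothetical):

class MyGreeting extends HTMLElement {
  // A lifecycle callback: called each time the element is added to the DOM.
  connectedCallback() {
    this.textContent = "Hello from a custom element!";
  }
}

// Register the class under a tag name; custom element names must contain a hyphen.
window.customElements.define("my-greeting", MyGreeting);

After registration, <my-greeting></my-greeting> can be used in HTML like any other element.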

Shadow DOM (JavaScript API)

The shadow DOM JavaScript API allows "hidden" DOM trees, called shadow trees, to be attached to elements in the regular DOM tree. Shadow trees are hidden in the sense that they are not selected by tools such as document.querySelectorAll(). They allow for encapsulation because code and styles inside a shadow tree do not affect the portion of the overall DOM tree outside of it.

Shadow trees are created by using

  • <template shadowrootmode="open"> </template> in HTML

    or

  • const shadow = elem.attachShadow({mode: "open"}) in JavaScript
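As a quick sketch of the JavaScript approach (the id "host" is hypothetical):

const host = document.querySelector("#host");
const shadow = host.attachShadow({ mode: "open" });
// Styles inside the shadow tree do not leak out to the rest of the page.
shadow.innerHTML = `<style>p { color: red; }</style><p>Inside the shadow tree</p>`;
// The <p> above is not returned by document.querySelectorAll("p").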

<template>

The <template> HTML element is not actually rendered by the browser. Instead, when template is the JavaScript Element representing a <template> HTML element (e.g. const template = document.querySelector("#some-template")), we are expected to manually render* template.content. This manual rendering is done by writing code such as document.body.appendChild(template.content).

But- still- what good is this? At this stage, all we know about <template> is that use of it requires manually rendering HTML. It seems useless!

*template.content is of type DocumentFragment, which is a data structure that holds the template's parsed contents (what you would get from template.innerHTML). You can read about a situation in which you would want to use DocumentFragment over innerHTML here. It's not clear to me how using DocumentFragment is vastly superior to innerHTML in this scenario, but there is probably some small performance advantage.

Slotting

<template> does become quite useful when it’s paired with the <slot> element. The <slot> element allows us to define portions of the <template> inner HTML that are variable so that we can later “plug-in” custom HTML into those portions of the <template> inner HTML.

In order to achieve this functionality of <slot>, we must use <slot> alongside custom element and shadow DOM concepts, as this is how <slot> was designed to be used.

Slotted custom elements

We now describe how <slot> is used with custom elements, the shadow DOM, and templates to implement a “slotted” custom element.

  1. Include code such as

<template id = "some-template">
    ...
    <slot name = "some-slot"> default text </slot>
    ...
</template>

in the HTML.

  2. In the class that defines a custom element, write a constructor that creates a shadow tree by including const shadowRoot = this.attachShadow({mode: "open"}) in the constructor.

  3. In the same constructor, right after the creation of the shadow tree, append a copy of template.content to the shadow tree: shadowRoot.appendChild(template.content.cloneNode(true)).

(To see an example of this, inspect this webpage with your browser’s development tools.)

We see that the three concepts of custom elements, the shadow DOM, and templates are all involved. (1) and (3) are about templates, (2) is about the shadow DOM, and (2) and (3) occur in the custom element’s constructor!
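Putting steps (1) through (3) together, a sketch of the full definition looks like this (using the names from the steps above):

<template id="some-template">
  <p>Before the slot.</p>
  <slot name="some-slot">default text</slot>
  <p>After the slot.</p>
</template>

<script>
  class SomeElement extends HTMLElement {
    constructor() {
      super();
      const template = document.querySelector("#some-template");
      // Step 2: create the shadow tree.
      const shadowRoot = this.attachShadow({ mode: "open" });
      // Step 3: copy the template's content into the shadow tree.
      shadowRoot.appendChild(template.content.cloneNode(true));
    }
  }
  window.customElements.define("some-element", SomeElement);
</script>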

But how does <slot> come into play? Well, suppose that a custom element called "some-element" is configured in the above way. Then the HTML

<some-element> </some-element>

is interpreted by the browser as the inner HTML of the template, with the template's <slot> element replaced by its own inner (default) HTML. So, the browser will render the HTML

...
default text
...

Alternatively, the HTML

<some-element>
    <div slot = "some-slot"> replacement text </div>
</some-element>

is interpreted by the browser as the inner HTML of the template, with the template's <slot> element replaced by the element carrying the matching slot attribute (the <div slot="some-slot"> above). So, the browser will render the HTML

...
replacement text
...

Modern components

The type of custom element above implements the idea of a modern component, which is

  • easily reusable

  • encapsulated (in the sense that one component's code is separate from other components and does not affect other components' state or behavior)

  • allows for parameterization of HTML with <slot>

We’ve seen that writing the above type of custom element requires a lot of boilerplate. We could eliminate the boilerplate by writing a class that implements the modern component functionality. The class’s constructor would take the HTML that is to underlie the modern component as an argument*.

* If <slot> functionality is used, then the HTML that is to underlie the modern component would contain the same kind of <slot> element that <template> did above.

Lit

Lit is a library that provides a class, LitElement, that implements this notion of modern component. As Lit's documentation says, the advantage of this approach is that, since modern components rely on standard HTML and JavaScript APIs, they are supported by almost all web browsers (all web browsers that support the required HTML and JavaScript APIs, that is), and do not require any framework such as Angular or React to run.
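To give a flavor of what this looks like, here is a minimal sketch of a Lit component (the component name and behavior are hypothetical; assumes the lit npm package is installed):

import { LitElement, html } from "lit";

class MyCounter extends LitElement {
  // Reactive properties: Lit re-renders the component when these change.
  static properties = { count: { type: Number } };

  constructor() {
    super();
    this.count = 0;
  }

  render() {
    return html`<button @click=${() => this.count++}>Clicked ${this.count} times</button>`;
  }
}
customElements.define("my-counter", MyCounter);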

Open Web Components

Open Web Components is a website that "gives a set of recommendations and defaults" on how to write modern web components. The "Getting Started" page recommends that to begin developing a web component, you should make use of their npm package by running npm init @open-wc, which generates an example Lit component.

Spectrum Web Components

Spectrum Web Components is the “frameworkless” or “as close to vanilla JS as possible” implementation of Adobe Spectrum. Spectrum Web Components are Lit components and thus extend LitElement.

React Wrappers for Spectrum Web Components

"swc-react is a collection of React wrapper components for the Spectrum Web Components (SWC) library, enabling you to use SWC in your React applications with ease. It relies on the @lit/react package to provide seamless integration between React and the SWC library."

Why not just use React Spectrum components?

swc-react and React components are two technologies that implement the idea of a component in some way. I would think that, if we're using React, it would be more natural to just use React components, and not import an extra library that makes Lit components usable in React. Well, Adobe documentation says:

We recommend using swc-react over React Spectrum in your add-ons based on React, because it currently offers a more comprehensive set of components which provide built-in benefits as detailed above in the Spectrum Web Components section, and is more actively supported.

So I suppose that answers my question 🙂


Desktop application development with Angular and Electron

Electron is a framework that allows you to use web development paradigms (i.e. HTML, CSS, JavaScript) to develop cross-platform desktop applications. Typically, desktop applications are developed in lower-level, compiled languages such as Java, C#, and C++, so it’s neat that this is possible.

It’s simple enough to create an Electron application that uses “vanilla” HTML, CSS, and JavaScript, but what if we want to use a web development framework, like Angular or React?

In this article, I’ll show how this is done. We’ll walk through setting up an Electron-backend Angular-frontend desktop application.

Node and npm

Electron runs on Node.js. (If you don't already know about Node.js, know that Node.js is basically "JavaScript that runs on a server instead of in a browser".)

Since this is the case, we’ll need to install Node Package Manager, or npm. If you don’t already have npm, an easy way to obtain it is to run one of the Node.js installer executables. Running such an executable will install both Node.js and npm.

Angular

  • Once npm is installed, we can use it to install Angular. Open a command prompt and run npm install -g @angular/cli to do so. After this command finishes executing, you can sanity-check that Angular is indeed installed by executing ng version; this should display some information on the version of Angular you now have installed.

  • Navigate to wherever you want to store the project folder for the demo application we will be creating. Then execute ng new ae_demo. Choose y when asked if you want to add Angular routing (we won’t be making use of this feature, but in general, you probably do want to use it), and choose CSS when asked what stylesheet format should be used (for simplicity).

  • This creates a project folder titled ae_demo and fills it with several files. The ae_demo folder and the files it contains together constitute an Angular project, i.e. an Angular webapp.

  • To run the Angular webapp, execute cd ae_demo and then ng serve. After the output from ng serve indicates that the webapp is live, visit the webapp yourself by navigating to http://localhost:4200/ in your web browser. You can also stop the webapp by pressing CTRL + C while the command line window has focus.

  • At this point, we could spend a lot of time learning how to write Angular code and thereby increase the sophistication of our webapp. For the purposes of this tutorial, here’s an overview of the relevant parts of writing Angular code:

    • You develop your Angular app by modifying the files in the ae_demo/src/app folder.

    • When your development is complete, you run ng build. This produces a subfolder of ae_demo named dist that contains an index.html file, a .css file, and three .js files. So, at the end of the day, your Angular app compiles to a regular webapp that uses JavaScript. These are the files that you would provide to a production webserver if you wanted to run your app on it.

Electron

  • Now that we have an idea of how to write the frontend of our application in Angular, we will learn how to use Electron for the backend. The first step is of course to install Electron with npm: cd into ae_demo and execute npm install electron --save-dev.

  • Now, within ae_demo/src, create a folder called electron. In the electron folder, create a file called main.js with the following contents:

    /* ========================= */
    /* main.js */
    /* ========================= */
    
    const {app, BrowserWindow} = require('electron');
    const url = require("url");
    const path = require("path");
    
    let mainWindow; // of type BrowserWindow
    
    /* Loads the index.html file into the window win. */
    function loadIndexHtml(win) {
      win.loadURL(
        url.format({
          pathname: path.join(__dirname, `../../dist/ae_demo/index.html`),
          protocol: "file:",
          slashes: true
        })
      );
    }
    
    /* Configures the window win so that the renderer process (i.e. the Angular scripts) is loaded only after "DOMContentLoaded" occurs. This prevents us from getting "is not a function" errors when using functions exposed from Electron in Angular scripts. */
    function configLoadRendererAfterDOMContentLoaded(win) {
    /* In the anonymous function, we use a common pattern for manually bootstrapping an Angular app. Google "manually bootstrapping Angular app" to learn more about this. */
      win.webContents.on("dom-ready", function() {
        const jsCode = `document.addEventListener('DOMContentLoaded', function() { 
          platformBrowserDynamic().bootstrapModule(AppModule).catch(err => console.error(err)); });`
        win.webContents.executeJavaScript(jsCode);
      });
    }
    
    function createWindow () {
      mainWindow = new BrowserWindow({
        width: 800,
        height: 600,
        webPreferences: {
          nodeIntegration: true //necessary?
        }
      });
    
      mainWindow.on("closed", function () { mainWindow = null });
      configLoadRendererAfterDOMContentLoaded(mainWindow);
      loadIndexHtml(mainWindow);
      mainWindow.webContents.openDevTools();
    }
    
    app.whenReady().then(function() {  
      createWindow();
    })
    
    app.on('window-all-closed', function () {
      if (process.platform !== "darwin") app.quit();
    });
    
    app.on('activate', function () {
      if (mainWindow === null) createWindow();
    });
  • Once this is done, you should edit the project's npm configuration file, package.json, so that the JSON within is of the following form:

    {
      ...,
      "main": "./src/electron/main.js",
      "scripts": {
        ...,
        "devStart": "ng build --base-href . && electron .",
        ...
       },
       ...
    }  
  • You can now build and run your Angular-frontend Electron-backend app by executing the following sequence of commands:

    • cd ae_demo

    • ng build --base-href . (as described before, this produces a folder named dist containing the .html, .css, and .js files that constitute the frontend)

    • ./node_modules/.bin/electron . (this causes Electron to search for the dist folder in the current folder; once it finds this folder, it will run the app constituted by these files)

  • Alternatively, you can build and run the app by executing npm run devStart.

    • One oddity is that we will get an error if we define devStart to be "ng build --base-href . && electron node_modules/.bin/electron ."; specifying the explicit path to the Electron executable does not work! Only the implicit path causes no error, for some reason.

Invoking the backend in frontend code

Sometimes we need to be able to perform tasks in the Angular frontend (e.g. make an HTTP request) that typically only the Electron backend can do.

To achieve this, we will define functions that perform the desired tasks in the Electron backend and specify, in a new file ae_demo/src/electron/preload.js, that these functions are to be made available to the frontend.

For example, suppose we want to use the axios library in Node.js to perform an HTTP request.

To do this, we should add the following to main.js:

const { app, BrowserWindow, ipcMain } = require('electron'); // add ipcMain to the existing require in main.js
const axios = require('axios'); // assumes axios has been installed with npm install axios

app.whenReady().then(function() {
  ipcMain.handle("httpRequest1", httpRequest);
  createWindow();
});

async function httpRequest(event, requestConfig) {
  return (await axios(requestConfig)).data;
}

Then create a file called preload.js in the electron folder that has the following contents:

/* ========================= */
/* preload.js                */
/* ========================= */

const { contextBridge, ipcRenderer } = require("electron")

contextBridge.exposeInMainWorld("api", { 
  httpRequest2: (requestConfig) => ipcRenderer.invoke("httpRequest1", requestConfig)
});

Add preload: path.join(__dirname, "preload.js") to webPreferences in the createWindow() function of main.js:

function createWindow () {
  mainWindow = new BrowserWindow({
    width: 800,
    height: 600,
    webPreferences: {
        nodeIntegration: true,
        preload: path.join(__dirname, "preload.js")
    }
  });
  // ... the rest of createWindow() is unchanged ...
}

Now, to make a HTTP request in our Angular code, we just do the following:

(<any> window).api.httpRequest2(requestConfig)

This works because:

  • The preload script exposes an api object that has a function called httpRequest2 to the Angular frontend.

  • In the preload script, it is specified that httpRequest2(requestConfig) is equal to the result of sending the requestConfig object as a message to the channel named httpRequest1.

  • Calling ipcMain.handle("httpRequest1", httpRequest) in main.js specifies that whenever the httpRequest1 channel receives an object obj as a message, it should call httpRequest(obj).

Thus, we can call (<any> window).api.httpRequest2(requestConfig) in Angular code as a means to effectively call the backend's httpRequest function.
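For instance, here is a sketch of what the call might look like inside an Angular component (the component, selector, and URL are hypothetical):

import { Component } from '@angular/core';

@Component({
  selector: 'app-demo',
  template: `<button (click)="fetchData()">Fetch</button>`
})
export class DemoComponent {
  async fetchData() {
    const requestConfig = { method: 'get', url: 'https://example.com/api' };
    // "api" was exposed on window by preload.js via contextBridge.
    const data = await (<any> window).api.httpRequest2(requestConfig);
    console.log(data);
  }
}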

Matrices in machine learning

This article gives a sense of how matrix operations such as matrix multiplication arise in machine learning.

Dense layers

Before we get started, there is a small bit of prerequisite knowledge about “dense layers” of neural networks.

  1. What is a “dense layer”? Don’t worry about how “dense layers” fit into the big picture of a neural network. All we need to know is that a dense layer can be thought of as a collection of “nodes”.

  2. Well now we’ve begged the question: what is a “node”? A node can be thought of as a machine, associated with a set of numbers called weights, that is responsible for producing a numeric output in response to receiving some number of binary (i.e. 0 or 1) inputs. Specifically, if a node N is associated with n weights w1, …, wn, then, when n binary numbers x1, …, xn are given to N as input, N will compute the following “weighted sum” as output: w1x1 + w2x2 + … + wnxn.

For example, if a node has weights 3, 4, 5 and is passed the binary inputs 1, 0, 1, then that node will return the weighted sum 3 * 1 + 4 * 0 + 5 * 1 = 8.

That’s it! We are done with covering prerequisites. We now know what a dense layer of a neural network is responsible for accomplishing: a dense layer contains nodes, which are associated with weights; the nodes use the weights to compute weighted sums when they receive binary inputs.

Matrices

Now, we see what happens when we consider all of the nodes in a dense layer producing their outputs at once.

Suppose L is a dense layer. Then L consists of m nodes, N1, …, Nm for some whole number m, where the ith node Ni has n weights* associated with it, wi1, …, win. When Ni receives n binary (0 or 1) inputs x1, …, xn, it computes the weighted sum wi1x1 + wi2x2 + … + winxn.

Since the above weighted sum is computed by the ith node, let’s refer to it as f(Ni), so that f(Ni) = wi1x1 + wi2x2 + … + winxn.

The nodes N1, …, Nm will need to compute the weighted sums f(N1), …, f(Nm):

f(N1) = w11x1 + w12x2 + … + w1nxn
  ⋮
f(Nm) = wm1x1 + wm2x2 + … + wmnxn

The above can be rewritten with use of so-called column vectors:

[ f(N1) ]        [ w11 ]        [ w12 ]              [ w1n ]
[   ⋮   ]  =  x1 [  ⋮  ]  +  x2 [  ⋮  ]  +  …  +  xn [  ⋮  ]
[ f(Nm) ]        [ wm1 ]        [ wm2 ]              [ wmn ]

A column vector is simply a list of numbers written out in a column. Note that in the above we have made use of the notions of “column vector addition” and “scaling of a column vector by a number”.

The above expression can be expressed as a matrix-vector product:

[ f(N1) ]     [ w11 w12 … w1n ] [ x1 ]
[   ⋮   ]  =  [  ⋮   ⋮  ⋱  ⋮  ] [  ⋮ ],   i.e.   f = Wx
[ f(Nm) ]     [ wm1 wm2 … wmn ] [ xn ]

(The matrix-vector product Wx of the above is literally defined to be the expression on the right side of the previous equation).

This matrix-vector product notation gives us a succinct way to determine what each node in a layer does to inputs x1, …, xn.
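As a concrete sketch (with made-up numbers), here is this computation in Python with NumPy:

import numpy as np

W = np.array([[3, 4, 5],
              [1, 0, 2]])   # weight matrix: 2 nodes, each with 3 weights
x = np.array([1, 0, 1])     # binary inputs

print(W @ x)  # [8 3]: each entry is one node's weighted sum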

Generalization

The above approach generalizes in the following way. Assume that instead of n inputs x1, …, xn, we have p groups of inputs, where the ith group has inputs x1i, …, xni. Additionally, define xi := (x1i, …, xni) and let f(Nj, xi) denote the result of node Nj acting on x1i, …, xni. Then we have

f(Nj, xi) = wj1x1i + wj2x2i + … + wjnxni    (for each j = 1, …, m and i = 1, …, p)

This is equivalent to

[ f(N1, x1) … f(N1, xp) ]       [ x11 … x1p ]
[     ⋮     ⋱     ⋮     ]  =  W [  ⋮  ⋱  ⋮  ]  =  WX,
[ f(Nm, x1) … f(Nm, xp) ]       [ xn1 … xnp ]

where X is the n × p matrix whose ith column is xi.

Further generalization

Now, assume that in addition to having multiple groups of inputs x1, …, xp, we want to pass these groups of inputs through multiple layers L1, …, Ll. If layer Li has weight matrix Wi, then the result of passing x1, …, xp through L1, …, Ll is the following product of matrices:

Wl Wl−1 … W1 X

* You may object and remark that the number of weights associated with a given node depends on the node; in other words, that each node Ni has ni weights associated with it, not n weights. We are justified in assuming that ni = n for all i because we can always associate extra weights of 0 to nodes that don’t already have the requisite n weights associated with them.

A tour of PowerQuery’s M language

In a previous post, I introduced PowerQuery and demonstrated how to perform various SQL-like operations. This article gives a tour of PowerQuery’s M language that underlies each query.

let and in

If you select a query and click on the “Advanced Editor” button in the Home tab, you’ll see something like this:

[Screenshot: the Advanced Editor showing the query’s M code]

This is the M language code that constitutes our query. We’ll soon come back to the above code, but for now, let’s gain a basic understanding of how M works.

The first thing to know about M is that most M scripts are of the form let ... in .... In such a script, intermediate computations happen inside the let statement, and the content after in is the script’s return value.

For example, when the M code

let
     x = 3,
     y = x + 5
in
     y

is the script underlying a query, then that query appears as follows in the GUI:

[Screenshot: the resulting query in the PowerQuery GUI]

Interestingly enough, it is not actually necessary for a script to contain the keywords let and in, so long as the content of the script evaluates to a value. For instance,

5

is a perfectly valid M script!

So, it is more accurate to say that

  • The contents of every M script must evaluate to a value.

  • let ... in ... evaluates the content after in. Therefore, since let ... in ... evaluates to a value, any script may be of the form let ... in ... .

We should also note that one can place the code of the form x = let ... in ... within any existing let block, and then make use of x!

let ... in ... Vs. select ... from ...

In my opinion, the let ... in ... syntax doesn’t really make much sense. I think the M language would make much more sense if there were no let nor in, and every script simply returned the value of its last line.

It seems to me that let ... in ... is supposed to evoke connotations with SQL’s select ... from .... Comparisons between let ... in ... and select ... from ... quickly break down, though:

  • The data source in a SQL query is specified in the from clause, while the data source of a let ... in ... statement typically appears in the let clause.

  • The result set of a SQL query is determined primarily from the select clause, while the result of a let ... in ... statement is whatever comes after in.

Autogenerated M code

Now that we have some knowledge about let ... in ..., we can look at some sample M code that is autogenerated after using the GUI to create a query:

let
     Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
     #"Changed Type" = Table.TransformColumnTypes(Source,{{"col1", Int64.Type}, {"col2", type text}, {"col3", type text}}),
     #"Filtered Rows" = Table.SelectRows(#"Changed Type", each [col1] = 1 or [col2] = "b")
in
     #"Filtered Rows"

Looking closely at the above code teaches us two important facts about the M language:

  1. Variable identifiers can be of the form #"{string}", where {string} is any string of characters.

  2. The autogenerated M code corresponding to each “step” in a PowerQuery query references the previous step. (E.g., when computing #"Changed Type", we pass Source to Table.TransformColumnTypes()).

If we consult the M documentation for any of the functions (Excel.CurrentWorkbook(), Table.TransformColumnTypes(), Table.SelectRows()) in the above, we also see that

  3. The objects that represent each “step” in a query are of type table.

M data types

  • The Microsoft documentation describes M as having the following primitive types: binary, date, datetime, datetimezone, duration, list, logical, null, number, record, text, time, type.

  • There are also “abstract types”: function, table, any, and none.

  • Types in M can be declared as nullable.

  • Some types represent types (type number and type text are such types).

Lists and records

In M, the basic collection types are lists and records. Lists are 0-indexed.

Lists are essentially “arrays”, and records map string-valued “keys” to “values.” (So records are essentially “dictionaries”/”hashmaps”).

To initialize a list, use code such as lst = {1, "a", 2, false}. To initialize a record, use code such as rec = [key1 = 1, key2 = "blah"]. To access the ith element of a list, use lst{i}. To get the value associated with a field named key1 in a record rec, use rec[key1].

M uses functional programming

In M, we use functional programming constructs in the place of looping constructs. The go-to functional programming construct is the function List.Transform(). Given a list lst and a function fn, List.Transform(lst, fn) returns the list that is the result of applying fn to each element of lst.
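For instance, here is a quick sketch (with made-up values) that squares each element of a list:

let
    lst = {1, 2, 3},
    squared = List.Transform(lst, each _ * _)
in
    squared // returns {1, 4, 9}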

The function List.Generate() can also be handy. Whenever you can’t think of a good way to solve your problem by using List.Transform(), and it really is best to essentially implement a for loop, use code of this form to do so:

List.Generate(() => 0, each _ < n, each _ + 1, each <expression>)

It will evaluate <expression> once for each of the n loop iterations and return the results as a list.

User-defined functions

Writing user-defined functions in M can prove very useful. In my work, I found that I needed to repeat a certain sequence of steps many times. If I were to manually rewrite these steps with the PowerQuery GUI repeatedly, I would drive myself insane and have way too many PowerQuery steps. But, since I created a user-defined function to perform the repeated task, I was able to collapse those steps into a single step!

The syntax for defining a custom function uses anonymous function syntax.

fn = (x) => x * x

(If you were to evaluate fn(x) elsewhere in the script, that invocation fn(x) would return x * x).

The query whose M script is the above looks like this in the GUI:

[Screenshot: the function query in the PowerQuery GUI]

Global variables and global functions

When a variable or function is used multiple times in multiple scripts, it is best practice to separate the definition of the variable or function from all of the scripts that use the variable or function. To define a global variable with a value of, say, 5, use the Advanced Editor* to make a query’s M code

5

Then, change the name of the query to be the desired identifier for the variable.

Since functions are variables of type function, the process for defining a global function is the same. For example, to declare a global function named fn that sends x to x * x, create a query whose name is fn, and edit the query’s M code with the Advanced Editor* so that it is

(x) => x * x

* If you use the smaller editor instead of the Advanced Editor, you will have to prepend an equals = to the beginning of your code to avoid errors.

Accessing the “current” table row

Recall that the function call that implements the equivalent of a general where clause looks something like

Table.SelectRows(#"Changed Type", each [col1] = 1)

There are a several concepts at play here we glossed over before that deserve explanation.

  • Rows of tables are represented as records. If row is a record that represents some row of a table, the value in column col of that row is row[col].

  • The second argument of Table.SelectRows() is a function whose input is a record that represents the “current row” of the table and whose output is a logical (i.e. a boolean) that indicates whether or not to include the current row in the result set.

  • _ is a valid variable name in M, and so the function (_) => fn(_) is the same as the function (x) => fn(x) . For example, the function (_) => _ * _ is the same as the function (x) => x * x.

  • The each keyword is shorthand for the syntax (_) =>.

  • Whenever a variable var appears in square brackets to the right of an each, M interprets [var] as meaning _[var]. Therefore, an expression such as each [var] is the same as (_) => _[var].

Knowing all of these things, we see that the above code translates to

Table.SelectRows(#"Changed Type", (_) => _[col1] = 1)

Since you might be uncomfortable with using _ as a variable, let’s consider another equivalent function call:

Table.SelectRows(#"Changed Type", (row) => row[col1] = 1)

Here, we understand (row) => row[col1] = 1 to be the function that takes in a record representing the current row, looks at the value in this record associated with the key col1, and returns true whenever that value is equal to 1. Thus, the above code selects the rows from the table that have a value in column col1 of 1.

Data exploration with PowerQuery

Microsoft’s PowerQuery is a neat tool that allows one to perform SQL-like operations on Excel tables.

When investigating a database, I actually prefer using PowerQuery over raw SQL for a couple reasons:

  • PowerQuery displays result sets that are much easier to look at than a typical SQL plaintext result set.

  • It’s easy to immediately interact with PowerQuery result sets by using the graphical user interface.

  • Most importantly, you write PowerQuery queries one step at a time and can therefore easily sanity-check a query as you write it. (It’s tedious to do so in raw SQL.)

If you frequently use SQL to investigate databases, I highly recommend that you try out PowerQuery.

To try PowerQuery out on some test data, just create an Excel Table*, then select any cell within that Table, go to the Data tab at the top of the screen, and click “From Table/Range”. (* To create an Excel Table: enter some random data into a rectangular range of cells, then select any cell within that range, go to the Insert tab at the top of the screen, and click “Table”).

Here’s what happens if I have the following Excel Table:

[Screenshot: the example Excel Table]

After I select a cell from the above table and click “From Table/Range”, the PowerQuery editor pops up.

We can see that PowerQuery has represented my Excel Table as a query. We can also see the graphical user interface that allows us to interactively add steps to said query.

PowerQuery equivalents to SQL constructs

It’s instructive to think about how we can accomplish various SQL constructs within PowerQuery.

  • To do the equivalent of a select statement, and select a subset of columns from the result set, we would click on the “Choose Columns” button (visible above).

  • To do a select distinct, we use “Choose Columns” to execute the desired select, and then, in the following result set, select all columns, right click, and select “Remove Duplicates”.

  • Accomplishing the equivalent of a where clause- selecting the subset of rows from the result set for which a certain condition is true- is a bit hacky in general. (We describe how to do this later.) In the case when the condition only involves one column, though, we can do everything in a non-hacky way. If we want to filter the above result set for rows with col1 = 1, we would click the downwards arrow inside the col1 header, and use either the “Number Filters” option or the checkbox next to “1” in the following menu:

    [Screenshot: the filter menu for col1]

  • To do a group by, we go to the Transform tab at the top of the screen, and click “Group By”.

  • To do a join (whether inner, left, right, full outer, etc.), we click “Merge Queries” from within the Home tab. To do a union, we click “Append Queries” from within the Home Tab.

    • To increase encapsulation, one can use the “Merge Queries as New” or “Append Queries as New” options to produce a table that is the result of joining or unioning two existing tables.

      [Screenshot: the “Merge Queries as New” and “Append Queries as New” options in the Home tab]

General where clauses

Above, we noted that accomplishing a where clause that involves more than one column is a bit hacky. We describe how to write such a where clause here. It’s really not that bad: first, just click the downwards arrow inside any column’s header, and filter for anything you like. I’ve done so, and filtered the above data for rows with col1 = 1:

[Screenshot: the query after filtering for rows with col1 = 1]

Notice the code that appears in the bar that runs horizontally over the top of the table:

= Table.SelectRows(#"Changed Type", each [col1] = 1)

This code provides a more low-level description of what the “Filtered Rows” step of the query is doing. You can probably guess how we accomplish a general filter (one that involves columns other than col1). If we wanted to change the filtering condition to, say, col1 = 1 or col2 = "b", then what we do is edit said code to be

= Table.SelectRows(#"Changed Type", each [col1] = 1 or [col2] = "b")

It works! We get

[Screenshot: the result set after applying the filter col1 = 1 or col2 = "b"]

In general, any column of the table can be referenced in an “each statement” such as the above by enclosing the column name in square brackets. Soon, we’ll learn more about what this square bracket notation actually means, and why it must come after the keyword each.

Dependency injection in C# .NET

I’ve decided to write a tutorial on how to accomplish dependency injection in C# .NET, because the Microsoft documentation of dependency injection libraries is unfortunately way too sparse, and the Microsoft dependency injection tutorial is a little convoluted and complicated.

Fortunately, C# .NET’s implementation of dependency injection is pretty straightforward. In my opinion, it’s way more straightforward than the implementation provided by Java’s Spring Framework. If you understand the basics of the dependency injection concept but haven’t yet tried it out in practice, C# .NET could be your best bet.

Dependency injection recap

Here’s a quick recap on what dependency injection entails. If you want more detail, this article I wrote may be helpful.

In general, whether dependency injection is in play or not, classes may specify types, called dependencies, that they have has-a relationships with.

In dependency injection, instances of classes are not responsible for creating instances of their dependencies. Instead, a managing container maintains has-a relationships with instances of the classes, and the user specifies to the container which implementations of the dependencies they want to use by calling one of the container’s methods, or by writing so-called “configuration code” that is interpreted by the container. At runtime, the container “injects” these implementations into the class instances.

Why use dependency injection? The main point is to separate interface from implementation. Why is this important? I suggest you read the linked article above for more details.

.NET dependency injection terminology

The first thing to be aware of when learning dependency injection in C# .NET is that Microsoft uses some alternative terminology when discussing dependency injection concepts. If you want to be able to understand the Microsoft documentation, you need to be aware of this terminology. So, here’s some vocabulary:

Microsoft phrase       | Meaning
service                | dependency
service registration   | the storing of dependencies in the managing container
service resolving      | the injection of a dependency at runtime

ServiceDescriptor

The ServiceDescriptor class is what represents a service (recall that “service” means “dependency”). The most down-to-earth constructor of ServiceDescriptor is as follows:

public ServiceDescriptor (Type serviceType, Type implementationType, ServiceLifetime lifetime)

So, we see that in C# .NET, a service essentially wraps the type of the dependency, the type of the preferred implementation for said dependency, and the “lifetime” of the dependency.

ServiceLifetime

In my opinion, “lifetime” should really be called “instantiation multiplicity”, since the value of lifetime in the above constructor determines whether or not the management container is to create multiple instances of the dependency, and, if so, how to do so.

Specifically, ServiceLifetime is an enum that can take on the value Singleton, Transient, or Scoped.

  • Singleton indicates that the management container (which we have not seen yet) will ensure that only one instance of the service will be created throughout the program lifetime. All class instances which depend on the service will share that same instance.

  • Transient indicates that the management container will ensure that a new instance of the dependency will be created whenever a different class instance needs it.

  • The meaning of Scoped is a little complicated for a first pass at dependency injection in C# .NET. If you want to learn about it, read this.

ServiceDescriptor properties

You already saw the ServiceDescriptor constructor, which is what’s most important in regards to understanding ServiceDescriptor. For a bit more detail, here are the public properties that are wrapped by the ServiceDescriptor:

public Func<IServiceProvider,object>? ImplementationFactory { get; } // a factory method that stores instructions on how to build an instance of the implementation type
public object? ImplementationInstance { get; }
public Type? ImplementationType { get; } // type of the wrapped instance, ImplementationInstance
public ServiceLifetime Lifetime { get; }
public Type ServiceType { get; } // type of the wrapped interface

Some of the above may be confusing, so here are some clarifying notes:

  • Func<T1, T2> represents a function that takes an argument of type T1 as input and returns a type T2 instance as output. Thus, the ImplementationFactory property is a function that takes an IServiceProvider as input and returns an instance of the implementation as output. ImplementationFactory can be thought of as wrapping instructions for how to create an instance of the implementation instance.

  • For a value type T, the expression T? is shorthand for Nullable<T>, which represents a nullable version of the type T. (For reference types, the ? is instead a compiler annotation marking the reference as nullable.) A type is called nullable if compiler errors are not thrown when a null value of said type is attempted to be used. For more context, see the below appendix.

Registering services (ServiceDescriptors) with IServiceCollection

So far, we know how to represent services (dependencies) as ServiceDescriptors. We’ll now learn how to create a managing container and how to register our services with said container.

An instance of type IServiceCollection is what will represent our managing container. From its interface definition, we can see that IServiceCollection is a collection of ServiceDescriptors. (So, IServiceCollection is interpreted as “I{ServiceCollection}“, which means “interface to a collection of services”, not “{IService}Collection“, which would mean “collection of interfaces to services”!).

using Microsoft.Extensions.DependencyInjection;
using System.Collections.Generic;

public interface IServiceCollection : ICollection<ServiceDescriptor>, IEnumerable<ServiceDescriptor>, IList<ServiceDescriptor> { }

Microsoft provides an implementation of IServiceCollection for us- the ServiceCollection class from the Microsoft.Extensions.DependencyInjection namespace- so we don’t have to take care of the implementation ourselves.

Service registration via extension methods to IServiceCollection

In order to store services in an IServiceCollection, we need to enable access to some extension methods to IServiceCollection.

(An extension method is a static method that is called as if it were an instance method of a class, and that is added to the class after the class is defined. Confusingly, you can add extension methods, which are non-abstract methods, to an interface. To learn more about extension methods, see the below appendix.)

To obtain access to the extension methods we need, just include a using Microsoft.Extensions.DependencyInjection.ServiceCollectionServiceExtensions statement at the top of the file.

Some important extension methods (with the parameter for the extended class, IServiceCollection, omitted) added by ServiceCollectionServiceExtensions are:

AddSingleton(Type serviceType, Type implementationType);
AddSingleton(Type serviceType); // is the above with implementationType = serviceType

Being extension methods to IServiceCollection, these methods are invoked on an instance of type IServiceCollection in the same way that usual instance methods are. For example, if services has type IServiceCollection, then we would call the above methods by writing

services.AddSingleton(serviceType, implementationType);
services.AddSingleton(serviceType);

You can probably surmise that these two methods add a ServiceDescriptor of lifetime Singleton that has the specified serviceType and implementationType to the IServiceCollection.

There are also versions of the above methods for which certain combinations of the parameters are held fixed:

AddSingleton<TService,TImplementation>()
AddSingleton<TService>(); // is the above with TImplementation = TService
AddSingleton<TService,TImplementation>(Func<IServiceProvider,TImplementation> implementationFactory)

And, of course, for every method whose name is AddSingleton, there will be corresponding methods with names of AddTransient and AddScoped that perform the same task for services of Transient and Scoped lifetimes, respectively.

Though, these two versions of AddSingleton, which specify the instance that is to be wrapped by the singleton service, don’t have AddTransient or AddScoped counterparts, because it wouldn’t make sense to specify only a single instance to AddTransient or AddScoped:

AddSingleton(Type serviceType, object implementationInstance);
AddSingleton<TService>(TService instance);

Resolving services at runtime with IServiceProvider

At this point, we know how to represent services (dependencies) as ServiceDescriptors, how to create a managing container, and how to register our services with the container. The last item we need to address is that of configuring the resolving of services at runtime (i.e. configuring the injection of dependencies at runtime).

Let’s suppose that services is an IServiceCollection (i.e. a managing container) that contains some ServiceDescriptors (i.e. “services”, or dependencies).

To grab services from a managing container named services at runtime, we will first obtain an instance of type IServiceProvider from the managing container* by storing the return value of services.BuildServiceProvider(). Then, we grab a particular service by using the single abstract method specified in the IServiceProvider interface:

public object? GetService(Type serviceType)

* services.BuildServiceProvider returns an instance of Microsoft.Extensions.DependencyInjection.ServiceProvider, which implements IServiceProvider.
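To tie registration and resolution together, here is a minimal end-to-end sketch (the IGreeter and ConsoleGreeter types are hypothetical):

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IGreeter { void Greet(string name); }

public class ConsoleGreeter : IGreeter
{
    public void Greet(string name) => Console.WriteLine($"Hello, {name}!");
}

public static class Program
{
    public static void Main()
    {
        // Register the service (dependency) with the managing container.
        IServiceCollection services = new ServiceCollection();
        services.AddSingleton<IGreeter, ConsoleGreeter>();

        // Build the provider and resolve the service at runtime.
        IServiceProvider provider = services.BuildServiceProvider();
        IGreeter greeter = (IGreeter) provider.GetService(typeof(IGreeter));
        greeter.Greet("world"); // prints "Hello, world!"
    }
}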

Somewhat random: creating IServiceCollections by using IServiceProviders

This section is pretty optional.

If you already have one IServiceCollection instance and corresponding IServiceProvider, and you want to create another IServiceCollection instance by making use of dependencies stored in the first IServiceCollection instance, you can use these extension methods to IServiceCollection:

AddSingleton(Type serviceType, Func<IServiceProvider, object> factory);
AddSingleton<TService,TImplementation>(Func<IServiceProvider,TImplementation> implementationFactory);
AddSingleton<TService>(Func<IServiceProvider, TService> implementationFactory); // is the above with TImplementation = TService

Appendix

This appendix documents some less commonly known language features of the C# language.

? and nullable reference types

A non-nullable type is a type for which compiler errors are thrown when a variable of that type with a null value is attempted to be used. Contrastingly, a nullable type is a type for which compiler errors are not thrown in said situation. You can still get runtime errors with nullable types, of course! The whole point of non-nullable types is to avoid runtime errors by catching them at compilation.

According to the Microsoft documentation, all reference types were nullable prior to C# 8.0. Nowadays (i.e. after C# 8.0), all reference types are non-nullable by default.

You can still use nullable types if you really want, though. For any value type T, the type Nullable<T> is nullable, and T? is shorthand for Nullable<T>. (For reference types, T? instead marks the reference type as nullable for the compiler.)

Extension methods

In C#, it is possible to define methods that are callable like instance methods outside of the corresponding class definition. Methods defined in this way are called extension methods.

Extension methods must be defined in a static class, and must use the this keyword in the following way:

public class Cls { ... }

​public static class Extension
{
     public static int extensionMethod1(this Cls cls)
     { int someValue = 0; return someValue; }       
     
     public static int extensionMethod2(this Cls cls, int arg)
     { int someValue = 0; return someValue; } 
}

Extension methods are called in the same way as regular instance methods: to call the above defined extension methods on an instance cls of Cls, you would write cls.extensionMethod1() or cls.extensionMethod2(arg), respectively.

Extension methods to interfaces

Somewhat confusingly, it is possible to define extension methods- in exactly the same way as above- for interfaces. To me this possibility runs contrary to the intent of “interface”- interfaces are not supposed to be associated with actual implementations of methods. But you can do it. It is also in fact impossible to add something like an “abstract extension method” to an interface. The C# standard library unfortunately makes much use of implementing interfaces via extension methods. Oh well.

References

I referenced the following two articles in developing my understanding of IServiceCollection: (1), (2).

Clarifying Excel’s lookup functions

I’ve decided to write some of my own documentation for common use cases of the Excel functions LOOKUP, VLOOKUP, HLOOKUP and XLOOKUP because the official documentation is pretty confusing. It uses “lookup value” as a synonym for “key”, when one would conventionally expect a “lookup value” to be a synonym for “value”! (After all, in the typical key-value terminology, “values” are obtained as the result of looking up “keys”!)

Before jumping in- here’s a quick overview. All four lookup functions essentially return the result of the pseudocode values[keys.indexOf(key)], where, given arrays of “keys” and “values” named keys and values, respectively, keys.indexOf(key) is the index of the key in the array keys. Additionally,

  • LOOKUP is the most simplistic of the four functions- it pretty much looks up “values” from “keys” like you would expect.

  • The “V” and “H” in VLOOKUP and HLOOKUP stand for “vertical” and “horizontal”, respectively; in VLOOKUP, the provided 1D ranges must be columns, and in HLOOKUP they must be rows.

  • XLOOKUP combines the functionality of VLOOKUP and HLOOKUP, and allows for the provided 1D ranges to be either rows or columns. (If you have access to XLOOKUP, you should prefer it over VLOOKUP and HLOOKUP. But at the time of writing, you need access to a Microsoft 365 subscription to use XLOOKUP.)

Without further ado, here is my documentation.

LOOKUP

Syntax: LOOKUP(key, keys, values).

Returns the result of the pseudocode values[keys.indexOf(key)], where keys.indexOf(key) is the index of the key in keys, when keys is treated as an array.

key – a value that exists in keys

keys – a 1D range of “keys”

values – a 1D range of “values”
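For instance, a quick sketch with hypothetical data: if A1:A3 holds the keys a, b, c (sorted ascending, which LOOKUP requires) and B1:B3 holds the values 1, 2, 3, then

=LOOKUP("b", A1:A3, B1:B3)

returns 2.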

Notes:

  • The official documentation mentions an “array form” of a LOOKUP invocation. I don’t cover that here (the above summarizes the “vector form”) because VLOOKUP, HLOOKUP, and XLOOKUP accomplish the same thing as the “array form”.

VLOOKUP

Syntax: VLOOKUP(key, table, valuesIndex, fuzzyMatch).

Returns the result of the pseudocode values[keys.indexOf(key)], where keys is the column of “keys”, values is the column of “values”, and where keys.indexOf(key) is the index of the key in keys, when keys is treated as an array.

key – a value that exists in keys

table – a 2D range that contains the column of “keys” and the column of “values” OR a table that contains the column of “keys” and the column of “values”

valuesIndex – the column index (into table) of the column of “values”

fuzzyMatch – whether or not to fuzzily match key with values in the column of “keys” (you almost always want to use fuzzyMatch = FALSE)
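For instance, a quick sketch with hypothetical data: if A1:B3 holds the keys a, b, c in column A and the values 1, 2, 3 in column B, then

=VLOOKUP("b", A1:B3, 2, FALSE)

returns 2.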

Notes:

  • To create a table that you would use for the table argument, select the 2D range that is to be registered as a table. Then, go to the Insert tab, click Table, and then click OK.

  • You might ask: “Why would we want to specify a table that the “key” and “value” columns reside in? Why not just specify the ‘key’ and ‘value’ columns?” The reason it’s advantageous to have this table parameter is that, if we are calling VLOOKUP multiple times in the same column and varying valuesIndex between calls, we will get an error message if valuesIndex ventures outside the bounds of table. This error message can prevent us from making erroneous computations.

HLOOKUP

HLOOKUP works in the same way as VLOOKUP, with the only difference being that the “keys” and “values” must be stored in rows instead of columns.

XLOOKUP

Syntax: XLOOKUP(key, keys, values).

Returns the result of the pseudocode values[keys.indexOf(key)], where keys.indexOf(key) is the index of the key in keys, when keys is treated as an array.

key – a value that exists in keys

keys – a 1D range of “keys”

values – a 1D range of “values”

Notes:

  • A 1D range can be either a row or a column.

On Scala’s parenthesis convention for no-arg functions

One might be confused or even angered when they learn about Scala’s convention regarding parenthesis usage for no-arg functions.

The convention is this: given a no-arg function, you put parentheses next to the function call only if the function has side effects. So, you would invoke a function named printCurrentState by writing printCurrentState(), since printCurrentState has the side effect of printing output to the console. On the other hand, you would invoke a function named getCurrentState by simply writing getCurrentState, since getCurrentState presumably just returns a value and does nothing else.

But why? Here, I’ll present a quick analysis to convince you of why this convention makes sense.

Let’s consider an arbitrary no-arg function. An arbitrary no-arg function falls into one of the following categories:

  • void methods

    • with side effects (“no-arg subroutine”)

    • with no side effects (“useless method”)

  • methods with a return value

    • with no side effects (“a method that philosophically represents a variable”, i.e. a “getter method”)

    • with side effects (“variable retrieval and subroutine”).

If we ignore the possibility of “useless methods”, then we can rearrange the remaining three options to see that any given no-arg method is either

  • a void method with side effects

  • a method with a return value and no side effects

  • a method with a return value and side effects.

That is, every no-arg method is in practice either

  • a method with side effects

  • a method with a return value and no side effects, i.e., a “getter method.”

“Getter methods” are in a philosophical sense “almost variables”, because it is not completely inaccurate to think of their invocations as peeks into the state of reality rather than as values returned by work done behind the scenes.

Since every no-arg method is either a “getter method” or not, it is syntactically unambiguous to establish the convention of invoking “getter methods” without parentheses (). More importantly, it is pleasing, as the removal of parentheses emphasizes the interpretation of “getter methods” as being variables.
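Here is a quick sketch of the convention in code (the class and method names are hypothetical):

class StateMachine {
  private var state: Int = 0

  // Side-effecting: declared and invoked with parentheses.
  def printCurrentState(): Unit = println(s"state = $state")

  // Pure accessor: declared and invoked without parentheses.
  def currentState: Int = state
}

val m = new StateMachine
m.printCurrentState() // side effect: prints "state = 0"
val s = m.currentState // reads like a variable access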


Introduction to Spring Framework for Java

This article walks through the basics of using Spring Framework for Java.

From a very, very high level point of view, Spring Framework infers a program’s runtime behavior from labels that the programmer attaches to pieces of code. There are many different groupings of labels, and each grouping of labels provides an interface to the configuration of some behind-the-scenes process.

Since a simple-looking Spring Framework program with just a few labels can have quite a lot going on behind the scenes, learning Spring Framework can seem a little overwhelming to the beginner. Learning is made even more difficult by the fact that most online resources documenting Spring Framework haphazardly walk through different types of labels instead of building a fundamental ground-up understanding.

This article intends to fill that gap and provide a ground-up understanding. We will start with vanilla Java, and then, one design pattern at a time, we will build up an understanding of how each pattern is configured in Spring Framework with labels.

Before you begin reading this article in earnest…

  • You should have a firm grasp of how object-oriented programming is achieved in Java. So, you should be familiar with concepts such as: how Java passes references by value, classes, constructors, fields, access modifiers (public and private), methods, static methods, class instances/objects, getter and setter methods, inheritance, method overloading, runtime polymorphism, interfaces and abstract methods, etc.
  • You should know about the basics of Java annotations. The “labels” spoken of above are really annotations.

  • You should be aware of Marco Behler’s excellent introduction to Spring Framework. I’ve mentioned that most online resources on Spring Framework are disorganized and disappointing; his article is one of the few exceptions. It’s always good to have multiple readings to pull from when learning a topic, so I encourage you to read his article as a supplement to this one.

Dependency injection

Above, our “very, very high level” point of view was that Spring Framework infers a program’s runtime behavior from Java annotations. This is an accurate surface-level description of Spring Framework, but it isn’t a good characterization from which to grow a fundamental understanding.

We will begin our understanding of Spring Framework with a different characterization. At its core, Spring Framework is a tool for implementing the design pattern called dependency injection.

In dependency injection, the object instances on which a Java class Cls depends are “injected” into an instance obj of Cls by a container that has a reference to obj. Since the container, rather than obj, controls when obj‘s dependencies are injected, it is often referred to as an inversion of control container. Dependency injection is also sometimes called “inversion of control” for this reason.

What’s the point of dependency injection? Well, one of the advantages is that it allows us to avoid instantiating unnecessary copies of a dependency.

Suppose that multiple classes require a reference to an object that represents a connection to a particular database. Since this reference is a dependency, we can easily share a single database connection among all of the class instances by making use of the dependency injection technique described above. This is much better than wastefully giving each class instance its own database connection.
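
Before any Spring machinery is introduced, here is a minimal framework-free sketch of this kind of sharing (all class names here are hypothetical): a single connection object is created once and handed to every instance that needs it. An IoC container automates exactly this sort of hand-wiring.

public class DatabaseConnection { /* wraps a costly external connection */ }

public class ReportService
{
     private final DatabaseConnection conn;
     public ReportService(DatabaseConnection conn) { this.conn = conn; }
}

public class AuditService
{
     private final DatabaseConnection conn;
     public AuditService(DatabaseConnection conn) { this.conn = conn; }
}

public class Main
{
     public static void main(String[] args)
     {
          DatabaseConnection shared = new DatabaseConnection(); // created exactly once
          ReportService reports = new ReportService(shared);    // the same instance...
          AuditService audits = new AuditService(shared);       // ...is shared here too
     }
}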

Spring beans

When using Spring Framework, we will spend most of our time dealing with and thinking about “Spring beans,” which are the dependencies that are managed by the Spring IoC container.

To be more specific, a Spring bean is a not-necessarily-Serializable object that…

  • is created at runtime by the Spring IoC container (IoC stands for inversion of control)

  • has references to other objects or beans (“dependencies”) injected into it at runtime by the Spring IoC container

  • is otherwise controlled at runtime by the Spring IoC container.

Spring beans can be configured via XML files or by using Java annotations within Java classes. Using annotations is the modern approach; this article will use annotations only, with no XML code.

Note that a Spring bean is different from a JavaBean. JavaBeans are part of the core Java language, while Spring beans are not. (Specifically, a JavaBean is a Serializable class with a public no-argument constructor and all fields private. JavaBeans are also not managed by the Spring IoC container.)
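
For reference, here is a minimal sketch of a class meeting the JavaBean requirements just listed; the class name and field are hypothetical.

import java.io.Serializable;

public class PersonBean implements Serializable
{
     private String name;              // all fields private

     public PersonBean() { }           // public no-argument constructor

     public String getName() { return name; }
     public void setName(String name) { this.name = name; }
}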

Configuring Spring beans

In the above, we said that Spring beans are created at runtime by the Spring IoC container. But how does the Spring IoC container know what sort of beans to create? Well, you, the programmer, must write “configuration code” that specifies which Java classes should be instantiated as Spring beans at runtime.

There are two main ways to do this: either use the @Bean annotation or the @Component annotation. (You can also use an annotation derived from @Component).

First configuration method: @Bean-annotated methods that return instances

To specify that Cls should be instantiated as a bean at runtime, annotate a method that returns an instance of Cls with @Bean, and place that annotated method within a class that is itself annotated with @Configuration:

@Configuration
public class Config
{
     /* Note: any parameters declared on a @Bean method (such as args here) are
     themselves resolved as beans by the IoC container, so in practice constructor
     arguments are either other beans or values hard-coded in this method. */
     @Bean
     public Cls createABeanWrappingAClsInstance(Object args) { return new Cls(args); }
}

Second configuration method: use @Component and @ComponentScan

Another way to specify that a class Cls should be instantiated as a bean at runtime is to annotate Cls‘s class declaration (i.e. public class Cls) with @Component, while also annotating some other class, say Config, that is at or above the level of Cls in the package hierarchy with @Configuration and @ComponentScan:

@Configuration
@ComponentScan
/* Config must be at or above Cls in the package hierarchy. */
public class Config { }

@Component
public class Cls { ... }

(When you later create an ApplicationContext in the main() method of your application, you will have to tell it where to look, e.g. by passing Config.class, or a base package name, to the AnnotationConfigApplicationContext constructor. But, if you are using Spring Boot, the class containing the main() method is annotated with @SpringBootApplication, which is meta-annotated with @Configuration and @ComponentScan, so you don’t need to do anything other than annotate Cls with @Component.)

Specifically, using @ComponentScan in this way specifies that, at runtime, if a class at or below the level of Config in the package hierarchy is annotated by @Component or by an annotation whose parent annotation* is @Component, then that class will be used to construct a Spring bean. Notably, the annotations @Service, @Repository, and @Controller all have @Component as a parent annotation.

*One annotation is considered to be the “child annotation” of another annotation if it is meta-annotated by that other annotation.

Sidenote: annotation “inheritance” in Spring

As is noted in this Stack Overflow answer, Spring Framework’s AnnotationUtils class has a method that tests whether an annotation is equal to or is annotated with another annotation. I’m making an educated guess that Spring uses this sort of inheritance testing for annotations all over the place.

Differences between @Service, @Repository, and @Controller

@Service, @Repository, and @Controller are similar in that they are child annotations of @Component (i.e. they are all meta-annotated by @Component). What are some differences?

  • @Service indicates to the programmer that the class it annotates contains “business logic”. Other than that, it doesn’t enable any behind-the-scenes behavior. The Spring devs may change this some day.

  • @Repository “is a marker for any class that fulfils the role or stereotype of a repository (also known as Data Access Object or DAO). Among the uses of this marker is the automatic translation of exceptions [from implementation exceptions to a Spring exception]” (from here).

  • @Controller must annotate a class if we want to use annotations from Spring Web MVC of the form @<request type>Mapping (e.g. @GetMapping, @PostMapping). These annotations are used for setting up HTTP API endpoints, as in the sketch below.
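
As an illustration, here is a minimal sketch of an endpoint; the class name, path, and return value are hypothetical, invented for this example.

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class GreetingController
{
     @GetMapping("/greeting") // handle HTTP GET requests to the /greeting endpoint
     @ResponseBody            // write the return value directly into the HTTP response body
     public String greeting() { return "Hello!"; }
}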

Extra readings: @Component vs. @Service vs. @Repository vs. @Controller, @Component vs. @Bean.

Implementing dependency injection

We now know how to configure Spring beans, but don’t yet know anything about how to actually dependency-inject Spring beans into other Spring beans. We describe how to do so in this section.

Before we describe how to do so, though, there is a little more prerequisite knowledge to cover.

More prerequisite knowledge

Convention: “bean definition”

For the rest of this document, the term bean definition will refer to either a method annotated with @Bean that returns an object instance, or a class annotated with @Component.

Bean scopes

Every bean definition has an associated “scope”.

The default (and most important) scope is singleton. If a bean is of singleton scope, all references to that bean access the same Java object. singleton scope is used to achieve dependency sharing, which, if you recall the above “Dependency injection” section, is one of the key advantages of using an IoC container.

The second most important scope is prototype. If a bean is of prototype scope, different references to that bean are references to different Java objects.

The four other scopes, request, session, application, and websocket, can only be used in a “web-aware application context,” and are less commonly used. Don’t worry about these ones.
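
As a minimal sketch (the class name here is hypothetical), a non-default scope is requested with the @Scope annotation; singleton scope is the default and requires no annotation at all.

import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component
@Scope("prototype") // each injection (or getBean() call) receives a fresh ScratchPad instance
public class ScratchPad { }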

Terminology: “plain old Java objects” (“POJOs”)

A “plain old Java class” is a class that does not depend on an application framework such as Spring. Basically, since most Spring features are handled with annotations, a plain old Java class is a class without any Spring annotations.

Unfortunately, people say “plain old Java object” instead of “plain old Java class”, so we speak of POJOs instead of POJCs.

POJOs are often used in Spring apps in combination with not-POJOs to represent “more concrete” objects (such as an Employee, etc.).

Extra reading: http://www.shaunabram.com/beans-vs-pojos/.

Implementing somewhat-manual dependency injection

Now, we are actually ready to use Spring Framework to implement the dependency injection design pattern.

Suppose we’ve configured a Spring bean named Cls1 that has a reference to a Spring bean Cls2:

@Component
public class Cls1
{
     private Cls2 cls2;
     public Cls2 getCls2() { return cls2; }
     public void setCls2(Cls2 cls2) { this.cls2 = cls2;}
}
​
@Component
public class Cls2 { ... }

We want to inject an instance of Cls2 into our Cls1 bean at runtime. To do so, we need a reference to the Spring IoC container.

The interfaces BeanFactory and ApplicationContext both represent the IoC container. Since ApplicationContext extends BeanFactory, and therefore has more functionality, ApplicationContext should be used in most situations.

We perform dependency injection by using ApplicationContext as follows:

public class Application
{
   public static void main(String[] args)
   {
        /* "<package>" is the base package inside which to look for @Configuration classes and in which
        to perform @ComponentScan. For example, "<package>" might be "com.perficient.techbootcamp". */
        ApplicationContext ctx = new AnnotationConfigApplicationContext("<package>");

        /* The below assumes that Cls1 and Cls2 have been configured as beans (recall, this is done
        by using @Component and @ComponentScan or by using @Bean). */
        Cls1 cls1 = ctx.getBean(Cls1.class);
        Cls2 cls2 = ctx.getBean(Cls2.class);

        /* Perform (somewhat-manual) dependency injection: inject the cls2 bean into the bean cls1. */
        cls1.setCls2(cls2);
   }
}

The above code is adapted from Marco Behler’s article.

Implementing dependency injection with @Autowired

In Spring Framework, one typically uses annotations that execute the effect of the above dependency injection behind the scenes. Specifically, one uses the @Autowired annotation. When @Autowired is present on a bean’s field, an instance of that field’s type will be injected into that field at runtime.

So, if we want to replicate the functionality of the above, we would write the following:

@Component
public class Cls2 { ... }
​
@Component
public class Cls1
{
     @Autowired
     private Cls2 cls2;

     public Cls2 getCls2() { return cls2; }
     // Notice, no setter necessary.
}
​
public class Application
{
     public static void main(String[] args)
     {
           ApplicationContext ctx = new AnnotationConfigApplicationContext("<package>");

           /* The manual wiring below has been commented out because it is unnecessary.
              The above @Autowired annotation tells Spring Framework to inject
              a reference to the Cls2 bean into the Cls1 bean at runtime. */

           // Cls1 cls1 = ctx.getBean(Cls1.class);
           // Cls2 cls2 = ctx.getBean(Cls2.class);
           // cls1.setCls2(cls2);
     }
}

Field injection with @Autowired

You may wonder how it is possible to inject an instance of Cls2 into cls1 when Cls1 has no setCls2() method. After thinking about it for a second, you might suspect that injection is done by using Cls1‘s constructor. This is actually not the case. (In the above code, Cls1 doesn’t have a with-args constructor!). When @Autowired annotates a bean’s field, then, at runtime, the IoC container uses this Java reflection technique to modify the field, even if it’s private.

Placing @Autowired on a field thus constitutes field injection.
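
To demystify field injection a little, here is a minimal sketch, in plain Java, of the kind of reflection involved, reusing the Cls1 and Cls2 classes from above. (This approximates what the IoC container does; it is not Spring’s actual implementation.)

import java.lang.reflect.Field;

public class FieldInjectionDemo
{
     public static void main(String[] args) throws Exception
     {
          Cls1 cls1 = new Cls1();

          Field field = Cls1.class.getDeclaredField("cls2");
          field.setAccessible(true);          // bypass the private access modifier
          field.set(cls1, new Cls2());        // write the dependency directly into the field

          System.out.println(cls1.getCls2()); // the field is no longer null
     }
}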

Using @Autowired on fields is bad practice

According to this article, using field injection is bad practice because it prevents you from marking fields as final. (You want to be able to mark fields as final when appropriate because doing so helps keep you out of circular-dependency situations.)

More reasons why field injection is bad: https://dzone.com/articles/spring-di-patterns-the-good-the-bad-and-the-ugly.

Using @Autowired on constructors and setters

@Autowired can also be used on constructors or setters to inject a parameter into a constructor or setter at runtime.
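
For example, here is a minimal sketch of constructor injection, rewriting the Cls1 class from above. Note that the field can now be marked final, which field injection does not allow.

@Component
public class Cls1
{
     private final Cls2 cls2;

     @Autowired // tells Spring to supply the Cls2 bean as the constructor argument
     public Cls1(Cls2 cls2) { this.cls2 = cls2; }

     public Cls2 getCls2() { return cls2; }
}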

The @Qualifier annotation

Because a bean could have an @Autowired field whose type is an interface, and because multiple classes may implement the same interface, it can be necessary to specify which implementation of the interface is meant to be dependency-injected. This is done with the @Qualifier annotation, as follows:

public interface Intf { ... }

@Qualifier("impl1")
@Component
public class Impl1 implements Intf { ... }

@Qualifier("impl2")
@Component
public class Impl2 implements Intf { ... }

public class Cls
{
     @Autowired
     @Qualifier("impl1")
     private Intf intf1; // at runtime, intf1 will be set to an Impl1 instance

     @Autowired
     @Qualifier("impl2")
     private Intf intf2; // at runtime, intf2 will be set to an Impl2 instance
}

Here are the specifics of how field-names are matched to bean-names:

  • Define the qualifier-name of a bean definition or field to be: (1) the argument of the @Qualifier annotation attached to said bean definition or field, if the bean definition or field is indeed annotated with @Qualifier, and (2) the name of the class associated with the bean definition, if the bean definition or field is not annotated with @Qualifier.

  • When no @Qualifier annotation is present on a field, the class whose case-agnostic qualifier-name is equal to the case-agnostic name of the field is what is dependency-injected into the field, as in the sketch below. (“Case agnostic” means “ignore case”.)
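
Here is a minimal sketch of that second rule, reusing Intf and Impl1 from the section above.

public class Cls
{
     /* No @Qualifier here: the field name "impl1" matches Impl1's qualifier-name
     "impl1" (ignoring case), so an Impl1 instance is injected. */
     @Autowired
     private Intf impl1;
}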

End

This concludes my introduction to Spring Framework for Java. I hope you’ve gained a sense as to how Spring Framework allows you to implement dependency injection!

An abstract take on the dependency injection pattern https://blogs.perficient.com/2021/09/22/an-abstract-take-on-the-dependency-injection-pattern/ https://blogs.perficient.com/2021/09/22/an-abstract-take-on-the-dependency-injection-pattern/#respond Wed, 22 Sep 2021 19:07:36 +0000 https://blogs.perficient.com/?p=297561

This article will take a relatively abstract look at the design pattern called dependency injection (or inversion of control). I feel that most articles about dependency injection get too bogged down in the particulars of whatever example is being used to demonstrate the structure. In this article, we’ll present pure abstraction.

Well, maybe not pure abstraction: we do have to pick a particular programming language, after all! We will use Java in this article. If you don’t know Java, don’t worry too much. We’ll stick to “basic” Java, nothing esoteric.

A typical dependency situation

Consider the following dependency situation, in which a class Cls depends both upon an interface Intf and on an implementation Impl of that interface.

public interface Intf { void helperMethod(Object args); }

public class Impl implements Intf
{
    @Override
    public void helperMethod(Object args) { /* implementation */ }
}

public class Cls
{
    public void method(Object args)
    {
        Intf intf = new Impl();
        intf.helperMethod(args);
    }
}

Cls depends on Impl because it requires knowledge of the Impl type in order to execute new Impl(). To restate, our current dependency situation is:

Cls –creates–> Impl

Cls –has–> Intf

Impl –is–> Intf

We want to be in a dependency situation in which Cls depends only on Intf and not on Impl. I.e., we want to decouple the implementation of Cls from any particular implementation of Intf. As described in some Microsoft documentation, this decoupling is desired because:

  1. It allows us to change which implementation of Intf is used by Cls without modifying code* in the body of Cls.

    • Having an easy way to swap one implementation out for another makes it easy to swap in a mocked implementation, which lends itself to test-driven development.

  2. It removes the need for manual configuration of Impl‘s dependencies.

* If you’re using Spring Framework for Java, then you change the injected implementation by changing an annotation within the body of Cls. I don’t count this as changing “actual code” in the body of Cls! Admittedly, it would be better if changing the interface implementation didn’t touch anything inside Cls, so that the configuration code (the code specifying which implementation is to be injected) is completely separate from the implementation code. C# .NET’s way of doing dependency injection is better about sticking to this rule.

A better dependency situation

To improve our dependency situation, we will pass the responsibility of creating Impl to some Container class that manages Cls. When commanded to do so, Container will “inject” a newly created Impl instance into Cls. One way to perform this “injection”, constructor injection, is to pass an Impl instance to a constructor of Cls that accepts an Intf. (Of course, Cls must have a public with-arguments constructor for constructor injection to be possible). There are other forms of dependency injection, such as setter injection and field injection. (In Java, field injection is predicated on abusing reflection techniques to modify private fields).
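
As a quick sketch of setter injection (using the Intf type from above): the container calls a public setter instead of a constructor.

public class Cls
{
    private Intf intf;

    /* The container "injects" the dependency by calling this setter. */
    public void setIntf(Intf intf) { this.intf = intf; }

    public void method(Object args) { intf.helperMethod(args); }
}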

This setup is called dependency injection. Implementing dependency injection places us in the following much improved dependency situation:

Cls –has–> Intf

Container –has–> Cls

Container –has–> Intf

Container –creates–> Impl

Impl –is–> Intf

Now, Cls depends only on Intf and not on Impl, as desired.

In addition to solving the decoupling problem, dependency injection comes with the extravagant benefit of allowing for the “injection” of a single interface implementation into different classes that depend on the same interface type, when doing so makes sense. In other words, our new dependency situation also allows for the sharing of one Intf instance between class instances.

Summary

To summarize, here are the two main benefits of dependency injection:

  1. Dependency injection decouples the implementations of classes from the implementations of those classes’ dependencies. Decoupling of interfaces from implementations is desirable because…

    1.1. It allows us to change which implementation of an interface is used by a dependent class without modifying code in the body of the class.

    • Having an easy way to swap one implementation out for another makes it easy to swap in a mocked implementation, which lends itself to test-driven development.

    1.2. It removes the need for manual configuration of an injected implementation’s dependencies.

  2. Dependency injection allows us to share a single interface implementation instance between multiple classes that depend on said interface, when applicable.

Inversion of control

Since, in dependency injection, Container, rather than Cls, controls what is injected into Cls and when, control has in some sense been inverted. Dependency injection is thus an example of inversion of control. For this reason, a class such as Container is often referred to as the inversion of control container, or IoC container.

Note that while dependency injection is an example of inversion of control, not all inversion of control is dependency injection. This article by Martin Fowler details other examples of inversion of control.

Code

Here’s code that implements the dependency injection pattern. (The following is basically an abstract extrapolation of code given in Martin Fowler’s article on dependency injection).

public interface Intf { void helperMethod(); } // args now enter through Impl's constructor, so helperMethod() is no-arg

public class Impl implements Intf
{
    private Object args;
    public Impl(Object args) { this.args = args; }

    @Override
    public void helperMethod() { /* implementation that uses this.args */ }
}

public class Cls
{
    private Intf intf;
    public Cls(Intf intf) { this.intf = intf; }

    public void method() { intf.helperMethod(); }
}

public class Container { /* implementation will be kind of complicated */ }

public class Main
{
    /* A config file typically performs the task of this method. */
    private static Container configureContainer()
    {
       Object args = ... // get the arguments that should be passed to the Impl constructor
       Container cntr = new Container();

       /* The below line tells cntr all the information it needs to execute the statement
       "Intf intf = new Impl(args)". */
       cntr.registerComponentImplementation(Intf.class, Impl.class, args);

       /* This next line tells cntr all the information it needs to execute the statement
       "Cls cls = new Cls(intf)". */
       cntr.registerComponentImplementation(Cls.class);
       return cntr;
    }

    public static void main(String[] _args)
    {
       /* This is how we call cls.method(). */
       Container cntr = configureContainer();
       Cls cls = (Cls) cntr.getComponentInstance(Cls.class);
       cls.method(); // This executes the same task as "cls.method(args)" did in the original situation.
    }
}

Extra: did we give Container unnecessary information?

One detail about the lines involving cntr.registerComponentImplementation() wasn’t immediately clear to me, and might be confusing to you, too. My question was: is it necessary to pass Intf.class to the first call of registerComponentImplementation()? It seems that there should exist an implementation of Container such that, if we execute the following, cntr does what we would expect behind the scenes.

cntr.registerComponentImplementation(Impl.class, args);
cntr.registerComponentImplementation(Cls.class);

That is, it seems that cntr would have enough information to do new Cls(new Impl(args)). This is because cntr has registered Cls, whose constructor demands an Intf, and reflection on Impl.class would reveal that Impl is an Intf. And, for the sake of argument, even if cntr somehow couldn’t discover this via reflection, the JVM itself knows that Impl is an Intf: after all, executing new Cls(new Impl(args)) doesn’t require that we type cast new Impl(args) to Intf!

Answer: upon investigation, I found that it is just convention to pass more information than is strictly necessary in certain dependency injection frameworks.

Mocking in test-driven development (TDD) with Java’s EasyMock https://blogs.perficient.com/2021/09/22/mocking-in-test-driven-development-tdd-with-javas-easymock/ https://blogs.perficient.com/2021/09/22/mocking-in-test-driven-development-tdd-with-javas-easymock/#respond Wed, 22 Sep 2021 17:24:21 +0000 https://blogs.perficient.com/?p=297552

In this article, we’ll explore the test-driven development practice of mocking.

Consider a class Cls with a method method() that relies upon a method helperMethod(), where helperMethod() queries some external resource, and suppose that our goal is to test whether method() works as intended.

public class Cls
{
    private Object helperMethod(Object args)
    {
        Object result = ... // Use some external resource to obtain "result".
        return result;
    }

    public Object method(Object args)
    {
        Object result = ... // Calculate "result" by using helperMethod(args) somehow.
        return result;
    }
}

Since method() calls helperMethod(), a method that relies on an unpredictable external resource, we will need to imitate, or mock, helperMethod() in order to achieve our goal. Instead of actually calling helperMethod() within method(), we will make an educated guess as to what helperMethod()’s output should be for various inputs.

To prepare for imitating helperMethod() in this way, we will replace the call to helperMethod() with a call to an interface method.

public interface HelperI { Object helperMethod(Object args); }

public class Cls
{
    private HelperI helperI;
    public void setHelperI(HelperI helperI) { this.helperI = helperI; }

    public Object method(Object args)
    {
        Object result = ... // Calculate "result" by using helperI.helperMethod(args) somehow.
        return result;
    }
}

Specifically, the above code replaces the call to helperMethod() with a call to helperI.helperMethod().

Now, we can use a library such as EasyMock to provide a good “best guess” implementation of the interface HelperI and, most importantly, its method helperMethod().

Here’s an implementation that uses EasyMock and JUnit to do exactly this.

/* We omit the necessary "static import" statements for the EasyMock and JUnit libraries to reduce clutter. */

public class Tester
{
    private HelperI helperI;
    private Cls cls;

    @Before
    /* @Before is a JUnit annotation, not an EasyMock annotation. Any method tagged with @Before is executed
    before each test case. */
    public void setUp() throws Exception
    {
        /* For an interface intf, createNiceMock(intf) returns an instance of
        a class that implements intf, where all abstract methods of intf are
        implemented by using default values for return values.
        Note, createNiceMock() does come from EasyMock. */
        helperI = createNiceMock(HelperI.class);
        cls = new Cls();
        cls.setHelperI(helperI);
    }

    @Test
    /* @Test is a JUnit annotation that indicates the method to which it is attached is to be executed as a
    test case. */
    public void testMethod() // Recall, our goal is to test whether method() works.
    {
        /* Now, "block out" helperMethod(). The expect() call below specifies that,
        for i in {1, ..., n}, the ith time helperI.helperMethod() is called, it should
        have received the input args[i] (in this example, the input will be coming from method(),
        since method() calls helperI.helperMethod()), and that it will return helperReturns[i]. */

        int n = ... // some positive integer

        Object[] args = ... // A length n array of inputs. We will use EasyMock to
                            // ensure that helperI.helperMethod() receives args[i]
                            // from method() in the ith test below.

        Object[] helperReturns = ... // A length n array of outputs. We will use EasyMock to impose that
                                     // helperI.helperMethod() should return helperReturns[i] upon receiving args[i] as input.

        Object[] expectedMethodReturns = ... // A length n array of outputs. We hope that method() will
                                             // return expectedMethodReturns[i] in the ith iteration.

        for (int i = 0; i < n; i++)
            expect(helperI.helperMethod(args[i])).andReturn(helperReturns[i]);

        /* Apply the mocking that was just specified above to the helperI mock. */
        replay(helperI);

        /* The mocking is all set up now, so we can now test if method() works. */

        /* Do the n tests that were set up by the expect() calls above.
        The ith iteration of the for loop executes the ith test.
        In the ith test, the input passed to helperI.helperMethod() from method()
        should be args[i]. When the input to helperI.helperMethod() is indeed
        args[i], helperI.helperMethod() will return helperReturns[i]. */

        for (int i = 0; i < n; i++)
        {
            Object methodArgs = ... // methodArgs should be such that, in the ith iteration of this loop, calling
                                    // cls.method(methodArgs) results in calling helperI.helperMethod(args[i]) within cls.method()
            assertEquals(expectedMethodReturns[i], cls.method(methodArgs)); // assertEquals() is a JUnit method
        }
    }
}
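
To make the abstract pattern above concrete, here is a small self-contained toy instance. The WeatherService and Reporter types are invented for this example, and the import statements are spelled out this time.

import static org.easymock.EasyMock.createNiceMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

interface WeatherService { int currentTempF(String city); }

class Reporter
{
    private WeatherService weatherService;
    public void setWeatherService(WeatherService ws) { this.weatherService = ws; }
    public String report(String city) { return city + ": " + weatherService.currentTempF(city) + "F"; }
}

public class ReporterTest
{
    @Test
    public void testReport()
    {
        /* Mock the "external resource" interface and block out its one method. */
        WeatherService mock = createNiceMock(WeatherService.class);
        expect(mock.currentTempF("Chicago")).andReturn(72);
        replay(mock);

        /* Inject the mock, then test the method that depends on it. */
        Reporter reporter = new Reporter();
        reporter.setWeatherService(mock);
        assertEquals("Chicago: 72F", reporter.report("Chicago"));
    }
}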
Source

This tutorial was used as a source for this article.
