Perficient Minneapolis office Articles / Blogs / Perficient
https://blogs.perficient.com/tag/perficient-minneapolis-office/
Fri, 29 Aug 2025 15:17:54 +0000

A tour of PowerQuery’s M language
https://blogs.perficient.com/2022/04/22/a-tour-of-powerquerys-m-language/
Fri, 22 Apr 2022 14:43:02 +0000

In a previous post, I introduced PowerQuery and demonstrated how to perform various SQL-like operations. This article gives a tour of PowerQuery’s M language that underlies each query.

let and in

If you select a query and click on the “Advanced Editor” button in the Home tab, you’ll see something like this:

[Screenshot: the query’s M code shown in the Advanced Editor]

This is the M language code that constitutes our query. We’ll soon come back to the above code, but for now, let’s gain a basic understanding of how M works.

The first thing to know about M is that most M scripts are of the form let ... in .... In such a script, intermediate computations happen inside the let statement, and the content after in is the script’s return value.

For example, when the M code

let
     x = 3,
     y = x + 5
in
     y

is the script underlying a query, then that query appears as follows in the GUI:

[Screenshot: the query produced by this script, shown in the GUI]

Interestingly enough, it is not actually necessary for a script to contain the keywords let and in, so long as the content of the script is a single expression that evaluates to a value. For instance,

5

is a perfectly valid M script!

So, it is more accurate to say that

  • The contents of every M script must evaluate to a value.

  • let ... in ... evaluates the content after in. Therefore, since let ... in ... evaluates to a value, any script may be of the form let ... in ... .

We should also note that one can place code of the form x = let ... in ... within any existing let block, and then make use of x!
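For instance, here is a quick sketch of a nested let (the variable names are hypothetical):

let
     x =
          let
               a = 2,
               b = 3
          in
               a * b,
     y = x + 1
in
     y

This script evaluates to 7, since the inner let expression evaluates to 6 and is assigned to x.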

let ... in ... vs. select ... from ...

In my opinion, the let ... in ... syntax doesn’t really make much sense. I think the M language would make much more sense if there were no let or in, and every script simply returned the value of its last line.

It seems to me that let ... in ... is supposed to evoke connotations of SQL’s select ... from .... Comparisons between let ... in ... and select ... from ... quickly break down, though:

  • The data source in a SQL query is specified in the from clause, while the data source of a let ... in ... statement typically appears in the let clause.

  • The result set of a SQL query is determined primarily from the select clause, while the result of a let ... in ... statement is whatever comes after in.


Autogenerated M code

Now that we have some knowledge about let ... in ..., we can look at some sample M code that is autogenerated after using the GUI to create a query:

let
     Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
     #"Changed Type" = Table.TransformColumnTypes(Source,{{"col1", Int64.Type}, {"col2", type text}, {"col3", type text}}),
     #"Filtered Rows" = Table.SelectRows(#"Changed Type", each [col1] = 1 or [col2] = "b")
in
     #"Filtered Rows"

Looking closely at the above code teaches us two important facts about the M language:

  1. Variable identifiers can be of the form #"{string}", where {string} is any string of characters.

  2. The autogenerated M code corresponding to each “step” in a PowerQuery query references the previous step. (E.g., when computing #"Changed Type", we pass Source to Table.TransformColumnTypes()).

If we consult the M documentation for any of the functions in the above (Excel.CurrentWorkbook(), Table.TransformColumnTypes(), Table.SelectRows()), we also see that the objects representing each “step” in a query are of type table.

M data types

  • The Microsoft documentation describes M as having the following primitive types: binary, date, datetime, datetimezone, duration, list, logical, null, number, record, text, time, type.

  • There are also “abstract types”: function, table, any, and none.

  • Types in M can be declared as nullable.

  • Some types represent types (type number and type text are such types).

Lists and records

In M, the basic collection types are lists and records. Lists are 0-indexed.

Lists are essentially “arrays”, and records map string-valued “keys” to “values.” (So records are essentially “dictionaries”/”hashmaps”).

To initialize a list, use code such as lst = {1, "a", 2, false}. To initialize a record, use code such as rec = [key1 = 1, key2 = "blah"]. To access the ith element of a list, use lst{i}. To get the value of the field named key2 in a record rec, use rec[key2] (the field name is written literally inside the brackets, not as a string or variable).
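As a small sketch combining these constructs (the values are hypothetical):

let
     lst = {1, "a", 2, false},
     rec = [key1 = 1, key2 = "blah"],
     secondElement = lst{1},    // "a", since lists are 0-indexed
     value = rec[key2]          // "blah"
in
     value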

M uses functional programming

In M, we use functional programming constructs in the place of looping constructs. The go-to functional programming construct is the function List.Transform(). Given a list lst and a function fn, List.Transform(lst, fn) returns the list that is the result of applying fn to each element of lst.
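For example, this sketch doubles each element of a list:

List.Transform({1, 2, 3}, each _ * 2)
// returns {2, 4, 6}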

The function List.Generate() can also be handy. Whenever you can’t think of a good way to solve your problem by using List.Transform(), and it really is best to essentially implement a for loop, use code of this form to do so:

List.Generate(() => 0, each _ < n, each _ + 1, each {expression})

It will evaluate {expression} n times (once per generated state) and return the list of results.
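For instance, the following sketch uses the optional fourth argument (a selector applied to each generated state) to build the squares of 0 through 4:

List.Generate(() => 0, each _ < 5, each _ + 1, each _ * _)
// returns {0, 1, 4, 9, 16}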

User-defined functions

Writing user-defined functions in M can prove very useful. In my work, I found that I needed to repeat a certain sequence of steps many times. If I were to manually rewrite these steps with the PowerQuery GUI repeatedly, I would drive myself insane and have way too many PowerQuery steps. But, since I created a user-defined function to perform that sequence of steps, I was able to collapse them into a single step!

The syntax for defining a custom function uses anonymous function syntax.

fn = (x) => x * x

(If you were to evaluate fn(x) elsewhere in the script, that invocation fn(x) would return x * x).

The query whose M script is the above looks like this in the GUI:

[Screenshot: the function query as it appears in the GUI]

Global variables and global functions

When a variable or function is used multiple times in multiple scripts, it is best practice to separate the definition of the variable or function from all of the scripts that use the variable or function. To define a global variable with a value of, say, 5, use the Advanced Editor* to make a query’s M code

5

Then, change the name of the query to be the desired identifier for the variable.

Since functions are variables of type function, the process for defining a global function is the same. For example, to declare a global function named fn that sends x to x * x, create a query whose name is fn, and edit the query’s M code with the Advanced Editor* so that it is

(x) => x * x

* If you use the smaller editor instead of the Advanced Editor, you will have to prepend an equals = to the beginning of your code to avoid errors.

Accessing the “current” table row

Recall that the function call that implements the equivalent of a general where clause looks something like

Table.SelectRows(#"Changed Type", each [col1] = 1)

There are several concepts at play here that we glossed over before and that deserve explanation.

  • Rows of tables are represented as records. If row is a record that represents some row of a table, the value in column col of that row is row[col].

  • The second argument of Table.SelectRows() is a function whose input is a record that represents the “current row” of the table and whose output is a logical (i.e. a boolean) that indicates whether or not to include the current row in the result set.

  • _ is a valid variable name in M, and so the function (_) => fn(_) is the same as the function (x) => fn(x) . For example, the function (_) => _ * _ is the same as the function (x) => x * x.

  • The each keyword is shorthand for the syntax (_) =>.

  • Whenever a variable var appears in square brackets to the right of an each, M interprets [var] as meaning _[var]. Therefore, an expression such as each [var] is the same as (_) => _[var].

Knowing all of these things, we see that the above code translates to

Table.SelectRows(#"Changed Type", (_) => _[col1] = 1)

Since you might be uncomfortable with using _ as a variable, let’s consider another equivalent function call:

Table.SelectRows(#"Changed Type", (row) => row[col1] = 1)

Here, we understand (row) => row[col1] = 1 to be the function that takes in a record representing the current row, looks at the value in this record associated with the key col1, and returns true whenever that value is equal to 1. Thus, the above code selects the rows from the table that have a value in column col1 of 1.

Data exploration with PowerQuery
https://blogs.perficient.com/2022/04/22/data-exploration-with-powerquery/
Fri, 22 Apr 2022 14:29:53 +0000

Microsoft’s PowerQuery is a neat tool that allows one to perform SQL-like operations on Excel tables.

When investigating a database, I actually prefer using PowerQuery over raw SQL for a couple reasons:

  • PowerQuery displays result sets that are much easier to look at than a typical SQL plaintext result set.

  • It’s easy to immediately interact with PowerQuery result sets by using the graphical user interface.

  • Most importantly, you write PowerQuery queries one step at a time and can therefore easily sanity check a query as you write it. (It’s tedious to do so in raw SQL).

If you frequently use SQL to investigate databases, I highly recommend that you try out PowerQuery.

To try PowerQuery out on some test data, just create an Excel Table*, then select any cell within that Table, go to the Data tab at the top of the screen, and click “From Table/Range”. (* To create an Excel Table: enter some random data into a rectangular range of cells, then select any cell within that range, go to the Insert tab at the top of the screen, and click “Table”).

Here’s what happens if I have the following Excel Table:

[Screenshot: the example Excel Table]

After I select a cell from the above table, and click “From Table/Range”, the PowerQuery editor pops up:

We can see that PowerQuery has represented my Excel Table as a query. We can also see the graphical user interface that allows us to interactively add steps to said query.

PowerQuery equivalents to SQL constructs

It’s instructive to think about how we can accomplish various SQL constructs within PowerQuery.

  • To do the equivalent of a select statement, and select a subset of columns from the result set, we would click on the “Choose Columns” button (visible above).

  • To do a select distinct, we use “Choose Columns” to execute the desired select, and then, in the following result set, select all columns, right click, and select “Remove Duplicates”.

  • Accomplishing the equivalent of a where clause- selecting the subset of rows from the result set for which a certain condition is true- is a bit hacky in general. (We describe how to do this later). In the case when the condition only involves one column, though, we can do everything in a non-hacky way. If we want to filter the above result set for rows with col1 = 1, we would click the downwards arrow inside the col1 header, and use either the “Number Filters” option or the checkbox next to “1” in the following menu:

    [Screenshot: the column filter menu for col1]

  • To do a group by, we go to the Transform tab at the top of the screen, and click “Group By”.

  • To do a join (whether inner, left, right, full outer, etc.), we click “Merge Queries” from within the Home tab. To do a union, we click “Append Queries” from within the Home Tab.

    • To increase encapsulation, one can use the “Merge Queries as New” or “Append Queries as New” options to produce a table that is the result of joining or unioning two existing tables.

      [Screenshot: the “Merge Queries as New” and “Append Queries as New” menu options]

General where clauses

Above, we noted that accomplishing a where clause that involves more than one column is a bit hacky. We describe how to write such a where clause here. It’s really not that bad: first, just click the downwards arrow inside any column’s header, and filter for anything you like. I’ve done so, and filtered the above data for rows with col1 = 1:

[Screenshot: the result set filtered for rows with col1 = 1]

Notice the code that appears in the bar that runs horizontally over the top of the table:

= Table.SelectRows(#"Changed Type", each [col1] = 1)

This code provides a more low-level description of what the “Filtered Rows” step of the query is doing. You can probably guess how we accomplish a general filter (one that involves columns other than col1). If we wanted to change the filtering condition to, say, col1 = 1 or col2 = "b", then what we do is edit said code to be

= Table.SelectRows(#"Changed Type", each [col1] = 1 or [col2] = "b")

It works! We get

[Screenshot: the result set filtered for rows with col1 = 1 or col2 = "b"]

In general, any column of the table can be referenced in an “each statement” such as the above by enclosing the column name in square brackets. Soon, we’ll learn more about what this square bracket notation actually means, and why it must come after the keyword each.

Dependency injection in C# .NET
https://blogs.perficient.com/2022/03/21/dependency-injection-in-c-net/
Mon, 21 Mar 2022 20:02:34 +0000

I’ve decided to write a tutorial on how to accomplish dependency injection in C# .NET, because the Microsoft documentation of dependency injection libraries is unfortunately way too sparse, and the Microsoft dependency injection tutorial is a little convoluted and complicated.

Fortunately, C# .NET’s implementation of dependency injection is pretty straightforward. In my opinion, it’s way more straightforward than the implementation provided by Java’s Spring Framework. If you understand the basics of the dependency injection concept but haven’t yet tried it out in practice, C# .NET could be your best bet.

Dependency injection recap

Here’s a quick recap on what dependency injection entails. If you want more detail, this article I wrote may be helpful.

In general, whether dependency injection is in play or not, classes may specify types, called dependencies, that they have has-a relationships with.

In dependency injection, instances of classes are not responsible for creating instances of their dependencies. Instead, a managing container maintains has-a relationships with instances of the classes, and the user specifies to the container which implementations of the dependencies they want to use by calling one of the container’s methods, or by writing so-called “configuration code” that is interpreted by the container. At runtime, the container “injects” these implementations into the class instances.

Why use dependency injection? The main point is to separate interface from implementation. Why is this important? I suggest you read the linked article above for more details.

.NET dependency injection terminology

The first thing to be aware of when learning dependency injection in C# .NET is that Microsoft uses some alternative terminology when discussing dependency injection concepts. If you want to be able to understand the Microsoft documentation, you need to be aware of this terminology. So, here’s some vocabulary:

Microsoft phrase      | Meaning
service               | dependency
service registration  | the storing of dependencies in the managing container
service resolving     | the injection at runtime of a dependency

ServiceDescriptor

The ServiceDescriptor class is what represents a service (recall that “service” means “dependency”). The most down-to-earth constructor of ServiceDescriptor is as follows:

public ServiceDescriptor (Type serviceType, Type implementationType, ServiceLifetime lifetime)

So, we see that in C# .NET, a service essentially wraps the type of the dependency, the type of the preferred implementation for said dependency, and the “lifetime” of the dependency.
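As a sketch (IGreeter and Greeter here are hypothetical illustration types, not part of the library):

using Microsoft.Extensions.DependencyInjection;

// Describe a singleton service: "when an IGreeter is requested, supply a (single, shared) Greeter"
var descriptor = new ServiceDescriptor(typeof(IGreeter), typeof(Greeter), ServiceLifetime.Singleton);

public interface IGreeter { string Greet(); }
public class Greeter : IGreeter { public string Greet() => "hello"; }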

ServiceLifetime

In my opinion, “lifetime” should really be called “instantiation multiplicity”, since the value of lifetime in the above constructor determines whether or not the management container is to create multiple instances of the dependency, and, if so, how to do so.

Specifically, ServiceLifetime is an enum that can take on the value Singleton, Transient, or Scoped.

  • Singleton indicates that the management container (which we have not seen yet) will ensure that only one instance of the service will be created throughout the program lifetime. All class instances which depend on the service will share the same service.

  • Transient indicates that the management container will ensure that a new instance of the dependency will be created whenever a different class instance needs it.

  • The meaning of Scoped is a little complicated for a first pass at dependency injection in C# .NET. If you want to learn about it, read this.

ServiceDescriptor properties

You already saw the ServiceDescriptor constructor, which is what’s most important in regards to understanding ServiceDescriptor. For a bit more detail, here are the public properties that are wrapped by the ServiceDescriptor:

public Func<IServiceProvider,object>? ImplementationFactory { get; } // a factory method that stores instructions on how to build an instance of the implementation type
public object? ImplementationInstance { get; }
public Type? ImplementationType { get; } // type of the wrapped instance, ImplementationInstance
public ServiceLifetime Lifetime { get; }
public Type ServiceType { get; } // type of the wrapped interface

Some of the above may be confusing, so here are some clarifying notes:

  • Func<T1, T2> represents a function that takes an argument of type T1 as input and returns a type T2 instance as output. Thus, the ImplementationFactory property is a function that takes an IServiceProvider as input and returns an instance of the implementation as output. ImplementationFactory can be thought of as wrapping instructions for how to create an instance of the implementation instance.

  • For any type T, the expression T? is shorthand for Nullable<T>, which represents a nullable version of the type T. A type is called nullable if compiler errors are not thrown when a null value of said type is attempted to be used. For more context on Nullable<T>, see the below appendix.

Registering services (ServiceDescriptors) with IServiceCollection

So far, we know how to represent services (dependencies) as ServiceDescriptors. We’ll now learn how to create a managing container and how to register our services with said container.

An instance of type IServiceCollection is what will represent our managing container. From its interface definition, we can see that IServiceCollection is a collection of ServiceDescriptors. (So, IServiceCollection is interpreted as “I{ServiceCollection}“, which means “interface to a collection of services”, not “{IService}Collection“, which would mean “collection of interfaces to services”!).

using Microsoft.Extensions.DependencyInjection;
using System.Collections.Generic;
public interface IServiceCollection : ICollection<ServiceDescriptor>, IEnumerable<ServiceDescriptor>, IList<ServiceDescriptor> { }

Microsoft provides an implementation of IServiceCollection for us- the ServiceCollection class from the Microsoft.Extensions.DependencyInjection namespace- so we don’t have to take care of the implementation ourselves.

Service registration via extension methods to IServiceCollection

In order to store services in an IServiceCollection, we need to enable access to some extension methods to IServiceCollection.

(An extension method is an instance method of a class that is added to the class after the class is defined. Confusingly, you can add extension methods, which are non-abstract methods, to an interface. To learn more about extension methods, see the below appendix).

To obtain access to the extension methods we need, just include a using Microsoft.Extensions.DependencyInjection; statement at the top of the file. (The extension methods are defined in the ServiceCollectionServiceExtensions class of that namespace.)

Some important extension methods (with the parameter for the extended class, IServiceCollection, omitted) added by ServiceCollectionServiceExtensions are:

AddSingleton(Type serviceType, Type implementationType);
AddSingleton(Type serviceType); // is the above with implementationType = serviceType

Being extension methods to IServiceCollection, these methods are invoked on an instance of type IServiceCollection in the same way that usual instance methods are. For example, if services has type IServiceCollection, then we would call the above methods by writing

services.AddSingleton(serviceType, implementationType);
services.AddSingleton(serviceType);

You can probably surmise that these two methods add a ServiceDescriptor of lifetime Singleton that has the specified serviceType and implementationType to the IServiceCollection.
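Concretely, here is a sketch (again using hypothetical IGreeter/Greeter types):

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Register "when an IGreeter is requested, supply a singleton Greeter"
services.AddSingleton(typeof(IGreeter), typeof(Greeter));

// The generic overload expresses the same registration
services.AddSingleton<IGreeter, Greeter>();

public interface IGreeter { string Greet(); }
public class Greeter : IGreeter { public string Greet() => "hello"; }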

There are also versions of the above methods for which certain combinations of the parameters are held fixed:

AddSingleton<TService,TImplementation>()
AddSingleton<TService>(); // is the above with TImplementation = TService
AddSingleton<TService,TImplementation>(Func<IServiceProvider,TImplementation> implementationFactory)

And, of course, for every method whose name is AddSingleton, there will be corresponding methods with names of AddTransient and AddScoped that perform the same task for services of Transient and Scoped lifetimes, respectively.

These two versions of AddSingleton, which specify the actual instance that is to be wrapped by the singleton service, don’t have AddTransient or AddScoped counterparts, though, because it wouldn’t make sense to specify only a single instance to AddTransient or AddScoped:

AddSingleton(Type serviceType, object implementationInstance);
AddSingleton<TService>(TService instance);

Resolving services at runtime with IServiceProvider

At this point, we know how to represent services (dependencies) as ServiceDescriptors, how to create a managing container, and how to register our services with the container. The last item we need to address is that of configuring the resolving of services at runtime (i.e. configuring the injection of dependencies at runtime).

Let’s suppose that services is an IServiceCollection (i.e. a managing container) that contains some ServiceDescriptors (i.e. “services”, or dependencies).

To grab services from a managing container named services at runtime, we will first obtain an instance of type IServiceProvider from the managing container* by storing the return value of services.BuildServiceProvider(). Then, we grab a particular service by using the single abstract method specified in the IServiceProvider interface:

public object? GetService(Type serviceType)

* services.BuildServiceProvider returns an instance of Microsoft.Extensions.DependencyInjection.ServiceProvider, which implements IServiceProvider.
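Putting it all together, a sketch (with the same hypothetical IGreeter/Greeter types as before):

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton<IGreeter, Greeter>();

// Build the provider, then resolve the dependency at runtime
var provider = services.BuildServiceProvider();
var greeter = (IGreeter?)provider.GetService(typeof(IGreeter));
// There is also a generic convenience extension method: provider.GetService<IGreeter>()

public interface IGreeter { string Greet(); }
public class Greeter : IGreeter { public string Greet() => "hello"; }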

Somewhat random: creating IServiceCollections by using IServiceProviders

This section is pretty optional.

If you already have one IServiceCollection instance and corresponding IServiceProvider, and you want to create another IServiceCollection instance by making use of dependencies stored in the first IServiceCollection instance, you can use these extension methods to IServiceCollection:

AddSingleton(Type serviceType, Func<IServiceProvider, object> factory);
AddSingleton<TService,TImplementation>(Func<IServiceProvider,TImplementation> implementationFactory);
AddSingleton<TService>(Func<IServiceProvider, TService> implementationFactory); // is the above with TImplementation = TService

Appendix

This appendix documents some less commonly known language features of the C# language.

? and nullable reference types

A non-nullable type is a type for which the compiler complains (with an error, or, in the case of reference types, a warning) when a variable of that type holding a null value is used. Contrastingly, a nullable type is a type for which such complaints are not raised. You can still get runtime errors with nullable types, of course! The whole point of non-nullable types is to avoid runtime errors by catching them at compilation.

According to the Microsoft documentation, all reference types were nullable prior to C# 8.0. Nowadays (i.e. in C# 8.0 and later, when the nullable context is enabled), all reference types are non-nullable by default.

You can still use nullable types if you really want, though. For any value type T, the type Nullable<T> is nullable, and T? is shorthand for Nullable<T>. (For a reference type T, writing T? marks it as a nullable reference type.)
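A small sketch (assuming the nullable context is enabled):

#nullable enable

string nonNullable = null;  // compiler warning: assigning null to a non-nullable reference
string? nullable = null;    // fine: a nullable reference type
int? maybeNumber = null;    // fine: shorthand for Nullable<int>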

Extension methods

In C#, it is possible to define instance methods outside of the corresponding class definition. Methods defined in this way are called extension methods.

Extension methods must be defined in a static class, and must use the this keyword in the following way:

public class Cls { ... }

public static class Extension
{
     public static int extensionMethod1(this Cls cls)
     { int someValue = 0; return someValue; }

     public static int extensionMethod2(this Cls cls, int arg)
     { int someValue = 0; return someValue; }
}

Extension methods are called in the same way as regular instance methods: to call the above defined extension methods on an instance cls of Cls, you would write cls.extensionMethod1() or cls.extensionMethod2(arg), respectively.

Extension methods to interfaces

Somewhat confusingly, it is possible to define extension methods- in exactly the same way as above- for interfaces. To me, this possibility runs contrary to the intent of an “interface”: interfaces are not supposed to be associated with actual implementations of methods. But you can do it. (In fact, it is impossible to add something like an “abstract extension method” to an interface.) The C# standard library unfortunately makes much use of implementing interfaces via extension methods. Oh well.

References

I referenced the following two articles in developing my understanding of IServiceCollection: (1), (2).

Clarifying Excel’s lookup functions
https://blogs.perficient.com/2022/02/09/clarifying-excels-lookup-functions/
Wed, 09 Feb 2022 20:44:43 +0000

I’ve decided to write some of my own documentation for common use cases of the Excel functions LOOKUP, VLOOKUP, HLOOKUP and XLOOKUP because the official documentation is pretty confusing. It uses “lookup value” as a synonym for “key”, when one would conventionally expect a “lookup value” to be a synonym for “value”! (After all, in the typical key-value terminology, “values” are obtained as the result of looking up “keys”!)

Before jumping in- here’s a quick overview. All four lookup functions essentially return the result of the pseudocode values[keys.indexOf(key)], where, given arrays of “keys” and “values” named keys and values, respectively, keys.indexOf(key) is the index of key in the array keys. Additionally,

  • LOOKUP is the simplest of the four functions- it pretty much looks up “values” from “keys” like you would expect.

  • The “V” and “H” in VLOOKUP and HLOOKUP stand for “vertical” and “horizontal”, respectively; in VLOOKUP, the provided 1D ranges must be columns, and in HLOOKUP they must be rows.

  • XLOOKUP combines the functionality of VLOOKUP and HLOOKUP, and allows the provided 1D ranges to be either rows or columns. (If you have access to XLOOKUP, you should prefer it over VLOOKUP and HLOOKUP. But at the time of writing, you need a Microsoft 365 subscription to use XLOOKUP).

Without further ado, here is my documentation.

LOOKUP

Syntax: LOOKUP(key, keys, values).

Returns the result of the pseudocode values[keys.indexOf(key)], where keys.indexOf(key) is the index of the key in keys, when keys is treated as an array.

key – a value that exists in keys

keys – a 1D range of “keys”

values – a 1D range of “values”

Notes:

  • The official documentation mentions an “array form” of a LOOKUP invocation. I don’t cover that here (the above summarizes the “vector form”) because VLOOKUP, HLOOKUP, and XLOOKUP accomplish the same thing as the “array form”.


VLOOKUP

Syntax: VLOOKUP(key, table, valuesIndex, fuzzyMatch).

Returns the result of the pseudocode values[keys.indexOf(key)], where keys is the column of “keys”, values is the column of “values”, and keys.indexOf(key) is the index of key in keys, when keys is treated as an array.

key – a value that exists in keys

table – a 2D range that contains the column of “keys” and the column of “values” OR a table that contains the column of “keys” and the column of “values”

valuesIndex – the column index (into table) of the column of “values”

fuzzyMatch – whether or not to fuzzily match key with values in the column of “keys” (you almost always want to use fuzzyMatch = FALSE)

Notes:

  • To create a table that you would use for the table argument, select the 2D range that is to be registered as a table. Then, go to the Insert tab, click Table, and then click OK.

  • You might ask: “Why would we want to specify a table that the “key” and “value” columns reside in? Why not just specify the ‘key’ and ‘value’ columns?” The reason it’s advantageous to have this table parameter is that, if we are calling VLOOKUP multiple times and varying valuesIndex between calls, we will get an error message if valuesIndex ventures outside the bounds of table. This error message can prevent us from making erroneous computations.
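For instance, suppose (hypothetically) that cells A2:A5 hold fruit names and cells B2:B5 hold prices, so that A2:B5 is the table. Then the formula

=VLOOKUP("apple", A2:B5, 2, FALSE)

returns the value in column B (the 2nd column of the table) of the row whose column A cell equals "apple", using exact matching.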

HLOOKUP

HLOOKUP works in the same way as VLOOKUP, with the only difference being that the “keys” and “values” must be stored in rows instead of columns.

XLOOKUP

Syntax: XLOOKUP(key, keys, values).

Returns the result of the pseudocode values[keys.indexOf(key)], where keys.indexOf(key) is the index of the key in keys, when keys is treated as an array.

key – a value that exists in keys

keys – a 1D range of “keys”

values – a 1D range of “values”

Notes:

  • A 1D range can be either a row or a column.

On Scala’s parenthesis convention for no-arg functions
https://blogs.perficient.com/2022/02/08/on-scalas-parenthesis-convention-for-no-arg-functions/
Tue, 08 Feb 2022 17:54:21 +0000

One might be confused or even angered when they learn about Scala’s convention regarding parenthesis usage for no-arg functions.

The convention is this: given a no-arg function, you put parentheses next to the function call only if the function has side effects. So, you would invoke a function named printCurrentState by writing printCurrentState(), since printCurrentState has the side effect of printing output to the console. On the other hand, you would invoke a function named getCurrentState by simply writing getCurrentState, since getCurrentState presumably just returns a value and does nothing else.
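A quick sketch of the convention (state, printCurrentState, and getCurrentState are hypothetical names):

object StateExample {
     private var state: Int = 0

     // Has a side effect (printing), so it is declared and invoked with ()
     def printCurrentState(): Unit = println(state)

     // Pure accessor, so it is declared and invoked without ()
     def getCurrentState: Int = state
}

// usage: StateExample.printCurrentState(); val s = StateExample.getCurrentState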

But why? Here, I’ll present a quick analysis to convince you of why this convention makes sense.

Let’s consider an arbitrary no-arg function. An arbitrary no-arg function falls into one of the following categories:

  • void methods

    • with side effects (“no-arg subroutine”)

    • with no side effects (“useless method”)

  • methods with a return value

    • with no side effects (“a method that philosophically represents a variable”, i.e. a “getter method”)

    • with side effects (“variable retrieval and subroutine”).

If we ignore the possibility of “useless methods”, then we can rearrange the remaining three options to see that any given no-arg method is either

  • a void method with side effects

  • a method with a return value and no side effects

  • a method with a return value and side effects.

That is, every no-arg method is in practice either

  • a method with side effects

  • a method with a return value and no side effects, i.e., a “getter method.”

“Getter methods” are, in a philosophical sense, “almost variables”, because it is not completely inaccurate to think of their invocations as peeks into the state of reality rather than as values returned by work done behind the scenes.

Since every no-arg method is either a “getter method” or not, it is syntactically unambiguous to establish the convention of invoking “getter methods” without parentheses (). More importantly, it is pleasing, as the removal of parentheses emphasizes the interpretation of “getter methods” as being variables.

Introduction to Spring Framework for Java https://blogs.perficient.com/2022/01/20/introduction-to-spring-framework/ https://blogs.perficient.com/2022/01/20/introduction-to-spring-framework/#respond Thu, 20 Jan 2022 19:58:16 +0000 https://blogs.perficient.com/?p=303626

Introduction to Spring Framework for Java

This article walks through the basics of using Spring Framework for Java.

From a very, very high level point of view, Spring Framework infers a program’s runtime behavior from labels that the programmer attaches to pieces of code. There are many different groupings of labels, and each grouping of labels provides an interface to the configuration of some behind-the-scenes process.

Since a simple-looking Spring Framework program with just a few labels can have quite a lot going on behind the scenes, learning Spring Framework can seem a little overwhelming to the beginner. Learning is made even more difficult by the fact that most online resources documenting Spring Framework haphazardly walk through different types of labels instead of building a fundamental ground-up understanding.

This article intends to fill this gap and provide a ground-up understanding. We will start with vanilla Java, and then, one programming design pattern at a time, we will build up an understanding of how each design pattern is configured in Spring Framework with labels.

Before you begin reading this article in earnest…

  • You should have a firm grasp of how object oriented programming is achieved in Java. So, you should be familiar with concepts such as: “pass reference by value” and “pass by reference”, classes, constructors, fields, access modifiers (public and private), methods, static methods, class instances/objects, getter and setter methods, inheritance, method overloading, runtime polymorphism, interfaces and abstract methods, etc.
  • You should know about the basics of Java annotations. The “labels” spoken of above are really annotations.

  • You should be aware of Marco Behler’s excellent introduction to Spring Framework. I’ve mentioned that most online resources on Spring Framework are disorganized and disappointing; his article is one of the few exceptions. It’s always good to have multiple readings to pull from when learning a topic, so I encourage you to read his article as a supplement to this one.

Dependency injection

Above, our “very, very high level” point of view was that Spring Framework infers a program’s runtime behavior from Java annotations. This is an accurate surface-level description of Spring Framework, but it isn’t a good characterization from which to grow a fundamental understanding.

We will begin our understanding of Spring Framework with a different characterization. At its core, Spring Framework is a tool for implementing the design pattern called dependency injection.

In dependency injection, the object instances on which a Java class Cls depends are “injected” into an instance obj of Cls by a container that has a reference to obj. Since the container, rather than obj, controls when obj‘s dependencies are injected, it is often referred to as an inversion of control container. Dependency injection is also sometimes called “inversion of control” for this reason.

What’s the point of dependency injection? Well, one of the advantages is that it allows us to avoid instantiating unnecessary copies of a dependency.

Suppose that multiple classes require a reference to an object that represents a connection to a particular database. Since this reference is a dependency, we can easily share a single database connection among all of the class instances by making use of the dependency injection technique described above. This is much better than wastefully giving each class instance its own database connection.
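This sharing can be sketched in plain Java (a toy example; the class names are hypothetical, and a real IoC container would do this wiring for you):

```java
/* Toy sketch of dependency sharing: one "connection" object injected into two services. */
public class SharedDependencySketch
{
    static class DbConnection { }

    static class ReportService
    {
        final DbConnection conn;
        ReportService(DbConnection conn) { this.conn = conn; }
    }

    static class BillingService
    {
        final DbConnection conn;
        BillingService(DbConnection conn) { this.conn = conn; }
    }

    public static void main(String[] args)
    {
        DbConnection shared = new DbConnection();            // created once, by the "container"
        ReportService reports = new ReportService(shared);   // injected
        BillingService billing = new BillingService(shared); // the same instance, injected again
        System.out.println(reports.conn == billing.conn);    // prints true: one connection, shared
    }
}
```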

Spring beans

When using Spring Framework, we will spend most of our time dealing with and thinking about “Spring beans,” which are the dependencies that are managed by the Spring IoC container.

To be more specific, a Spring bean is a not-necessarily-Serializable object that…

  • is created at runtime by the Spring IoC container (IoC stands for inversion of control)

  • has references to other objects or beans (“dependencies”) injected into it at runtime by the Spring IoC container

  • is otherwise controlled at runtime by the Spring IoC container.

Spring beans can be configured via XML files or by using Java annotations within Java classes. Using annotations is the modern approach, and this article will use annotations only; no XML code.

Note that a Spring bean is different from a JavaBean. JavaBeans are part of the core Java language, while Spring beans are not. (Specifically, a JavaBean is a Serializable class with a public no-argument constructor whose fields are all private and exposed through getter and setter methods. JavaBeans are also not managed by the Spring IoC container.)
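To make the contrast concrete, here is a minimal JavaBean (the class name EmployeeBean is my own, hypothetical example):

```java
import java.io.Serializable;

/* A JavaBean: Serializable, public no-argument constructor, private fields
   exposed through getter and setter methods. No Spring involved. */
public class EmployeeBean implements Serializable
{
    private String name;

    public EmployeeBean() { }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public static void main(String[] args)
    {
        EmployeeBean e = new EmployeeBean();
        e.setName("Ada");
        System.out.println(e.getName()); // prints Ada
    }
}
```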

Configuring Spring beans

In the above, we said that Spring beans are created at runtime by the Spring IoC container. But how does the Spring IoC container know what sort of beans to create? Well, you, the programmer, must write “configuration code” that specifies which Java classes should be instantiated as Spring beans at runtime.

There are two main ways to do this: either use the @Bean annotation or the @Component annotation. (You can also use an annotation derived from @Component).

First configuration method: @Bean-annotated methods that return instances

To specify that Cls should be instantiated as a bean at runtime, annotate a method that returns an instance of Cls with @Bean, and place that annotated method within a class that is itself annotated with @Configuration:

@Configuration
public class Config
{
     @Bean
     public Cls createABeanWrappingAClsInstance(Object args) { return new Cls(args); }
}

Second configuration method: use @Component and @ComponentScan

Another way to specify that a class Cls should be instantiated as a bean at runtime is to annotate Cls‘s class declaration (i.e. public class Cls) with @Component, while also annotating some other class, say Config, that is “at or above” the level of Cls in the directory hierarchy with @Configuration and @ComponentScan:

@Configuration
@ComponentScan
/* Config must be at or above Cls in the directory hierarchy. */
public class Config { }

@Component
public class Cls { ... }

(When you later create an ApplicationContext in the main() method of your application, you will have to pass Config.class to the AnnotationConfigApplicationContext constructor. But, if you are using Spring Boot, the class containing the main() method is already implicitly annotated with @Configuration and @ComponentScan via @SpringBootApplication, so you don’t need to do anything other than annotate Cls with @Component.)

Specifically, using @ComponentScan in this way specifies that, at runtime, if a class “at or below” the level of Config in the directory hierarchy is annotated by @Component or by an annotation whose parent annotation* is @Component, then that class will be used to construct a Spring bean. Notably, the annotations @Service, @Repository, and @Controller all have @Component as a parent annotation.

*One annotation is considered to be the “child annotation” of another annotation if it is meta-annotated by that other annotation.

Sidenote: annotation “inheritance” in Spring

As is noted in this Stack Overflow answer, Spring Framework’s AnnotationUtils class has a method that tests whether an annotation is equal to or is annotated with another annotation. I’m making an educated guess that Spring uses this sort of inheritance testing for annotations all over the place.

Differences between @Service, @Repository, and @Controller

@Service, @Repository, and @Controller are similar in that they are child annotations of @Component (i.e. they are all meta-annotated by @Component). What are some differences?

  • @Service indicates to the programmer that the class it annotates contains “business logic”. Other than that, it doesn’t enable any behind-the-scenes behavior. The Spring devs may change this some day.

  • @Repository “is a marker for any class that fulfils the role or stereotype of a repository (also known as Data Access Object or DAO). Among the uses of this marker is the automatic translation of exceptions [from implementation exceptions to a Spring exception]” (from here).

  • @Controller must annotate a class if we want to use annotations from Spring Web MVC of the form @<request type>Mapping. These annotations are used for setting up HTTP API endpoints.

Extra readings: @Component vs. @Service vs. @Repository vs. @Controller, @Component vs. @Bean.

Implementing dependency injection

We now know how to configure Spring beans, but don’t yet know anything about how to actually dependency-inject Spring beans into other Spring beans. We describe how to do so in this section.

Though, before we describe how to do so, there is a little more prerequisite knowledge we should cover.

More prerequisite knowledge

Convention: “bean definition”

For the rest of this document, the term bean definition will refer to a method annotated with @Bean that returns an object instance or a class annotated with @Component.

Bean scopes

Every bean definition has an associated “scope”.

The default (and most important) scope is singleton. If a bean is of singleton scope, all references to that bean access the same Java object. singleton scope is used to achieve dependency sharing, which, if you recall the above “Dependency injection” section, is one of the key advantages of using an IoC container.

The second most important scope is prototype. If a bean is of prototype scope, different references to that bean are references to different Java objects.

The four other scopes, request, session, application, and websocket, can only be used in a “web-aware application context,” and are less commonly used. Don’t worry about these ones.
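The behavioral difference between singleton and prototype scope can be sketched in plain Java with a toy container (purely illustrative; this is not how Spring is actually implemented):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

/* Toy container sketch: singleton beans are created once and cached;
   prototype beans are created fresh on every request. */
public class ScopeSketch
{
    private final Map<String, Object> singletonCache = new HashMap<>();
    private final Map<String, Supplier<?>> factories = new HashMap<>();
    private final Map<String, Boolean> singletonScoped = new HashMap<>();

    void register(String name, Supplier<?> factory, boolean singleton)
    {
        factories.put(name, factory);
        singletonScoped.put(name, singleton);
    }

    Object getBean(String name)
    {
        if (singletonScoped.get(name))
            return singletonCache.computeIfAbsent(name, n -> factories.get(n).get());
        return factories.get(name).get(); // prototype scope: a new object every time
    }

    public static void main(String[] args)
    {
        ScopeSketch ctx = new ScopeSketch();
        ctx.register("single", Object::new, true);
        ctx.register("proto", Object::new, false);
        System.out.println(ctx.getBean("single") == ctx.getBean("single")); // prints true
        System.out.println(ctx.getBean("proto") == ctx.getBean("proto"));   // prints false
    }
}
```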

Terminology: “plain old Java objects” (“POJOs”)

A “plain old Java class” is a class that does not depend on an application framework such as Spring. Basically, since most Spring features are handled with annotations, a plain old Java class is a class without any Spring annotations.

Unfortunately, people say “plain old Java object” instead of “plain old Java class”, so we speak of POJOs instead of POJCs.

POJOs are often used in Spring apps in combination with not-POJOs to represent “more concrete” objects (such as an Employee, etc.).

Extra reading: http://www.shaunabram.com/beans-vs-pojos/.

Implementing somewhat-manual dependency injection

Now, we are actually ready to use Spring Framework to implement the dependency injection design pattern.

Suppose we’ve configured a Spring bean named Cls1 that has a reference to a Spring bean Cls2:

@Component
public class Cls1
{
     private Cls2 cls2;
     public Cls2 getCls2() { return cls2; }
     public void setCls2(Cls2 cls2) { this.cls2 = cls2;}
}
​
@Component
public class Cls2 { ... }

We want to inject an instance of the Cls2 into our Cls1 bean at runtime. To do so, we need a reference to the Spring IoC container.

The interfaces BeanFactory and ApplicationContext both represent the IoC container. Since ApplicationContext extends BeanFactory, and therefore has more functionality, ApplicationContext should be used in most situations.

We perform dependency injection by using ApplicationContext as follows:

public class Application
{
     public static void main(String[] args)
     {
          /* <package> is the package inside which to look for @Configuration classes and in which to
          perform @ComponentScan. For example, <package> might be "com.perficient.techbootcamp". */
          ApplicationContext ctx = new AnnotationConfigApplicationContext("<package>");

          /* The below assumes that Cls1 and Cls2 have been configured as beans (recall, this is done
          by using @Component and @ComponentScan or by using @Bean). */
          Cls1 cls1 = ctx.getBean(Cls1.class);

          /* Perform dependency injection: inject an instance of Cls2 into the bean cls1. */
          Cls2 cls2 = new Cls2();
          cls1.setCls2(cls2);
     }
}

The above code is adapted from Marco Behler’s article.

Implementing dependency injection with @Autowired

In Spring Framework, one typically uses annotations that execute the effect of the above dependency injection behind the scenes. Specifically, one uses the @Autowired annotation. When @Autowired is present on a bean’s field, an instance of that field’s type will be injected into that field at runtime.

So, if we want to replicate the functionality of the above, we would write the following:

@Component
public class Cls2 { ... }
​
@Component
public class Cls1
{
     @Autowired
     private Cls2 cls2;

     public Cls2 getCls2() { return cls2; }
     // Notice, no setter necessary.
}
​
public class Application
{
     public static void main(String[] args)
     {
           ApplicationContext ctx = new AnnotationConfigApplicationContext("<package>");

           /* The below code has been commented out because it is unnecessary.
              The above @Autowired annotation tells Spring Framework to inject
              a reference to the Cls2 bean into the Cls1 bean at runtime. */

           // Cls1 cls1 = ctx.getBean(Cls1.class);
           // Cls2 cls2 = new Cls2();
           // cls1.setCls2(cls2);
     }
}

Field injection with @Autowired

You may wonder how it is possible to inject an instance of Cls2 into cls1 when Cls1 has no setCls2() method. After thinking about it for a second, you might suspect that injection is done by using Cls1‘s constructor. This is actually not the case. (In the above code, Cls1 doesn’t even have a with-args constructor!). When @Autowired annotates a bean’s field, then, at runtime, the IoC container uses Java reflection to modify the field, even if it’s private.

Placing @Autowired on a field thus constitutes field injection.

Using @Autowired on fields is bad practice

According to this article, using field injection is bad practice because it prevents you from marking fields as final. (You want to be able to mark fields as final when appropriate because doing so prevents you from getting into a circular dependency situation.)

More reasons why field injection is bad: https://dzone.com/articles/spring-di-patterns-the-good-the-bad-and-the-ugly.

Using @Autowired on constructors and setters

@Autowired can also be used on constructors or setters to inject a parameter into a constructor or setter at runtime.
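For example, constructor injection of the Cls2 bean from earlier might look like the following sketch (it assumes the @Component-annotated Cls2 from the previous sections and requires Spring on the classpath, so it is not standalone code):

```java
@Component
public class Cls1
{
     private final Cls2 cls2; // constructor injection lets us mark the field final

     @Autowired // on a constructor, the parameters are injected at bean-creation time
     public Cls1(Cls2 cls2) { this.cls2 = cls2; }

     public Cls2 getCls2() { return cls2; }
}
```

(As of Spring 4.3, @Autowired can even be omitted when a class declares only a single constructor.)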

The @Qualifier annotation

Because a bean could have an @Autowired field whose type is an interface, and because multiple classes may implement the same interface, it can be necessary to specify which implementation of the interface is meant to be dependency-injected. This is done with the @Qualifier annotation, as follows:

public interface Intf { ... }

@Qualifier("impl1")
@Component
public class Impl1 implements Intf { ... }

@Qualifier("impl2")
@Component
public class Impl2 implements Intf { ... }

public class Cls
{
     @Autowired
     @Qualifier("impl1")
     private Intf intf1; // at runtime, intf1 will be set to an Impl1 instance

     @Autowired
     @Qualifier("impl2")
     private Intf intf2; // at runtime, intf2 will be set to an Impl2 instance
}

Here are the specifics of how field names are matched to bean names:

  • Define the qualifier-name of a bean definition or field to be: (1) the argument of the @Qualifier annotation attached to said bean definition or field, if the bean definition or field is indeed annotated with @Qualifier, and (2) the name of the class associated with the bean definition, if the bean definition or field is not annotated with @Qualifier.

  • When no @Qualifier annotation is present on a field, the bean whose case-agnostic qualifier name is equal to the case-agnostic name of the field is dependency-injected into the field. (“Case-agnostic” means “ignoring case”.)

End

This concludes my introduction to Spring Framework for Java. I hope you’ve gained a sense as to how Spring Framework allows you to implement dependency injection!

An abstract take on the dependency injection pattern https://blogs.perficient.com/2021/09/22/an-abstract-take-on-the-dependency-injection-pattern/ https://blogs.perficient.com/2021/09/22/an-abstract-take-on-the-dependency-injection-pattern/#respond Wed, 22 Sep 2021 19:07:36 +0000 https://blogs.perficient.com/?p=297561

This article will take a relatively abstract look at the design pattern called dependency injection (or inversion of control). I feel that most articles about dependency injection get too bogged down in the particulars of whatever example is being used to demonstrate the structure. In this article, we’ll present pure abstraction.

Well, maybe not pure abstraction: we do have to pick a particular programming language, after all! We will use Java in this article. If you don’t know Java, don’t worry too much. We’ll stick to “basic” Java, nothing esoteric.

A typical dependency situation

Consider the following dependency situation, in which a class Cls depends both upon an interface Intf and on an implementation Impl of that interface.

public interface Intf { void helperMethod(Object args); }
​
public class Impl implements Intf
{
    @Override
    public void helperMethod(Object args) { /* implementation */ }
}
​
public class Cls
{
    public void method(Object args)
    {
        Intf intf = new Impl();
        intf.helperMethod(args);
    }
}

Cls depends on Impl because it requires knowledge of the Impl type in order to execute new Impl(). To restate, our current dependency situation is:

Cls –creates–> Impl

Cls –has–> Intf

Impl –is–> Intf

We want to be in a dependency situation in which Cls depends only on Intf and not on Impl. I.e., we want to decouple the implementation of Cls from any particular implementation of Intf. As described in some Microsoft documentation, this decoupling is desired because:

  1. It allows us to change which implementation of Intf is used by Cls without modifying code* in the body of Cls.

    • Having an easy way to swap one implementation out for another makes it easy to swap in a mocked implementation, which lends itself to test-driven development.

  2. It removes the need for manual configuration of Impl‘s dependencies.

* If you’re using Spring Framework for Java, then you change the injected implementation by changing an annotation within the body of Cls. I don’t count this as changing “actual code” in the body of Cls! Admittedly, it would be better if changing the interface implementation didn’t touch anything inside Cls, so that the configuration code (the code specifying which interface is to be injected) is completely separate from the implementation code. C# .NET’s way of doing dependency injection is better about sticking to this rule.

A better dependency situation

To improve our dependency situation, we will pass the responsibility of creating Impl to some Container class that manages Cls. When commanded to do so, Container will “inject” a newly created Impl instance into Cls. One way to perform this “injection”, constructor injection, is to pass an Impl instance to a constructor of Cls that accepts an Intf. (Of course, Cls must have a public with-arguments constructor for constructor injection to be possible). There are other forms of dependency injection, such as setter injection and field injection. (In Java, field injection is predicated on abusing reflection techniques to modify private fields).

This setup is called dependency injection. Implementing dependency injection places us in the following much improved dependency situation:

Cls –has–> Intf

Container –has–> Cls

Container –has–> Intf

Container –creates–> Impl

Impl –is–> Intf

Now, Cls depends only on Intf and not on Impl, as desired.

In addition to solving the decoupling problem, dependency injection comes with the further benefit of allowing a single interface implementation to be “injected” into different classes that depend on the same interface type, when doing so makes sense. In other words, our new dependency situation also allows for the sharing of one Intf instance between class instances.

Summary

To summarize, here are the two main benefits of dependency injection:

  1. Dependency injection decouples the implementations of classes from the implementations of those classes’ dependencies. Decoupling of interfaces from implementations is desirable because…

    1.1. It allows us to change which implementation of an interface is used by a dependent class without modifying code in the body of the class.

    • Having an easy way to swap one implementation out for another makes it easy to swap in a mocked implementation, which lends itself to test-driven development.

    1.2. It removes the need for manual configuration of an injected implementation’s dependencies.

  2. Dependency injection allows us to share a single interface implementation instance between multiple classes that depend on said interface, when applicable.

Inversion of control

Since, in dependency injection, Container, rather than Cls, controls which of Cls‘s dependencies are injected and when they are injected, control has in some sense been inverted. Dependency injection is thus an example of inversion of control. For this reason, a class such as Container is often referred to as the inversion of control container, or IoC container.

Note that while dependency injection is an example of inversion of control, not all inversion of control is dependency injection. This article by Martin Fowler details other examples of inversion of control.

Code

Here’s code that implements the dependency injection pattern. (The following is basically an abstract extrapolation of code given in Martin Fowler’s article on dependency injection).

public interface Intf { void helperMethod(); }

public class Impl implements Intf
{
    private Object args;
    public Impl(Object args) { this.args = args; }

    @Override
    public void helperMethod() { /* implementation that uses this.args */ }
}

public class Cls
{
    private Intf intf;
    public Cls(Intf intf) { this.intf = intf; }

    public void method() { intf.helperMethod(); }
}
​
public class Container { /* implementation will be kind of complicated */ }

public class Main
{
    /* A config file typically performs the task of this method. */
    private static Container configureContainer()
    {
       Object args = ... // get the arguments that should be passed to the Impl constructor
       Container cntr = new Container();

       /* The below line tells cntr all the information it needs to execute the statement
       "Intf intf = new Impl(args)". */
       cntr.registerComponentImplementation(Intf.class, Impl.class, args);

       /* This next line tells cntr all the information it needs to execute the statement
       "Cls cls = new Cls(intf)". */
       cntr.registerComponentImplementation(Cls.class);
       return cntr;
    }

    public static void main(String[] _args)
    {
       /* This is how we call cls.method(). */
       Container cntr = configureContainer();
       Cls cls = (Cls) cntr.getComponentInstance(Cls.class);
       cls.method(); // This executes the same task as "cls.method(args)" did in the original situation.
    }
}

Extra: did we give Container unnecessary information?

One particular detail about the lines involving cntr.registerComponentImplementation() wasn’t immediately clear to me, and might be confusing to you, too. My question was: is it necessary to pass Intf.class to the first call of registerComponentImplementation()? It seems that there should exist an implementation of Container such that, if we execute the following, cntr does what we would expect behind the scenes.

cntr.registerComponentImplementation(Impl.class, args);
cntr.registerComponentImplementation(Cls.class);

That is, it seems that cntr would have enough information to do new Cls(new Impl(args)). This is because cntr has a Cls, and Cls knows that Impl is an Intf. And, for the sake of argument, even if we assume that cntr somehow didn’t know about this through Cls, the JVM itself knows that Impl is an Intf– after all, executing new Cls(new Impl(args)) doesn’t require that we type cast new Impl(args) to Intf!

Answer: upon investigation, I found that it is just convention to pass more information than is strictly necessary in certain dependency injection frameworks.

Mocking in test-driven development (TDD) with Java’s EasyMock https://blogs.perficient.com/2021/09/22/mocking-in-test-driven-development-tdd-with-javas-easymock/ https://blogs.perficient.com/2021/09/22/mocking-in-test-driven-development-tdd-with-javas-easymock/#respond Wed, 22 Sep 2021 17:24:21 +0000 https://blogs.perficient.com/?p=297552

In this article, we’ll explore the test-driven development practice of mocking.

Consider a class Cls with a method method() that relies upon a method helperMethod(), where helperMethod() queries some external resource, and suppose that our goal is to test whether method() works as intended.

public class Cls
{
    private Object helperMethod(Object args)
    {
        // Use some external resource to obtain "result".
        return result;
    }
    
    public Object method(Object args) 
    { 
               // Calculate "result" by using helperMethod(args) somehow.
               return result; 
    }
}

Since method() calls helperMethod(), a method that relies on an unpredictable external resource, we will need to imitate, or mock, helperMethod() in order to achieve our goal. Instead of actually calling helperMethod() within method(), we will make an educated guess as to what helperMethod()‘s output should be for various inputs.

To prepare for imitating helperMethod() in this way, we will replace the call to helperMethod() with a call to an interface method.

public interface HelperI { Object helperMethod(Object args); }

public class Cls
{
    private HelperI helperI;
    public void setHelperI(HelperI helperI) { this.helperI = helperI; }
    
    public Object method(Object args)
    {
        // Calculate "result" by using helperI.helperMethod(args) somehow.
        return result;
    }
}

Specifically, the above code replaces the call to helperMethod() with a call to helperI.helperMethod().

Now, we can use a library such as EasyMock to provide a good “best guess” implementation of the interface HelperI and, most importantly, its method helperMethod().

Here’s an implementation that uses EasyMock and JUnit to do exactly this.

/* We omit the necessary "static import" statements for the EasyMock and JUnit libraries to reduce clutter. */

public class Tester
{
    private HelperI helperI;
    private Cls cls;
    @Before
    /* @Before is a JUnit annotation, not an EasyMock annotation. Any method tagged with @Before is executed 
    before each test case. */
    public void setUp() throws Exception
    {
        /* For an interface intf, createNiceMock(intf) returns an instance of 
        a class that implements intf, where all abstract methods of intf are 
        implemented by using default values for return values.
        Note, createNiceMock() does come from EasyMock. */
        helperI = createNiceMock(HelperI.class);
        cls = new Cls();
        cls.setHelperI(helperI);
    }

    @Test
    /* @Test is a JUnit annotation that indicates the method to which it is attached is to be executed as a 
    test case. */
    public void testMethod() // Recall, our goal is to test whether method() works.
    {
        /* Now, "block out" helperMethod(). The expect() call below specifies that, 
        for i in {1, ..., n}, the ith time helperI.helperMethod() is called, it should 
        have recieved the input args[i] (in this example, the input will be coming from method(),
        since method() calls helperI.helperMethod()), and that it will return returns[i]. */

        int n = ... // some positive integer
        Object[] args = ... // A length n array of inputs. We will use EasyMock to 
                           // ensure that helperI.helperMethod() receives args[i] 
                          // from method() in the ith JUnit test.
            
        Object[] helperReturns = ... // A length n array of outputs. We will use EasyMock to impose that
                                    // helperI.helperMethod() should return helperReturns[i] upon receiving args[i] as input.
            
        Object[] expectedMethodReturns = ... // A length n array of outputs. We hope that method() will 
                                            // return expectedMethodReturns[i] in the ith iteration.
            
        for (int i = 0; i < n; i++)
            expect(helperI.helperMethod(args[i])).andReturn(helperReturns[i]);
        
        /* Apply the mocking that was just specified above to the helperI interface. */
        replay(helperI);
        cls.setHelperI(helperI);

        /* The mocking is all set up now, so we can now test if method() works. */
    
        /* Do the n tests that were set up by the expect() calls above. 
        The ith iteration of the for loop executes the ith test. 
        In the ith test, the input passed to helperI.helperMethod() from method()
        should be args[i]. When the input to helperI.helperMethod() is indeed 
        args[i], helperI.helperMethod() will return helperReturns[i]. */
        
        for (int i = 0; i < n; i++)
        {
            Object methodArgs = ... // methodArgs should be such that, in the ith iteration of this loop, calling
                                   // cls.method(methodArgs) results in calling helperI.helperMethod(args[i]) within cls.method()
            assertEquals(expectedMethodReturns[i], cls.method(methodArgs)); // assertEquals() is a JUnit method
        }
    }
}
Source

This tutorial was used as a source for this article.

Perficient Colleagues Share SCRUM DAY MN Insights https://blogs.perficient.com/2019/11/05/perficient-colleagues-share-scrum-day-mn-insights/ https://blogs.perficient.com/2019/11/05/perficient-colleagues-share-scrum-day-mn-insights/#respond Tue, 05 Nov 2019 19:03:20 +0000 https://blogs.perficient.com/?p=246521

SCRUM DAY MN is a leading not-for-profit event in the Midwest where local Scrum practitioners gather to share ideas and lessons learned, and to inspire others who might be at the beginning of their Scrum journey. The conference offered multiple hour-long workshops covering a variety of content, all with one thing in common: they were hands-on.

Throughout the day participants went from prioritizing an imaginary ‘Alice in Wonderland’ backlog to discovering their own strengths and weaknesses, and how that can impede or improve team collaboration.


Keynote Speaker Review by Chloe Naumowicz, Associate Business Consultant

The keynote speaker, Tripp Crosby, comedian, content creator, and author of “Stuff You Should Know About Stuff” spoke on radical and incremental creativity. He emphasized the importance of allowing yourself the space to discover. Crosby shared that taking a pause to reflect allowed him to find new ways to live every day to the fullest, and have more, better ideas.

“Discussing Agile with Leadership” Review by Abigail Speller, Associate Business Consultant

Agile coach, Angela Johnson, discussed different ways to help leaders adopt Scrum or Agile without ‘saying the “S” word’. She emphasized reframing the way we discuss Scrum and Agile to talk about the impact, rather than the framework itself.

Another suggestion was staying away from jargon that people aren’t familiar with, and avoiding adjectives (such as ‘pure’, ‘bad’, etc.)

Lastly, to successfully drive Scrum adoption, the speaker advised us to stop judging, and instead start helping leadership in the adoption of Scrum or Agile. Saying things like ‘That’s not Scrum’, ‘Scrum says….’ or ‘You’re doing it wrong’ doesn’t provide a path forward, but rather puts people on the defensive. Instead, we should be asking clarifying questions, and providing guidance and advice.

“How to Engage Teams in Sprint Retrospectives” Review by Tea Dejanovic, Senior Business Consultant

More often than not, the question “What went well?” during the Sprint Retrospective is answered with crickets chirping. The session aimed to help participants convert crickets into conversation, and inspire teams to reflect in a fun and engaging manner. Before providing an overview of different retrospective methods, the presenter discussed some helpful do’s and don’ts.

DON’T:

  • Try to both facilitate and participate – do one or the other
  • Play the blame game – instead, focus on how processes can be improved to serve the team better

DO:

  • Acknowledge things that are out of the team’s control that might have affected the sprint performance
  • Create action items, and revisit them at the start of the next retrospective

The sprint retrospective technique that stuck out the most to me was called ‘Three Little Pigs’. Upon the start of the meeting, create three columns as follows and ask the team members to fill them in:

  1. House of Straw: These are the items that could topple over at any minute, and are currently held together by nothing more than sheer luck
  2. House of Sticks: These are processes that work pretty well, but there is room for improvement
  3. House of Bricks: These are items that our team does excellently and that invoke a sense of pride and accomplishment

Lastly, the most important takeaway regarding Sprint Retrospectives was to regularly set aside time to hold them. They are a crucial mechanism that allows the team to evolve and improve throughout the life of the project, and provide a chance for feedback in an open, honest, yet constructive atmosphere.

“Help Wanted: Seeking the Courageous” Review by Chloe Naumowicz, Associate Business Consultant

Of the 5 values of Scrum, courage is the base that makes the foundation strong. Speaker Kim Hauf asked the participants to think about what courage meant to them. Some suggestions were:

  • Vulnerability
  • Patience
  • Trust
  • Transparency

She shared examples of times when she needed to be courageous to enable Scrum and Agile in her company. Ultimately, the risks she took made her a better leader to her team, and earned her recognition in the eyes of company leadership. Risk, together with action, yields courage.

When you’re able to take the bold and honest route, it will trickle down and grow with others.

“I Know My Top 5 Strengths, Now What?” Review by Melissa Kelly, Associate Business Consultant

Teri Bylander-Pinke was highly engaging and offered various group exercises that allowed each of us to share our top strengths and learn those of others. During one of the exercises, we reflected on the following:

  • You get the best of me when…
  • You get the worst of me when…
  • You can count on me to…
  • This is what I need from you…

We discussed the answers with others in a speed-dating fashion. This exercise showed me how important it is to understand your teammates: their strengths, what motivates them, and what hinders them from excelling in their roles. The exercise is based on Gallup’s StrengthsFinder Assessment, which in the long run increases the productivity and efficiency of teams by maximizing each individual’s strengths. As a result, employees become more engaged with their peers, their tasks, and their projects.

“Skills to Become a Better Leader” Review by Saba Aslam, Associate Technical Consultant

Harvey Robbins’ presentation used basic principles of psychology to discuss different behavioral traits among team members. He outlined the four distinct personality types most commonly found in a workplace:

  • Analytical
  • Driver
  • Amiable
  • Expressive

Ultimately, a smart leader understands different personalities of their team members and leverages them to increase the productivity of the team as a whole.

]]>
https://blogs.perficient.com/2019/11/05/perficient-colleagues-share-scrum-day-mn-insights/feed/ 0 246521
Going the Extra Mile to Support The Sandwich Project https://blogs.perficient.com/2019/10/01/going-the-extra-mile-to-support-the-sandwich-project/ https://blogs.perficient.com/2019/10/01/going-the-extra-mile-to-support-the-sandwich-project/#respond Tue, 01 Oct 2019 21:05:15 +0000 https://blogs.perficient.com/?p=245179

Perficient’s Minneapolis colleagues are all about supporting the local community from large to small organizations and projects. Their latest Giving Back event featured The Sandwich Project, whose ultimate goal is to feed the homeless within the Minneapolis area not only around the holiday season but every day. Currently, they feed about 4,500 people per week with sandwiches that are donated by the local community.

“It’s so easy to take a decent meal for granted and we know kids learn better, play harder, and grow healthier with a full belly. If we can help parents provide that to their kids, it’s a privilege to help make that happen. And it was a great way to spend a little time getting to know our colleagues better, doing good at the same time!” Megan Jensen, Senior Digital Marketing Strategist

The Perficient team worked with four local grocery stores to gather donations to create 500 sandwiches, which included 50 loaves of bread, 32 packages of deli meat, and 500 slices of cheese. The Giving Back team of 11 colleagues from multiple business units, including some from our latest acquisition in Fargo, Sundog Interactive, was able to assemble all 500 sandwiches in an hour and fifteen minutes. The cross-business-unit collaboration and teamwork that made this giving back event a success show how our colleagues pull together to support the local community.

One colleague to highlight is Alex Schill, who really went the “EXTRA MILE” by biking from downtown Minneapolis to our local Perficient office to donate his time and energy in assembling sandwiches for the local community. The outstanding generosity from our Minneapolis colleagues is truly above and beyond.

“The sandwich project was a great time! It was nice connecting with my Perficient colleagues while doing good for the community by helping those who need it most!” Alex Schill, Financial Analyst

 

]]>
https://blogs.perficient.com/2019/10/01/going-the-extra-mile-to-support-the-sandwich-project/feed/ 0 245179
Perficient Team’s Office Olympics Foster Collaboration and Creativity https://blogs.perficient.com/2019/03/27/office-olympics-foster-collaboration-creativity-of-minneapolis-team/ https://blogs.perficient.com/2019/03/27/office-olympics-foster-collaboration-creativity-of-minneapolis-team/#respond Wed, 27 Mar 2019 15:43:15 +0000 https://blogs.perficient.com/?p=238000

Our Minneapolis team recreated their own Office Olympics competition, putting a fun twist on team-building activities. A series of events was planned a few weeks in advance, with a high dose of creativity and the office supplies at hand. The events put a variety of skills to the test, ranging from rubber band archery and chair races to a speed finger-skating competition.

The final event consisted of an online typing test projected on the conference room screens. Two colleagues from opposite teams attempted to type as many correctly spelled words as possible in one minute, while their teammates cheered them on.

“These kinds of events provide an opportunity to spend time with colleagues that I normally do not work with. Having fun together is the best way I know of to build a community,” said Jennifer Edwards, project manager.

Cultivating Collaboration

The emphasis of the activities was not competition, but rather collaboration between different teams, which was ultimately the key to success for the winning Team Red. The uncommon nature of some of the tasks compelled our colleagues to resort to creative solutions to gain a competitive advantage.

“One aspect of Office Olympics that I really enjoyed was our teamwork. It was reflective of how good teams work – one person is not good at everything, but together each of us brought something that made our team successful,” says Sohail Faisal, lead technical consultant.

Engaging in Creative Dialogue

Perhaps the biggest benefit of the Office Olympics is the opportunity to build trust and better understanding between colleagues who do not work closely on a daily basis. Engaging in a creative dialogue while seeking a solution to the task at hand offers insight into different communication styles and uncovers leadership abilities.

“As a recent new hire, it was a great way to get a glimpse at the Perficient culture, meet new people and make connections outside of my team,” says Chloe Naumowicz, associate business consultant.

Organized team building activities have been proven to boost office morale and provide winners with bragging rights for months to come!

To learn more about what it’s like to work at Perficient, continue reading our Life at Perficient blog series.

]]>
https://blogs.perficient.com/2019/03/27/office-olympics-foster-collaboration-creativity-of-minneapolis-team/feed/ 0 238000
A Perficient Community of Practice Fosters Learning and Collaboration https://blogs.perficient.com/2019/03/18/a-perficient-minneapolis-community-of-practice-fosters-learning-and-collaboration/ https://blogs.perficient.com/2019/03/18/a-perficient-minneapolis-community-of-practice-fosters-learning-and-collaboration/#respond Mon, 18 Mar 2019 16:17:43 +0000 https://blogs.perficient.com/?p=237117

Last year our Minneapolis office pioneered a community of practice called Define and Drive. The group is tailored to business consultant and project manager colleagues, with the aim of sharing knowledge on market-relevant skills and fostering a sense of community.

The founders of the group, Karla Kraft and Jennifer Edwards, came to the idea from two different perspectives. As a director and a career counselor, Karla was eager to learn how she could better support her counselees with project-related issues. Jennifer Edwards, on the other hand, thought it would be valuable to hear other colleagues share their experiences with local clients, as well as present on a work-related topic they are passionate about.

“Openly expressing ideas in a group of mixed levels of experience provides a channel for all to learn from each other’s insights, skills, and perspectives,” describes Jennifer Edwards, project manager.

Meetings revolve around a short presentation, followed by a structured conversation based on speaker-provided guiding questions. For example, past topics have included “Remote Team Building”, “Status Reports Best Practices”, “Scrum at Scale” and many more. Topics are chosen from a prioritized backlog of ideas, which is populated by the participants and revisited at the end of each meeting.

“The community has helped me refresh upon the fundamentals of my job – projects get so busy and there is not a whole lot of time to reflect upon and improve the process,” describes Emily Thrash, a senior project manager.


In addition, the group also provides a line of sight for junior colleagues in terms of career development.

“The group enhances the Career Counseling program, as it provides an opportunity for omnidirectional mentoring, where colleagues can gain visibility into what it is like to be in that next-level role,” says Karla Kraft, director in the Minneapolis office.

Lastly, an overarching goal of Define and Drive is to ensure consistency across the Minneapolis practice in terms of client delivery.

“I feel like I am set for success, and better prepared to respond to situations I might not have encountered before,” describes Tea Dejanovic, a senior business consultant.


To continue reading about our Minneapolis office and their workplace culture, click here.

Please also check out our featured article on the Minneapolis office’s Support of Local Tech Community.

]]>
https://blogs.perficient.com/2019/03/18/a-perficient-minneapolis-community-of-practice-fosters-learning-and-collaboration/feed/ 0 237117