Exciting news! Sitecore kept its original promise and released new ltsc2022 container images for all topologies of both the 10.3 and 10.2 versions of the platform.
The biggest benefits of the new images are the reduced size – almost 50% smaller than ltsc2019 – and support for running process isolation on Windows 11.
Check it yourself:
So, what does that mean for developers and DevOps?
First and foremost, running Sitecore 10.3 on Windows Server 2022 is now officially supported. You may want to consider upgrading your existing solutions to benefit from the Server 2022 runtime.
Developers working on Windows 11 also got the long-awaited support: containers built from the new images can run in process isolation mode without a hypervisor, which brings cluster performance to nearly bare-metal levels.
I decided to give it a try and test whether it works and how well. I recently purchased a new Microsoft Surface Pro 8 laptop that came with Windows 11 pre-installed and was therefore of little use for my professional purposes, so it seemed like excellent test equipment.
After the initial preparation and installing all the prerequisites, I was ready to go. For the codebase, I decided to go with the popular Sitecore Containers Template for JSS Next.js apps and the Sitecore 10.3 XM1 topology, as the most proven and well-preconfigured starter kit.
Since I initialized my codebase with the -Topology XM1 parameter, all the required container configuration is located under the /MyProject/run/sitecore-xm1 folder. We are looking for the .env file, which stores all the necessary parameters.
The main change to make here is setting these two environment variables to benefit from the ltsc2022 images:
SITECORE_VERSION=10.3-ltsc2022
EXTERNAL_IMAGE_TAG_SUFFIX=ltsc2022
The other important change in the .env file is setting ISOLATION=process. Also, please note that TRAEFIK_ISOLATION=hyperv stays unchanged due to a lack of ltsc2022 support for Traefik, so sadly you still need Hyper-V installed on the machine. The difference is that it now serves only Traefik; the rest of the Sitecore containers run in process mode.
I also made a few optional improvements, upgrading important modules to their recent versions:
MANAGEMENT_SERVICES_IMAGE=scr.sitecore.com/sxp/modules/sitecore-management-services-xm1-assets:5.1.25-1809
HEADLESS_SERVICES_IMAGE=scr.sitecore.com/sxp/modules/sitecore-headless-services-xm1-assets:21.0.583-1809
I also changed Node.js to the recent LTS version:
NODEJS_VERSION=18.14.1
The sitecore-docker-tools-assets image did not get any changes since Sitecore 10.2, so I left it untouched. The rest of the .env file was correctly generated for me by the Init.ps1 script.
Now run .\up.ps1 in a PowerShell terminal with administrator rights and wait until it downloads and builds the images.
[Screenshot: Sitecore 10.3 built on the ltsc2022 images, running on Windows 11 in process isolation mode]
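If you want to confirm that the Sitecore roles really run without the hypervisor, you can query the isolation mode of the running containers (a quick sketch; the actual container names depend on your compose project):
# Show the isolation mode of every running container
docker ps --format "{{.Names}}" | ForEach-Object { "$($_): " + (docker inspect -f "{{.HostConfig.Isolation}}" $_) }
Everything except Traefik should report process, while the Traefik container reports hyperv.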
I tested all of the important features of the platform, including the Experience Editor, and it all works – and, what is especially important, works impressively fast in process isolation mode. So I ended up with a nice, powerful laptop suitable for modern Sitecore headless development.
Enjoy faster development!
Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. Docker containers are lightweight compared to virtual machines. Docker's keywords are develop, ship, and run anywhere. The whole idea of Docker is for developers to easily develop applications and ship them into containers which can then be deployed anywhere.
Docker images are read-only templates used to build containers. These images are created from a file called a Dockerfile. Within the Dockerfile, you define all the dependencies and packages that are needed by your application.
Every time you run a Docker image, it runs as a Docker container. Therefore, a Docker container is the run time instance of a Docker image.
Docker's registry, known as DockerHub, is used to store Docker images. Images can be pushed to or pulled from a Docker repository, and DockerHub allows you to have public or private repositories.
Docker Swarm is a technique to create and maintain a cluster of Docker engines. A cluster consists of multiple Docker engines connected to each other, forming a network. This network of Docker engines is called a Docker swarm.
Docker compose is used to run multiple containers at once with a single command, which is docker-compose up.
Docker has a simple Dockerfile format that it uses to specify the "layers" of an image. So let's go ahead and create a Dockerfile in our Spring Boot project:
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Let’s now build the docker image by typing the following command –
$ docker build -t spring-boot-demo .
That’s it. You can now see the list of all the docker images on your system using the following command –
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED
spring-boot-demo latest 30ad8958ac67 22 hours ago
This should display your newly built image.
Once you have a Docker image, you can run it with a command like
$ docker run -p 5000:8080 spring-boot-demo
In this command, we have mapped container port 8080 to host port 5000. Once the application has started, you should be able to access it at http://localhost:5000.
You can also push the image to DockerHub. For that, we need to tag our local image for the registry using the command
$ docker tag image username/repository:tag
For example, if our DockerHub username is 'dockeruser', here is how we can tag the local image of our Spring Boot application –
$ docker tag spring-boot-demo dockeruser/myapplication
Finally, use the docker push command to push the tagged image to docker hub like so –
$ docker push dockeruser/myapplication
After we publish the image to docker hub, anyone can pull this image and run it in their environment.
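For example, someone else could pull and run the published image with something like the following (a sketch using the example names above):
$ docker pull dockeruser/myapplication
$ docker run -p 5000:8080 dockeruser/myapplication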
I finally got a chance to attend Microsoft Build this year! Unfortunately, it was at the cost of a worldwide pandemic so I guess I shouldn’t be that excited about it. Typically, Build is held in-person with 15,000+ of your closest developer friends – it’s Microsoft’s big yearly showcase of all the innovation they’ve been doing for software development, and given Microsoft’s renewed focus on developers, lots of good things tend to come from Build.
This year, lots of very cool things were shown off at Microsoft Build – I mainly want to focus on the 3 that I think are the coolest.
This was talked about quite a bit on day 1 – if you’ve ever used Netlify to deploy a static site built with Gatsby or Next or the like, it’s basically Azure’s version of it. You’re on Azure infrastructure with Azure resiliency and Azure support.
Quick rundown:
At the moment this is still in public preview, but I have my blog built with Gatsby and in a GitHub repo so I thought I'd give it a spin. Azure asks you where you want to put your app (resource group, location, etc.) and connects your GitHub credentials to select the repo that has your code, and THAT'S IT. Azure performs the magic to create the resource in Azure, the build/deployment configuration, the GitHub Action to do the deployment on commit, and a randomly generated URL reminiscent of GitHub-suggested project names. Within about 5 minutes, I had the site up on Azure Static Web Apps.
ASWA is not just for purely static sites either – it can use Azure Functions to provide APIs, which are exposed under the /api route. Easy API endpoints! There are lots of other little things, like authentication, that the service provides, which you can find in the documentation.
Windows has a package manager! Affectionately nicknamed "WinGet", Windows Package Manager is something that power users have been clamoring for for a while. While there were 3rd-party alternatives like Chocolatey or Scoop, Microsoft has taken this on and brought a command-line package manager to the masses. You can effectively script the installation of software to set up a new machine exactly how you like it! It's got a ways to go and it's only in preview at the moment, but I've got high hopes for what WinGet is going to be able to bring to us command-line junkies.
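For example, a new-machine setup script might look something like this (a rough sketch – the package IDs below are illustrative and can be confirmed with winget search):
winget install --id Git.Git
winget install --id Microsoft.VisualStudioCode
winget install --id Microsoft.PowerToys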
WSL is the acronym for the Windows Subsystem for Linux and v2 of this very very cool technology is coming in the Windows 10 2004 update later this month. The WSL functionality has been in Windows for quite a while now and with v1, you were able to fire up a Linux distro from the Windows Store (younger me: wait, those words aren’t supposed to go together) and have a Linux environment running in Windows without virtualization. WSL1 depended on translating Linux kernel calls to Windows system calls and did the corresponding Windows-y thing. Cool tech and worked in a pinch, but due to the translation, disk i/o and other essential system functions could be really slow or not work at all.
With WSL2, Microsoft rethought how it would work and came out the other side with something as non-legacy-Microsoft as you could get. Microsoft added a full Linux kernel(!) into Windows and uses that as the base for all WSL activities. What this means is you have a full-blown Windows system and a full-blown Linux kernel running side-by-side. Just sit and take that in for a sec. I'll wait.
Admittedly, I've been using WSL2 for a bit now while on the Windows Insiders build of Windows 10, and I've already thrown as many development workloads as I could at it; it has handled them amazingly. That Gatsby blog I talked about earlier? Built entirely in Linux in WSL2 and VSCode (the VSCode integration is bonkers).
Turns out that 2020 is the year of the Linux desktop – just not how we all had expected.
This sort of kind of counts since Windows Terminal has been in preview for quite a while now and I've been using it since the early 0.x versions. At Build, they announced that Windows Terminal is officially out of preview and has released as 1.0. If you're not using Windows Terminal, go download it from the Windows Store right now. It will make you love your command line again. As an ex-ConEmu/Cmder/Hyper user, just the speed alone of Windows Terminal is worth every penny (which is no pennies, since it's free and open source). Additionally, there are tons of customizations you can do with it through the settings file. If you'd like someone to just do it for you, there are a number of pre-built themes and color schemes on TerminalSplash as well. It's a perfect companion to WSL2!
Were you able to attend Microsoft Build this year too? We would love to hear all the insights you got from the virtual conference – there were so many sessions and tracks that this was just a glimpse of all that was going on!
Nowadays, many companies are migrating their data to cloud storage solutions rather than keeping it on physical servers. Utilizing the cloud has become an increasingly popular solution over the past few years. The advantages of using cloud storage over physical storage include cost-effectiveness, always-on availability, increased security, increased mobility, and more.
One popular cloud solution is called AWS (Amazon Web Services), which is provided by Amazon. AWS offers multiple cloud solutions for varying needs of businesses. The cloud storage solution, S3, “provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.”
Many organizations use AWS to connect their existing information systems to AWS S3 for storing data, archiving data, or even further integrating with other information systems (Ex. ERP Data -> AWS S3 -> OneStream).
Windows PowerShell is a Windows command-line shell that uses a proprietary scripting language. PowerShell is useful for a variety of tasks including object manipulation, which we will explore further.
Importing AWS Tools for PowerShell
# Check the current script execution policy
Get-ExecutionPolicy
# Allow the module's scripts to run by relaxing the execution policy
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
# Load the AWS Tools for PowerShell module
Import-Module AWSPowerShell
Connecting to AWS S3 using PowerShell
# Store a named credential profile (replace the access key and secret key with your own)
Set-AWSCredential -AccessKey AKIA0123456787EXAMPLE -SecretKey wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY -StoreAs MyNewProfile
# Use the stored profile for the rest of the session
Set-AWSCredential -ProfileName MyNewProfile
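Before going further, a quick sanity check is to list the buckets the new profile can access (assuming the profile above was stored successfully):
# If the credentials are valid, this returns the S3 buckets in the account
Get-S3Bucket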
Downloading and Renaming Files from AWS S3 using PowerShell
$bucket = 'exampleBucket'
# Get every object under the given key prefix
$objects = Get-S3Object -BucketName $bucket -KeyPrefix 'Folder1/Subfolder1/'
$localPath = 'C:\Users\omar.abuzaher\Desktop\Blog'
foreach ($object in $objects) {
    # Strip the 19-character prefix 'Folder1/Subfolder1/' to get the bare file name
    $fileName = $object.Key.Substring(19)
    foreach ($file in $fileName) {
        # Prefix the local copy with 'NEW' and build the full destination path
        $localFileName = ('NEW' + $file)
        $localFilePath = Join-Path $localPath $localFileName
        # Download the object from S3 to the renamed local file
        Copy-S3Object -BucketName $bucket -Key $object.Key -LocalFile $localFilePath
    }
}
“Operating system” (or OS) is a phrase that everyone in IT knows. The first thing that clicks in most of our minds when we think of an OS is Windows, but many OSes available on the market give you a similar experience to Windows. Most developers, coders, and system administrators use hypervisors or VirtualBox because of their ability to run multiple OSes inside a single host OS. The issue with these solutions is that they are bulky, consume a lot of disk space, and use a heavy amount of resources to run. This prompts the question of whether there's an alternative – does something similar exist in a more lightweight form?
Docker is just like an OS platform – without a whole OS. Instead, Docker uses OS-level virtualization to ship software in packages called containers. Containers are isolated from each other, have their own libraries, packages, and software, and communicate over defined channels.
There are many tutorials available online that will give you basic to advanced knowledge of Docker and how it works on Linux. This article, however, will give you an understanding of Docker on Windows and how effective it is for industry-level server use.
Containers are packaged units of software with their own OS-level dependencies. Because they house their own libraries, containers are flexible, standalone, lightweight, and secure, and they include everything needed to run your apps.
Docker on Windows can be downloaded from the Docker website and is very easy to install compared to Linux because the setup is not command-line based. After installation, Docker runs only Linux-mode virtualization by default, using its own MobyLinuxVM. It also comes with a default configuration for that VM, which you can change: by going to Docker → Settings → Advanced, you can scale the VM to suit your needs.
To work with Windows containers in Docker, you need to switch Docker to Windows virtualization: right-click the Docker icon in the notification tray and click Switch to Windows Containers. To download packaged software, you can use the Kitematic tool for Docker, which needs to be installed per the instructions on the website. Alternatively, you can simply download images from the Docker store.
There are more than 19,000 software packages available for Windows and more than 200,000 packages for Linux, ranging from Windows IIS servers to MSSQL servers for hosting purposes. The Windows images are bulkier than Linux images, but they are lightweight compared to a full Windows installation.
To host with Windows IIS inside Docker, we need to download the Windows IIS image. We can do this by running the following command in the command prompt:
>docker pull mcr.microsoft.com/windows/servercore/iis
You can check whether the Docker image has been downloaded with the command:
> docker image ls
After that, you can run a container from your IIS-based image (tagged iis-site in this example) with the command below:
>docker run -d -p 8000:80 --name perficient-example iis-site
To verify that the default site is loading, you can browse to it in your browser. You need the container's IP address to do this, as the IIS website currently can't be reached via localhost because of a WinNAT limitation. The Docker documentation indicates that this will be fixed in the future.
To extract the IP address from the running container, you can run the following command:
>docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" perficient-example
Note: the container name should be the same as in the docker run command. If it's not, the command will return an incorrect value or throw an error.
To browse the site, use a URL like the one below – the IP address from the previous step with the mapped port from the run command – in your browser or from the command line:
>curl -I http://192.168.0.5:8000
We can also run an existing .Net MVC application in Docker containers. To do this, you just need to follow the instructions as provided in this link from Microsoft.
We can also run SQL Server in combination with IIS. To do this, we just need to pull another image from the Docker store with the command below.
>docker pull microsoft/mssql-server-windows-express
And then, to run the SQL server, run the below command:
>docker run -d -p 1433:1433 -e sa_password=<sa_password> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-express
This will set up a SQL server with a password. You can then log in to SQL server by running SSMS in Windows with the SQL server container IP, username, and password that have been assigned.
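To find the IP address to use in SSMS, you can inspect the SQL Server container the same way we inspected the IIS one (a sketch – replace the container name/ID placeholder with the value shown by docker ps):
>docker ps
>docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" <sql-container-id>
Then connect in SSMS using that IP address, the sa login, and the password you supplied via sa_password.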
“If you really want to master Docker, make sure to find a project or a course as practice makes a man perfect.”
Outliers are individual points that lie outside the bulk (roughly 99%) of a dataset – they represent observations outside the normal experience. In this post, we will see how to detect these extreme outliers in Tableau.
Steps for detecting Outliers in Tableau:
I have used the Tableau Superstore dataset for detecting these outliers. Here I am going to visualize the outliers using month as the time axis, so that anyone can spot which months contain outliers with respect to profit.
The next step is to create an upper band and a lower band to identify the outliers. Any circles above the upper band or below the lower band will be considered outliers.
Any circle that lies above the upper-limit calculated field is an outlier; similarly, we must calculate our lower limit using the WINDOW_STDEV function, and any point that appears below this lower limit is an outlier (see the example calculations below).
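The calculated fields could look roughly like this (a sketch – the two-standard-deviation threshold and the field names Upper Limit, Lower Limit, and Outlier are assumptions you can rename or tune):
// Upper Limit
WINDOW_AVG(SUM([Profit])) + 2 * WINDOW_STDEV(SUM([Profit]))
// Lower Limit
WINDOW_AVG(SUM([Profit])) - 2 * WINDOW_STDEV(SUM([Profit]))
// Outlier – the Boolean field used later on the Color shelf
SUM([Profit]) > [Upper Limit] OR SUM([Profit]) < [Lower Limit]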
Right-click on the profit axis and select Add Reference Line. Choose Reference Band; this band starts at the lower limit and ends at the upper limit.
Every circle within this reference band represents a normal profit value. All the circles that lie outside this reference band are outliers, which we now need to highlight.
Select the circle chart type on the Marks shelf and place the Boolean outlier calculated field on the Color shelf. The orange circles are outliers and the blue ones are the normal distribution of profits by month. Hide the header of the right-hand axis and enable tooltips. When you check a tooltip, it will display True if the circle is an outlier and False if it is not.
In my previous post, How to Set Up Your Own VPN Server Using Amazon Web Services, we set up and configured an OpenVPN server using Amazon Web Services, and then we configured Windows and Linux machines to use the new VPN server. In this post, I'll show you how to set up your Android phone to connect to that same VPN server. This configuration will let you safely use the internet from your phone when you are connected to an untrusted network.
We'll set up our connection to the VPN with these basic steps:
Let’s get started!
On your phone, open the Google Play Store app and search for “OpenVPN Connect”.
Click the Install button to begin the installation process.
The OpenVPN Connect app needs access to your internal storage in order to read the profile configuration file that we will move to your SD card or other internal storage. If you are comfortable with this access, then click the Accept button.
Now that we have successfully installed the OpenVPN Connect app, let's go back to our Windows PC to set up our profile configuration file.
We’ll perform this step on our Windows PC. Recall from my previous post that we had edited a configuration file that specified the location of four different key files needed by the VPN client for communication with the server. Our new configuration file will be a near-copy of this original file, except we will embed the content of our four key files directly into this new file. This will give us a configuration file in “unified form” format, as described in this OpenVPN Android FAQ page.
On our Windows PC, start by locating our working OpenVPN configuration file. In the following screenshot, our configuration file is named “client.ovpn”, and it is located in the c:\Program Files\OpenVPN\config directory. You can also see the four key files in that same directory.
Make a copy of the file, naming it something different like “android.ovpn”. In the following screenshot, I’ve made my copy, giving it the name “android.ovpn”.
Now use the Notepad application to open the file for editing (be sure to use "Run as Administrator" to start Notepad, as our file is in a system directory – you can do that by right-clicking the Notepad shortcut, then choosing "Run as Administrator" from the popup menu). Once you have opened the android.ovpn file, scroll down to the location where we specify the key files.
Following the guidance in the OpenVPN Android FAQ page, we will replace each filename reference with the actual contents of that file. For example, the line containing:
ca "c:\\program files\\openvpn\\config\\ca.crt"
must be replaced with this:
<ca>
— actual file contents of the ca.crt file–
</ca>
Likewise, we will replace:
cert "c:\\program files\\openvpn\\config\\client1.crt"
with this:
<cert>
— actual file contents of the client1.crt file–
</cert>
and we will replace:
key "c:\\program files\\openvpn\\config\\client1.key"
with this:
<key>
— actual file contents of the client1.key file–
</key>
and replace:
tls-auth "c:\\program files\\openvpn\\config\\ta.key"
with this:
key-direction 1
<tls-auth>
— actual file contents of the ta.key file–
</tls-auth>
Note that for the tls-auth line we need to add the additional directive “key-direction 1”.
To grab the contents of the key files, use Notepad to open the key file, type “Control-A” to select all of the data, type “Control-C” to copy the selected text, then in the android.ovpn file type “Control-V” to paste it into the destination location.
In the following screenshot, you can see where I have performed these substitutions. The lines don’t appear to evenly break, but that is the nature of the original files. As long as you simply copy and paste, the data will be fine.
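If you would rather script the copy-and-paste step, here is a rough PowerShell sketch that appends the inline blocks to android.ovpn. It assumes the file names and paths used earlier in this post and must be run from an elevated PowerShell prompt; you still need to delete the original ca/cert/key/tls-auth lines that reference the files by path:
# Build the inline <ca>/<cert>/<key>/<tls-auth> blocks from the key files
$config = 'C:\Program Files\OpenVPN\config'
$block = @"
<ca>
$(Get-Content "$config\ca.crt" -Raw)
</ca>
<cert>
$(Get-Content "$config\client1.crt" -Raw)
</cert>
<key>
$(Get-Content "$config\client1.key" -Raw)
</key>
key-direction 1
<tls-auth>
$(Get-Content "$config\ta.key" -Raw)
</tls-auth>
"@
# Append the blocks to the new profile
Add-Content -Path "$config\android.ovpn" -Value $block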
You can actually test your file on the Windows PC by using it to connect to the VPN Server. You use the same connection process as you used with your original ovpn file:
If there are no errors in your file, your VPN connection will connect and work as expected.
Now let’s continue by moving the file to our phone.
With Android, there are normally countless ways to transfer files from your PC to your phone. Choose any method that is convenient for you, but I decided to use the bluetooth file transfer method. This avoids accidental disclosure of the file (and the keys it contains) through email, shared cloud folders, etc. Another good transfer method is USB transfer, or better yet you could transfer the file directly to your SD card if it is removable and if your PC can write to SD cards.
After pairing my phone and transmitting the file via bluetooth, I can see it in my phone’s internal storage download folder.
Now let’s open the OpenVPN Connect app.
Press the 3-dots icon in the upper right. This will expose the next menu.
Choose “Import”, then “Import Profile from SD card”
I’ll open the Download folder, as that is where the android.ovpn file is located after the Bluetooth file transfer.
Choose the “android.ovpn” file, then press the “Select” button.
Now press the “Connect” button.
Android gives us a warning that our app will be monitoring network traffic. Let’s continue by pressing OK.
The app indicates that we have successfully connected. Now let’s perform the usual test where we visit the www.dnsleaktest.com website.
So far so good. The reported IP address indicates that our traffic is originating from our AWS VPN server. Continue by pressing the “Standard test” button.
The results of the test confirm that our VPN is working and no DNS leakage is occurring. Now we have one final task to make sure we can use our VPN from anywhere.
I’ve just run the test while my phone is connected to the same wifi that my other machines are connected to. Remember that I originally configured the AWS Security Group to only allow access from my own IP. This means that if I left my house and began accessing the internet through my cellular network, I would end up with a different external IP address and wouldn’t be able to connect to my AWS VPN. Likewise, if I visited my local coffee shop and joined its wifi network, I would have yet another external IP address and still couldn’t connect to my VPN. We briefly touched on this in my previous post, but let’s fix this now.
Start by logging in to the AWS management console, navigating to the EC2 Dashboard, then choosing Security Groups.
Select our OpenVPN security group from the list, click the Actions button, then choose “edit inbound rules” from the actions list.
As seen in the screenshot above, for the “Custom UDP” rule in the list (first item seen in the list above), change the source to “anywhere”. Then click Save.
Our Security Group will now allow inbound VPN connections from anywhere. This makes it important that you don’t accidentally share the keys or configuration files for the VPN server – you don’t want unknown users connecting to it. Also make sure that you don’t change the SSH rule – you still only want to access that from your home IP.
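If you prefer to script this change instead of clicking through the console, the AWS CLI equivalent looks roughly like this (a sketch – the security group ID is a placeholder and the port shown is OpenVPN's default 1194; use the group ID and UDP port from your own setup, and consider revoking the old home-IP rule afterwards):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 1194 --cidr 0.0.0.0/0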
Now let’s test our phone VPN connection again, doing the following:
Let’s get started. Open the OpenVPN Connect app:
Press the Disconnect button.
Now turn off the phone wifi, and allow it to reconnect to the cellular network.
Once we are back on the cellular network, press the Connect button in the OpenVPN Connect app.
Testing with the www.dnsleaktest.com website confirms that we still have a good, non-leaking, VPN connection.
Now we can connect to the VPN from any location.
Oracle recently announced the general availability of the latest patchset for Siebel Innovation Pack 2016 (IP2016).
Patchset 16.9 can be found on Oracle Support, although it is a little tricky to locate. Use these search terms to find it:
For Windows, you would select this file:
This takes you to the download window and the two files that you need to download:
After the files are downloaded and unzipped, the next step is to create the actual installers. Run the Siebel network installer (snic.bat) as an administrator:
Choose “create a new image”:
Choose where you want the installers to be created:
Select your platform:
Lastly, choose the products for which you want to create installers:
The complete list of fixes can be found in the readme file. However, for us here at Perficient, the most important issue that was resolved was the dedicated web client’s inability to launch.
If you are a user of Oracle’s applications and are interested in learning about our capabilities (e.g., implementations, integrations, support), please let us know through the “contact us” form at the bottom of the page.
It's been 4 years since the first Raspberry Pi was released on February 29, 2012. In the last four years there have been a number of updates to the Raspberry Pi platform, including the Raspberry Pi 2, released one year ago. The Raspberry Pi has been fairly popular among hobbyist, education, and industrial markets, with over 8 million units shipped to date! Every new edition of the Raspberry Pi hardware has seen an increase in sales over previous editions, with the Raspberry Pi 2 shipping 3 million units in the year since its release. The IoT (Internet of Things) market is growing rapidly, and the Raspberry Pi is one of the core hardware platforms being used. This 4th birthday of the Raspberry Pi brings the release of the all-new Raspberry Pi 3 with an even faster CPU and built-in wireless capabilities!
To celebrate the 4th birthday of Raspberry Pi, the Raspberry Pi Foundation announced today that they are releasing the all new Raspberry Pi 3. While the Raspberry Pi 2 was a really nice incremental upgrade from the Raspberry Pi 1, the Raspberry Pi 3 is yet another pretty big upgrade itself. The biggest hardware changes with the Raspberry Pi 3 are a new, faster, 64-bit CPU (now @ 1.2 Ghz) and integrated 802.11n wireless LAN with Bluetooth 4.1 (including Bluetooth Low Energy). This integrated wireless capability just might be the biggest feature update to the Raspberry Pi platform that will simultaneously help propel the IoT market to new heights. This is all at the exact same $35 price point.
Here’s a simple comparison of the changes between Raspberry Pi 2 and 3:
The new Raspberry Pi 3 offers some significant performance gains over older models. This is a natural progression since computer hardware technology keeps advancing year over year according to Moore’s Law. The Raspberry Pi 3 is yet another indicator of this law in action.
The Raspberry Pi 3 has displayed up to a 10x performance increase over the original Raspberry Pi. This was measured using a multi-threaded CPU benchmark, so real-world performance comparisons will vary depending on the application code being executed.
The Raspberry Pi 3 now uses a 64-bit CPU. This is a first for the Raspberry Pi, as up through the Raspberry Pi 2 the ARM CPUs used were only 32-bit.
Another notable performance increase for the more traditional PC-style usage (not IoT, but many Pi’s are used as “desktops”) is that even the graphics capabilities have been improved with the new ARM chip. The Raspberry Pi Foundation hasn’t released any specific benchmark comparison on this, but have mentioned a slight performance increase.
Windows 10 IoT Core has supported the Raspberry Pi 2 since it launched, and as announced today by Microsoft, the Raspberry Pi 3 will also be supported. Since the Raspberry Pi 3 is built to be compatible with the Raspberry Pi 2, this is the logical outcome. However, not every platform change, especially a CPU change, can be supported automatically without code changes. Microsoft hasn't said how extensive the changes were to support the Raspberry Pi 3, but they have released a new Insider Preview build, available today, that targets it.
Here’s a neat preview video from Microsoft demonstrating an example of what Windows 10 IoT Core and Raspberry Pi 3 can be used to create:
This is my second blog article covering Windows 10 IoT Core. See the first part here: https://blogs.perficient.com/microsoft/2016/01/windows-10-iot-editions-explained/.
So, you decided to develop your first IoT (embedded) application using Windows 10 IoT Core. Awesome! What do you need for that? Surprisingly, not that much.
Regarding the board choice: the Raspberry Pi 2 is by far the most popular IoT board, and in my blog series I'm going to cover only this board. That doesn't mean other board choices aren't good, but we have to decide on something.
Raspberry Pi 2 (make sure it's a version 2 – version 1 is not supported by Windows) can be obtained from many different retailers. Microsoft and Adafruit (a popular maker-electronics retailer) teamed up and developed a Starter Pack for IoT which included a Raspberry Pi board, SD card, case, and power supply. However, at the moment this starter pack is sold out. Plus, there are less expensive options… Let's explore them.
Technically, when we talk about an IoT board, we mean the following:
So, getting set up with IoT development is not that complicated or expensive. In the next blog post I'm going to cover setting up the IoT board and the development environment.
With the release of Office 2016, Microsoft has made its goals clear. Led by CEO Satya Nadella, the Redmond, WA-based company seeks to re-engineer our productivity and collaboration. Its strategic initiative is based on three tiers: mobility, communication, and productivity.
There are many subtle changes that give an instant feeling of increased productivity. While composing an email in Outlook, adding an attachment can be as easy as using the new dropdown on the attachments button. This menu lists your recently saved or opened documents, similar to Windows 10's Quick Access in File Explorer.
As you become acclimated with up-to-date Microsoft products you’ll notice yourself doing more while clicking less. Here are a few notable additions to the Office Suite:
The ability to define a word by right clicking it and selecting definition has been replaced with “Smart Lookup”. Smart Lookup pulls data from Bing, Wikipedia, Oxford Dictionaries, and other relevant online sources.
Results are displayed in the right side panel as ‘Insights’. These are divided into 2 categories: “Explore” and “Define”. The Explore tab will return results based on the context you are using. “Ford” will return different results when the context differs between the truck and the person. However, the definition of the word will return the same result. Click and drag photos from Smart Lookup to quickly add content to your documents.
Unlike defining words in Office 2013, in 2016 you are not required to sign in to your Microsoft account to use Smart Lookup.
Remember Clippy? If not, here's a photo from Smart Lookup:
The legacy of Clippy lives on. Clippy asked you what you might want to do and came into your space uninvited. Now, neatly tucked into the top ribbon, a box waits for when you may need something. Simply begin to type what you want to do and a responsive menu will appear. You can make real-time changes from the Tell Me box. This eliminates the hassle during those times when you cannot recall which tab to use or would rather not use dialog boxes.
Document collaboration (team editing) in real time gives us even more reason to use Microsoft cloud storage. This functionality is a must for everything from college group projects to corporate presentations. Here's a brief demo:
Microsoft also released Visio and Project 2016. At first glance the featured templates have some notable additions.
You are encouraged to use Project 2016 templates for Scrum projects, for a start-up business plan spanning four months, for managing a Six Sigma process, and for planning a wine tasting fundraiser. Microsoft has expanded Project's support for Team Foundation Server and Visual Studio Online.
Inside the flagship diagramming application, Visio, the shapes have become more detailed and modernized (see below). Featured templates now include "Starter Diagrams" to aid you in designing and mapping.
Welcome back from a great Ignite Conference! By now, I hope everyone knows that the conference recordings are posted to Channel 9, a section of MSDN. Microsoft does a great job of recording and publishing all of this content quickly; it's pretty awesome.
One of my biggest challenges at the conference was knowing which session to pick. There were 3-6 sessions at any given time that I wanted to go to. All week it was like that, crazy.
This year, Microsoft added "foundational keynote" sessions. Sadly, most of them were on Monday and overlapped each other. I went back and downloaded the videos and they are all amazing, filled with product name changes, roadmap discussions, and a very transparent look at Microsoft's cloud strategy.
Also, Microsoft’s top talent gave the various presentations – Julia White, Seth Patton, Bill Baer, Brian Harry, Robert Lefferts, Sam George, etc…
DevOps as a Strategy for Business Agility
This was my first session of the conference I saw live, and it was awesome! I’ve already blogged here with a thorough review. Everyone interested in Visual Studio Online, TFS, and Agile tools, watch this replay!
It’s also a great look at how Microsoft is now deploying code to the cloud every 3 weeks.
Since I saw this live, I’m cheating and not counting it on my replay list. So, here are my picks for top 3 replays of the conference:
1. Office Development Matters, and Here’s Why…
Microsoft wants people using Office, and being productive while using Office 365 in the cloud. And I think Microsoft finally understands the success of their platform should be about the ecosystem of developers who build solutions.
The facts show that people are using Office, and a good amount of that content is being generated on Office 365.
Why is all this important? Office Add-Ins.
What used to be called Office Apps is now called Add-ins. User feedback was that the term "App" in this context was confusing. In addition, Microsoft is expanding development tools, methodologies, and patterns for developing solutions in Office.
For any of you in the SharePoint world, this means that Provider Hosted Apps are now Provider Hosted Add-ins. Kind of weird for SharePoint, but I get the reasoning, given the "app inside an app" terminology problem on iOS and Android.
In addition to the name change, Microsoft is pouring resources into Office development capabilities. Users are getting work done in Office, so why not surface solutions in those same experiences. This means as a developer I can write an add-in for Word, Excel, or Outlook and surface that add-in through the mobile, tablet, desktop, or browser version of Office that user is operating.
If you think about that for a moment, that’s kind of a big deal. Instead of writing a completely custom web-based app with native or responsive mobile experiences that does all of your desired functionality, why not break up some of that into various Office add-ins and leverage the UI capabilities in those experiences. Huge.
Access to your data is also a key focus for development. Microsoft will be supporting a rich set of REST APIs that will allow your add-in to access every bit of information about your user, content, or application. This will allow anyone to build enterprise Office add-ins for almost any scenario.
Moving on to SharePoint Add-Ins, the focus shifted to the Office 365 Patterns and Practices. If you haven't heard about PnP yet, I encourage you to check it out. It's a group of folks from MSFT and the community coming together on a code repository in GitHub. This sample code pack helps define how we create server-side code solutions and integrate them with Office 365. Currently it is SharePoint-focused, but it will soon expand further into Office add-ins.
Another way to think about this new concept is to think about your app being a service. Your service displays information to the user via many endpoints – web browser, mobile browser, native mobile app, SharePoint, or Office. In order to unlock your app as an Office or SharePoint add-in, just connect to Azure Active Directory.
By connecting to Azure Active Directory you get:
So this is a really easy way to light up any existing web app in Office 365. There are a number of demos in this session; I encourage you to watch them. Microsoft is partnering with a wide range of 3rd-party vendors to help ensure the marketplace supports as many data sources, storage locations, platforms, frameworks, and vendors as possible.
Great session and I look forward to exploring new Office and SharePoint Add-in solutions.
2. The Evolution of SharePoint: Overview and Roadmap
Although I wasn’t there, I’m guessing this was the most attended session during its time slot. You can hear the crowd on a number of occasions in the replay.
SharePoint Online and Office 365 are growing FAST! In the past, Office 365 seats grew through email workloads. However, in the last 18-24 months the demand has shifted towards SharePoint workloads and mobile apps, both in users and in content growth. 38% of all SharePoint seats are now online, which also means that a significant number of customers are still using SharePoint Server on-premises.
Next, Seth does a good job of explaining the evolution of SharePoint, specifically defining Microsoft’s three key components of Experiences, Extensibility, and Management. As these services move more to the cloud, Microsoft is able to break down walls of product specific barriers.
The cloud allows for rapid deployment of new features and services. As Seth describes more about the vision for these new experiences, it’s clear that Microsoft is de-emphasizing the products themselves. Focusing instead on enabling hybrid solutions, increasing security, and connecting on-premises solutions to online experiences.
What does this really mean? From my interpretation, SharePoint Server 2016 will not implement many new features, certainly not as was the case with previous server versions. The base workloads of Team Sites, Search, Enterprise Content Management, BI, Portals, and others will remain largely untouched. New experiences will be built in the cloud and enabled back to on-premises users.
If you think about what those experiences are, it makes sense. For instance, Delve. How could you possibly deploy that on-premises? It’s natively based in the cloud on top of SharePoint Online Search and Azure Machine Learning. But, you can now connect up your hybrid environment, which now creates a single search index, and Delve will surface both online and on-premises content. Pretty cool!
In addition to focusing on enabling hybrid scenarios, new targeted experiences will be introduced. Some of these experiences are available in the cloud today, such as Power BI, Delve, Yammer, OneDrive, and NextGen Portals.
Digging into the above statements is tough. I have a lot of open questions. What level of security exists on a search index in O365? If I’m a customer who is cloud averse, will I be able to use these new targeted experiences?
Management has also been updated: there is a new unified service and compliance layer for managing all of Office 365. Microsoft will also keep configuration and customization options open by maintaining APIs and SharePoint Add-ins and by leveraging Azure.
After a demo from Bill, Seth emphasizes a key point for customers and developers – These new experiences are meant to be additive, and not meant to replace existing SharePoint workloads. He states they will continue to invest in improving the core features. (as I mentioned above)
What is the release cadence for Office 365 and On-premises?
Monthly updates to Office 365! This isn’t your father’s Microsoft. It’s really neat to see how they’ve adapted such an agile development methodology that delivers value at such short time intervals. Of course, the cloud enables that speed. If you remain on-premises, you will be on a much slower cycle.
It's very clear throughout this presentation that Microsoft wants to enable customers to move to the cloud on their own terms. This is a stark contrast to a few short years ago, when the direction was full steam ahead towards the cloud. Bill discussed this history in his presentation.
In the past, Microsoft viewed hybrid as a way to rationalize getting to the cloud, with that being the sole premise. I’m glad to see that Microsoft has embraced the reality that many of us in the field have experienced for years – not all workloads are ready for the cloud. The new investments in hybrid scenarios will ensure a much more consistent and robust experience, enabling on-premises customers to subscribe to cloud innovation.
With a focus towards Files, there were also some hints in the presentation about improvements to OneDrive. There will be a new sync client tool that will separate your OneDrive personal from your OneDrive for Business. You will be able to do selective sync. And there will be new integration into Outlook called Modern Attachments – anytime you drag a file into a message, you will be prompted to instead upload that file to OneDrive, Outlook will create a link instead of the attachment, and everyone on the To or CC line will get access to that document. Awesome!
Finally, for all you IT Pros, Microsoft has changed the SharePoint installation options for the first time since 2010. You now have the option at install time to specify which SharePoint role that server will assume – WFE, App, Search, Cache, or Custom. In the past, SharePoint installed all components to every server and you configured which services you wanted to run on each particular server. Now you can install only those specific services, reducing the overhead required on each server. Lastly, there is now the ability to update the servers in production with no downtime. I'm curious to see the details behind this, but if it works, it will be amazing!
Great session and roadmap discussion for SharePoint. I encourage you to watch the session for the full commentary and to see the demos!
3. Create the Internet of Your Things: The Microsoft Vision for IoT
The Internet of Things is here and it’s powered by Microsoft Azure! IoT has been around for a long time. So why is the demand just now picking up?
It's important to understand how IoT has changed in just the last 5 years. Traditional IoT workloads were things like alarm clocks, refrigerators, cars, security systems, televisions, coffee makers, and HVAC. Those workloads are still around, but new and innovative workloads are now being developed more cheaply and easily than ever:
Health monitoring, behavior modification, pet tracking, information capture, new devices and sensors, lawn care, sleep tracking, leak detection, medication adherence, sports and fitness, environmental sensors, smart vending machines, and many others.
Sam George, Microsoft Director of IoT, presents a maturity model for how organizations are progressing through IoT. Stage 1 is operational efficiency. Customers are primarily connecting devices and collecting basic information about them.
An example is given of a fleet management company: they need to monitor where their trucks are, along with basic information about their health, location, and state, and then set rules and alerts to improve operational efficiency.
Stage 2 of the IoT Business Maturity Model is Business Intelligence. Analyzing and visualizing all of that data. Using predictive analysis to discover patterns in your data. Finally, take those insights and do something with them. For the fleet management company, predicting traffic patterns based on time of day and making sure the truck takes the quickest route.
In another session, this example is taken further with that truck carrying seafood. The sensors on the truck monitor temperature, operational status of cooling fans and air conditioning units. In addition, weather information is being tracked for the route of the truck to its destination. By feeding all of that data into an Azure Machine Learning algorithm, predictive analysis can tell you if the fish will spoil before it reaches the restaurant.
Stage 3 of the model is Business Transformation. Combining IoT & other data with advanced analytics to power new services and revenue streams, expand into new adjacent businesses, and create new partnership opportunities.
Here Seth discussed ThyssenKrupp Elevator and how they are using IoT to change the way they think about maintenance schedules. It's a fascinating use case, and I'll let you read all about it here.
So what is Microsoft specifically doing for IoT development?
Currently, Event Hubs can be used for IoT ingestion services. They support HTTP/AMQP protocols and can handle 1 million publishers and 1 GB/s ingress. Event Hubs are available worldwide today, processing 18 billion messages and ingesting 60+ TB of data each day.
Later this year, Microsoft will be releasing the IoT Suite, a comprehensive set of tools that provides the features above plus command and control, device integrity, device registry, and device management. The IoT Suite will have cross-platform support with an open-source "agent" framework. It will work with RTOS, Linux, Android, and iOS, and there will be APIs in .NET, Java, JavaScript, and C.
The IoT Suite will encompass technologies like Azure HD Insight Storm, Azure Machine Learning, PowerBI, Azure Data Factory, Azure HBase NoSQL, and Azure Service Fabric. Presentation and connectivity can be provided to desktop and mobile devices through App Services, Azure BizTalk Services, Notification Hubs, and Microsoft Dynamics.
Really, what the IoT Suite does in the background is provision a set of Azure services. It creates the IoT Hub, sets up Stream Analytics or Storm, and provisions a storage account. According to the presentation, this will all be configurable, with the goal of making the IoT provisioning process as easy as possible.
Microsoft’s vision for IoT is not limited to only Azure. Windows 10 will also ship with a rich set of features for IoT that will provide a single OS, universal Windows drivers, security, industry peripheral support, interoperability and will be Azure IoT ready.
There will be 3 Windows IoT editions, details in the graphic on the right.
Also announced, Windows 10 IoT Core preview is now available for Minnowboard Max and Raspberry Pi 2!
For more information visit www.windowsondevices.com
Well, that wraps up my list of the Top 3 Replays for Ignite 2015. I hope you enjoyed the review. And since I'm a sucker for knowing when to end my lists, here are a few more sessions you should watch:
4. An Overview of the Microsoft Application Platform for Developers
5. Next generation Office 365 Controls, Extensibility and Team Productivity
6. Windows Server & System Center Futures—Bring Azure to your Datacenter (Platform Vision & Strategy)