Windows 11 AI Integration Marks a New Era for User Experiences

When I joined IBM in the 1980s, they assigned me the task of helping create what eventually became one of the very first CRM applications. Like most at that time, I had to work with MIS (now called IT), and the result was dreadful. Instead of the app making things easier by automating many repetitive manual tasks, it required more labor, was incredibly annoying to use, and showcased a disconnect between what I thought I’d asked for and what MIS delivered.

This experience was far from uncommon because, even though I could code, the MIS folks didn’t understand the business. They tended to make decisions in a vacuum that undoubtedly made their job easier in terms of creating the app, but that made users’ jobs far harder because the users had surprisingly little to do with the process.

Well, AI is about to change that by gradually turning users into programmers. Microsoft is at the forefront of this with its efforts to place increasing AI capability into Windows, Office, and the Microsoft Store.

Let’s explore how Microsoft AI will benefit collaboration and user experiences, and we’ll close with my Product of the Week: a new set of headphones from Dell’s Alienware unit that look like nothing you’ve ever seen.

The Bad Developer Joke

A Facebook post said something like, “Giving users the ability to work with AIs to code will mean that users will finally need to articulate what they want, so your jobs are safe.”

The implication was that users generally don’t know what they want, so giving them the ability to create directly with AI will end badly. However, both my experience and this joke highlight the underlying issue that programmers and users lack training in collaborating with each other.

Part of the underlying problem is that programmers typically have little interest in business operations, and operations employees have little interest in coding. Since neither side generally wants to learn the nuances of the other, this can lead to some very unsatisfied users and very frustrated programmers.

AI has the potential to break through this problem because, as it advances, it will learn about the user and, over time, deliver outcomes closer to what the user should want.

I say “should” because, in my experience, one common problem when creating an app is that the user hasn’t entirely thought through what they want. Only after seeing a draft of the app do they suddenly realize that what they want isn’t anything like what they got.

AI gets around this problem because it doesn’t have a personality, so it doesn’t get irritated, angry, or frustrated. It learns through iteration and will iterate endlessly, steadily closing the gap between what the user asked for and what the user actually needs.

But users and programmers will still need to develop competence with the tool. Otherwise, they will likely become frustrated by the endless iterations that result when the user cannot completely articulate what they want, and particularly what they don’t want, in the new app.

Windows 11 Baseline

By placing generative AI into Windows, Microsoft creates a forcing function: users will learn how to work with generative AI to get better results. They will have to articulate their needs fully to reduce the number of annoying iterations the AI must go through to understand them, and, most importantly, they will have to develop the skill of understanding and communicating what they actually want.

We’ve had some mixed results with this kind of thing before. Boolean operators have long been the standard way to refine web searches, and those who learned them found they could get the results they wanted far more quickly than those who didn’t. Still, we aren’t exactly up to our necks in Boolean-savvy searchers, showcasing that the weak link remains users who refuse to learn the skills needed to become more efficient.
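
For readers who never picked up those operators, here is roughly what that kind of refinement looks like in practice. The following is a minimal, purely illustrative Python sketch (the document titles and variable names are invented for this column, not anything Microsoft or a search engine actually ships); it simply shows how adding AND/NOT conditions narrows a plain keyword match:

    # A handful of made-up document titles standing in for search results.
    documents = [
        "Windows 11 Copilot preview announced",
        "Windows 10 security update notes",
        "Alienware AW920H headset review",
        "Copilot for Microsoft 365 pricing",
    ]

    # Plain keyword search: anything that mentions "copilot".
    plain = [d for d in documents if "copilot" in d.lower()]

    # Boolean refinement: copilot AND windows NOT 365.
    refined = [
        d for d in documents
        if "copilot" in d.lower()
        and "windows" in d.lower()
        and "365" not in d.lower()
    ]

    print(plain)    # both Copilot items
    print(refined)  # only the Windows 11 Copilot item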

The difference with AI, however, is that it can bridge that gap itself by learning what makes a particular user unique. Unlike Boolean logic, which is static, the AI will evolve into a much more personalized interface and substantially reduce the need for the user to master a specialized AI communication skill set.

Those users who put in the effort to learn how to work better with AIs will have an advantage, and given that the AI will be built into the operating system, they’ll get plenty of opportunities to practice. Still, I expect much of the communications heavy lifting will increasingly come from the AI, not the user, as shown in Microsoft’s Windows Copilot introduction video.

Microsoft is blending AI across the Windows 11 platform to make the OS easier to use, apps easier to find, and the Microsoft Store a better venue for developers to present those apps, weaving AI into every aspect of the OS and the user experience.

Wrapping Up

The move to aggressively place AI into all aspects of Windows will dramatically change the user experience over time. Just as we began with a command-line interface, then moved to a graphical user interface (GUI), and are now moving to an AI interface, each step should improve productivity, reduce user frustration, and bring the development process closer to the users those apps are supposed to assist.

We are at the very beginning of this technology’s evolution, so expect growing pains as it matures. However, this marks the first significant departure from the traditional approach to technology, which compelled users to acquire new skill sets to reap the benefits. Now we are developing AI systems that learn how to work with users, effectively flipping that dynamic on its head and making for a far more interesting, and hopefully far less frustrating, result.

While there are a lot of concerns surrounding AI, for now, these moves by Microsoft represent little risk but promise significant improvements to productivity and user satisfaction.

Tech Product of the Week

Alienware Tri-Mode Wireless Gaming Headset AW920H – Lunar Light

Alienware products aren’t cheap, so when Dell sent me a set of its AW920H headphones priced at a very reasonable $179.99 for tri-mode wireless headphones (I found them for as little as $159), I was interested because most headphones I get in this class are priced at $250 or more.

These are Dolby Atmos headphones, so you get virtual surround sound. They offer up to 55 hours of battery life, and a 15-minute USB-C fast charge provides up to 6 hours of playback. Their industrial design is consistent with Dell’s Alienware Aurora R13 gaming desktop, and they come with a mini-phone (3.5 mm) cable, so you can use them on an airplane or with a device that doesn’t support Bluetooth.


The Alienware Tri-Mode Wireless Gaming Headset AW920H supports Dolby Atmos and provides up to 55 hours of play on a full charge. (Image Credit: Dell)

One of the cool features is that if you have an Alienware PC or laptop, the headset will sync the colors of its LEDs with the LEDs on your PC. Like most headphones in this price class, they have AI-powered active noise cancellation on both the inbound audio and the microphone (I tend to use Discord when I can, and it gets annoying when game sounds bleed into the voice stream).

I still haven’t found a way to game on a plane successfully. There often isn’t enough bandwidth on in-flight Wi-Fi, and there just isn’t enough room on the little tray table for a gaming PC by itself, let alone a gaming PC and a mouse, and most of what I play doesn’t work well with a gaming controller. However, Dell showcased a gaming controller prototype at CES that may eventually fix this.

The Dell Alienware Tri-Mode AW920H headphones are a bargain for what they do. While they are primarily focused on gaming, they should be fine for movies and music as well, and they are my Product of the Week.