
The Sa11ytaire Experiment: Part 1 – Setting the Scene


This post starts a discussion on how a solitaire app can be played with a wide range of input and output methods, and invites feedback on how the app could be enhanced to provide a more efficient experience for those interaction methods. The app built to explore this subject is available at Sa11ytaire at the Microsoft Store.

Apology up-front: When I uploaded this post to the blog site, the images did not get uploaded with the alt text that I'd set on them. So any images are followed by a title.

 

Introduction

When considering how an app might be used with different methods of input or output, it's not enough to simply consider whether it can be used at all. It's also very important to consider what's the most efficient way in which the app can be used. With that in mind, some colleagues and I have built a simple solitaire app, aimed at generating discussion on efficient ways in which all input and output methods available on Windows might be used to play a game of solitaire. As the app is today, it's very much a "V1", with lots of opportunities to add more functionality. But hopefully it is at a point where people can get back to us and say: "Yes, I get how technically the app can be used in a bunch of ways, but what would work great for me, is …".

While the app explores many types of input and output methods, it's by no means a comprehensive set. In fact I wouldn't say that I know what a "comprehensive set" of input and output methods is. But to get the ball rolling on the discussion, the following input and output methods have been explored with the app to one degree or another, and each is discussed in its own section later in this post.

Touch
Mouse
Pen
Keyboard, via tabbing and arrowing, and via access keys
Windows Speech Recognition
Eye control
Switch device control
Windows High Contrast themes and dark app mode
Windows display scaling
Windows Magnifier
The Narrator screen reader

A video briefly showing all the input and output methods listed above is at The Sa11ytaire app for Windows 10. And yes, it's the typical sort of low-quality video that I always make. (At least my dog stayed quiet throughout the filming.) And I've yet to add captions to the video.

 

In later versions of the app we can concentrate on other topics not discussed in this post. For example, configuring the size of text shown in the app, independently of the other visuals shown. And considering how announcements made by the app that include multiple lines of text relate to the braille device experience. And how some success/failure notifications that are audio-only today can have matching visual output. Basically – there's no end of interesting topics to consider in an app like Sa11ytaire.

Another important aspect of the app is to consider how different types of input and output methods can be used together. It's not sufficient for the app to be usable with one of the methods listed above only for as long as none of the other methods is also being used. For example, say you use a switch device, and also prefer visuals shown using colors from the High Contrast Black theme that comes with Windows. In that case, the highlighting that cycles through the app as the switch device is used had better be shown with an appropriate color from the High Contrast Black theme. Or say you use Windows Speech Recognition (WSR) to provide input to the app, and have the Narrator screen reader announce details about the state of the app. That's a particularly interesting case, given that both WSR and Narrator will leverage the programmatic representation of the app exposed through the UI Automation (UIA) API. So what single UIA representation can provide the most efficient experience when WSR and Narrator are interacting with the app at the same time?

Now, that all said, the current V1 of the Sa11ytaire app does not support all combinations of the various input and output methods listed above. But surely there's no reason why a later version of the app couldn't.

 

Figure 1: The default visuals of the Sa11ytaire app.

 

Key Learnings

The Sa11ytaire experiment has been a fascinating project to work on, and I've learnt a number of things in the process. But as I think back on this, there are two things that really stick in my mind.

Use standard controls that come with the UI framework

Many of the UI frameworks that you may be using on Windows (for example, UWP XAML, Edge/HTML, WPF, WinForms, Win32) will do a ton of work on your behalf to make your app as accessible as possible by default. It often won't be able to make the app fully accessible by default, but by using standard controls that come with the framework, you'll get a great head start. The interactable elements presented in the Sa11ytaire app may not all visually appear to be standard controls, but they are all based on standard controls. In fact every interactable element in the main area of the app is based on one of the following standard controls: Button, ToggleButton, ListView or ListViewItem. As such, the elements are going to support input methods such as touch, keyboard and eye control by default. And they'll all support being programmatically controlled through the UIA API (which is essential for some types of input, such as touch control of Narrator), again without me having to take any specific action to enable that. By default, I would always base an interactable element on a standard control, regardless of whether it shows custom visuals.

If I pushed a UI change without testing both the programmatic and visual representation, then I pushed it without testing it

Typically when I'm working on a project like this, things are pretty frantic. I'll try to squeeze in a fix or update in the time I have, and then get on with whatever I was meant to be doing. During the app's development, I made a change to the UI relating to the cards being moved to what we called the "target piles". I ran the app, and the change looked good, and I pushed it to the repo. Before long, Tim, a dev collaborator on the app, pulled the change and let me know that after he'd moved a card to a target pile, his screen reader said that the target pile was still empty. So I'd done exactly what I ask devs not to do. I'd verified the visual representation of the app after my UI change, but not the programmatic representation. If I'd taken a few seconds to point the Inspect tool that comes with the Windows SDK at the UI I'd changed, I would have learned I'd broken the app. So no matter how rushed I feel, if I don't verify both the visual and programmatic representation of the app, then I'm pushing a change without testing it. If I'm working on an app in collaboration with others, I have a responsibility to do better than that.

 

Methods of input and output

The sections below detail some of the input and output methods used with the Sa11ytaire app.

 

Touch

Touch control of the app worked by default, due to the use of standard controls in the app.

 

Mouse

Mouse control of the app worked by default, due to the use of standard controls in the app.

 

Pen

Now, someone might ask here, why would anyone want to play solitaire with a pen? But at this point I think a more interesting question is, if someone wants to play solitaire with a pen, can they?

Pen control of the app worked by default, both directly at the screen and with an external tablet, due to the use of standard controls in the app.

 

Keyboard via tabbing and arrowing, and use of the Space and Enter keys

Keyboard control of the app worked by default, due to the use of standard controls in the app.

This meant that the Tab key could be used to move keyboard focus between the Button, ToggleButtons and ListViews in the app, and the Arrow keys could move between the ListViewItems in the ListViews. A press of the Enter or Space key would then perform the appropriate action associated with the focused control. For example, the "Next card" element would be invoked, the "Upturned card" or "Target pile" elements would be toggled, and a "Dealt card" element would be selected.

Given that it's not efficient to have to tab through every interactable element in the app in order to reach a specific element of interest, we added support for a press of the F6 key to move keyboard focus to the first focusable element in each of the three main areas of the "Remaining cards", the "Target piles", and the "Dealt card piles".
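To make that concrete, here's a hedged sketch of the sort of F6 handling involved. This isn't the app's actual code, and the element names (NextCardDeck, TargetPileC, CardPile1) are hypothetical stand-ins for the first focusable element in each of the three areas.

// A hedged sketch of F6 handling in the page's key event handler, not the app's
// actual implementation. NextCardDeck, TargetPileC and CardPile1 are hypothetical
// names for the first focusable element in the Remaining cards, Target piles and
// Dealt card piles areas respectively.
private int areaIndex = -1;

private void MainPage_KeyDown(object sender, KeyRoutedEventArgs e)
{
    if (e.Key == VirtualKey.F6)
    {
        Control[] firstFocusableInArea = { NextCardDeck, TargetPileC, CardPile1 };

        // Cycle to the next of the three main areas, and move keyboard focus there.
        areaIndex = (areaIndex + 1) % firstFocusableInArea.Length;
        firstFocusableInArea[areaIndex].Focus(FocusState.Keyboard);

        e.Handled = true;
    }
}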

Now, we did have an opportunity to start experimenting here. Often in the lists of dealt cards, some of the cards are face-up and some are face-down. Given that there's no action you can trigger with the face-down cards, we took action to disable them. This means keyboard focus can't reach them. On the one hand, this seems appropriate for the requirements of the game. But on the other hand, this isn't how lists typically work. You rarely encounter lists with a mix of enabled and disabled items. And one of the essential ingredients in the recipe for a great experience is being "intuitive". A colleague who uses a screen reader pointed out that it could be confusing to be told that you're at (say) the fourth card in a list of four cards, but then find you can't arrow to any other card in the list. If this model really is going to be used going forward, then we'll need to update the app such that it's clearer to the person playing the game exactly what's going on. So for a later release we'll do that, unless we decide that overall it's preferable to have all the cards enabled, and take action to prevent a face-down card from being selectable.

Regarding one interesting implementation detail, say a list of cards has multiple face-down cards and one face-up card. Originally, when we moved the face-up card away from that list, we enabled the previously face-down card beneath the card being moved. Sometimes we'd find that at that point, the newly enabled card was unexpectedly not in the tab order of the app. No amount of re-ordering the action taken in the code seemed to prevent that issue, and the enabled card would remain out of the tab order until it was selected somehow. So to account for that, the code now does something rather unusual: it always maintains at least one enabled item in the list. That item might change its appearance as cards are moved around, but it always stays in the list. Even when the list is "empty", it still contains an item, which is the empty slot where a king can be placed.

Another aspect of keyboard usage which we'd like to explore more, is where would it be most helpful for keyboard focus to end up after specific actions? For example, say you tab over to a target pile and press Enter to move a selected card in a dealt card pile over to the target pile. Today, after taking that action, keyboard focus is left at the target pile. But might it be more helpful for keyboard focus to be set back in the dealt card pile from which the card was just moved? After all, it's probably more likely that you'll want to interact with that dealt card pile than the target pile. We actually tried doing that, but found that when we programmatically moved keyboard focus back to the dealt card pile, if Narrator was running, Narrator didn't follow that change in keyboard focus. Instead Narrator was left at the target pile, and so we decided to stick with the original design of having keyboard focus left at the target pile. We'll revisit this later if moving keyboard focus to the dealt card pile does seem like the most helpful thing to do in this situation.

 

Keyboard via access keys

While it's essential to provide an intuitive keyboard experience like that described above, some people might find an alternative method of using the keyboard more efficient. And that method is: access keys! With the Windows 10 Fall Creators update, it's absolutely trivial for a dev to add access keys to an app like Sa11ytaire, and so provide a very efficient experience for people who leverage those access keys.

To add an access key involving a single character to a control, add AccessKey="<some character>" in the XAML. For the Button and ToggleButtons in the Sa11ytaire app, use of the access keys results in those controls' Click and Checked event handlers being called. For each of the ListViews containing the dealt cards, we added the AccessKey value and also an AccessKeyInvoked event handler. When that handler was called, we took action to select the last item in the associated list. (For lists showing multiple face-up cards, this might be followed by presses of the Up Arrow key to reach another card in the list.)
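As a concrete illustration, here's a hedged sketch of what the AccessKeyInvoked handler for one of the dealt card piles might look like. It assumes the pile's ListView is named CardPile2 and declares AccessKey="2" and AccessKeyInvoked="CardPile2_AccessKeyInvoked" in its XAML; those names are illustrative rather than the app's actual code.

// A hedged sketch: select the last item in a dealt card pile when its access
// key is used. CardPile2 is a hypothetical name for the pile's ListView.
private void CardPile2_AccessKeyInvoked(UIElement sender, AccessKeyInvokedEventArgs args)
{
    if (CardPile2.Items.Count > 0)
    {
        // Select the bottom-most card in the pile, as described above.
        CardPile2.SelectedIndex = CardPile2.Items.Count - 1;
    }

    args.Handled = true;
}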

With those few changes, a press and release of the Alt key will show all the available access keys. I think the access key experience is really slick, given that the Alt key can be kept depressed throughout the action to move a card. So to move a card from dealt pile 2 to dealt pile 6, press Alt+2+6. Or to move the upturned card to the Hearts target pile, press Alt+U+H.

I've said it before, and I'll say it again: Few things in life have a better dev effort-to-customer benefit ratio than access keys.

 

Figure 2: Access keys shown in the app in response to a press of the Alt key.

 

Windows Speech Recognition

In order to deliver a great experience with Windows Speech Recognition (WSR), it's important to know the programmatic representation of the app, as exposed through the UI Automation (UIA) API. Many standard controls that come with the UWP XAML framework are programmatically accessible by default. For example, say you add a Button control, set its content to some localized string like "Save", and add a Click handler. A UIA client app like the Narrator screen reader will say "Save, button" when it encounters the Button, and can programmatically invoke the Button through the UIA API. All that programmatic accessibility is provided by default through the use of a standard control.

In the Sa11ytaire app, the UI on the interactable elements has been customized to present specific visuals. For example, the dealt cards show a suit symbol in addition to a text string which specifies the value and suit of the card. And an empty target pile shows nothing but the relevant suit symbol. So unless I take action to examine the programmatic representation of the app, I can't really be sure how my customization might have affected it. To learn what that representation is, I'll point the Windows SDK Inspect tool at the app.

 

Note: While I'd never assume this, it might be possible that I could technically play the game with WSR, without any specific action on my part around the programmatic representation of the app. For example, I expect I could use the WSR "Show Numbers" feature to have a unique number shown over every interactable element, and then say that number to interact with the element. But that's not at all the experience that I want people to have to rely on. For example, if I want to select the 2 of Hearts card, I don't want to have to say something like "Show numbers, 12". Rather I want to say "2 of hearts", because that provides an intuitive and efficient experience. And "efficiency" is another essential ingredient in the recipe for a great experience.

 

The UIA Name exposed by the various UI elements is managed in different ways in the app depending on the element. The "Next card" Button-based element takes no specific action at all. For that element, the UWP XAML framework simply leverages the element's Content property as the UIA Name. So whether it's visually showing "Next card" or "Turn over cards", that string becomes the UIA Name property.

 

Note: The current version of the app is not localized. If the visual strings are localized at some point, then action would be taken to verify that all the UIA Name properties of the elements in the app are also localized. A UIA Name property must be accurate, concise, unambiguous, and by default localized. (I say "by default", given that such things as brand names would often not be localized in either the visual or programmatic representation.)

 

For the ToggleButton-based elements, there are calls to AutomationProperties.SetName() in code-behind to have the appropriate UIA Name exposed for those elements. While this works fine, at some point I expect we'll update the app to have the AutomationProperties.Name for those elements bound in the XAML to some property on the element.
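For reference, the code-behind call itself is a one-liner like the following; the element name and string are hypothetical examples rather than the app's exact values.

// A hedged example of setting the UIA Name from code-behind, using the
// AutomationProperties class in Windows.UI.Xaml.Automation. TargetPileH is a
// hypothetical name for the Hearts target pile ToggleButton, and the string is an
// example of a name that both WSR and Narrator can work with.
AutomationProperties.SetName(TargetPileH, "3 of Hearts");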

For the dealt cards in the ListViews, the AutomationProperties.Name is bound to a custom property on the outermost Grid in the DataTemplate defining the card's UI. That property can return three forms for the UIA Name, depending on the situation. These forms are either the visible text string (such as "2 of Hearts"), or a string containing "Empty" (when all cards have been moved out of the list), or a string containing "Face-down" (because a face-down card's value is not to be exposed either visually or programmatically). In the case of the "Face-down" values, they have the list index appended, to ensure that any such UIA Name is unique relative to its siblings.

 

The dealt card's XAML for that bound Name property is as follows:

AutomationProperties.Name="{x:Bind Name, Mode=OneWay}"

 

And the code-behind for the associated custom class is below. This class is set to be the x:DataType in the DataTemplate which is set as the ItemTemplate for each dealt card pile ListView.

 

public string Name
{
    get
    {
        string name;

        if (this.FaceDown)
        {
            // A face-down card's value is not exposed, so return a "Face-down"
            // string, with the list index appended to keep the Name unique.
            name = "Face-down " + this.InitialIndex;
        }
        else if (this.card.Rank != 0)
        {
            // A face-up card returns its visible text, for example "2 of Hearts".
            name = this.card.ToString();
        }
        else
        {
            // The empty slot where a king can be placed.
            name = "Empty " + this.ListIndex;
        }

        return name;
    }
}

 

 

Having done the above, the Inspect SDK tool shows me that all the interactable elements in the app have appropriate UIA Name properties.

 

Figure 3: The Inspect SDK tool reporting the UIA Name properties on the elements in the Sa11ytaire app.

 

And with those UIA Names verified, I can then use Windows Speech Recognition to efficiently play the game.

Figure 4: Using WSR to play the Sa11ytaire app, with the WSR UI showing "4 of Diamonds" after the 3 of Clubs card has been moved to the 4 of Diamonds card.

 

By the way, the screenshot of the Inspect SDK tool above shows that each card in the dealt card list is exposed through the Control view of the UIA tree as a single UIA list item element. (The Control view should contain only elements of interest to the person playing the game.) So the next question is, given that the outermost Grid in the card's DataTemplate contains multiple TextBlocks, how come those TextBlocks aren't also exposed through that UIA Control view? The answer is that those TextBlocks have the following set on them in XAML:

AutomationProperties.AccessibilityView="Raw"

By setting an AccessibilityView of "Raw", we're saying that the element should only be exposed through the Raw view of the UIA tree, and not the Control view. That means that while the element is still being exposed through UIA, it's being done in such a way that screen readers are being told that the element is not of interest to the person playing the game. It's reasonable to do that in this case, because all the information that the person playing the game needs is accessible through the containing list item element. Being able to navigate with the screen reader to the contained TextBlocks probably wouldn't serve any useful purpose.

 

Eye Control

The Windows 10 Fall Creators Update shipped with built-in support for eye control, so this is perfect timing for the Sa11ytaire app. I'd not used eye control before, so I borrowed an eye tracking device and tried it out. To my delight, the Sa11ytaire app was usable via eye control by default, without any changes being required at all to the app.

Once again, use of standard controls in the app had led to the app being accessible by default for customers using a particular input method.

 

Figure 5: The Windows Eye Control UI showing over the Sa11ytaire app, and being used to move a 10 of Clubs over to a Jack of Hearts.

 

I have to say, my classic feeling of being rushed at all times did trip me up when first trying out eye control. When the eye control UI appeared, I'd never seen it before, and I went straight to the left mouse button icon at the top left of the UI, given that that was the equivalent of where I'd go to click the left mouse button on a physical mouse. Having done that, I could interact as expected (or so I thought) with the "Next card" button, and the cards in the dealt card lists. I could not, however, interact as expected with a card in the upturned cards pile. It turned out that I was using eye control to trigger a left mouse button double-click rather than a single-click. While a double-click led to the action I expected in some places in the app, it left elements based on ToggleButtons in their original state. What I should have been doing is going to the bottom left corner of the eye control UI to trigger a left mouse button single-click. Once I did that, everything worked as expected. So I'd recommend that you spend a little more time than I did with the eye control feature before wondering if all is not well with ToggleButtons.

 

Switch device control

The bulk of the input action taken in the Sa11ytaire app is to first select a "source" element, and then select a "destination" element. And that's it. And so it should be relatively straightforward to update the app to have it usable with a single-switch device as the input method. A press of the switch would start a highlight moving through all the interactable elements, and when it reaches an element of interest, another switch press would trigger the appropriate action at the element. That action might be an invoke, toggle or select, depending on the element. Repeating that process with a different element would complete the action required to move a source element to a destination element.

So the Sa11ytaire app was updated to have limited support for a single-switch device. A press of F10 will toggle switch device control mode. The app assumes today that a press of the switch device leads to the app receiving a Space key press, but can be updated to handle different types of input in the future based on feedback. The current implementation means that switch device control can be simulated without a switch device, simply by pressing the Space key.
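A hedged sketch of that key handling is below. The field and method names are hypothetical, and the real logic for moving the highlight and triggering actions is more involved than shown here.

// A hedged sketch of the F10 / Space handling described above, assumed to live in
// the page's key event handler. switchControlOn and
// MoveSwitchHighlightOrTriggerAction() are hypothetical names.
private bool switchControlOn;

private void HandleSwitchControlKeys(KeyRoutedEventArgs e)
{
    if (e.Key == VirtualKey.F10)
    {
        // Toggle switch device control mode.
        switchControlOn = !switchControlOn;
        e.Handled = true;
    }
    else if (switchControlOn && (e.Key == VirtualKey.Space))
    {
        // The app assumes a press of the switch device arrives as a Space key press.
        MoveSwitchHighlightOrTriggerAction();
        e.Handled = true;
    }
}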

The app does support different speeds of highlight while in switch control mode, and that's configurable through the AppBar.

One approach that we could take here is to create a timer, and on every timer tick, move keyboard focus to the next interactable element in the app. That would be relatively straightforward to do with the FocusManager. Then, once keyboard focus is on the element of interest, have the next switch press trigger some action at the element with keyboard focus. While that would technically work, again we wanted to consider how we could make the experience more efficient. So we built the feature such that the highlight first moves between the three main areas of the remaining cards, the target piles, and then the dealt card lists. Once the highlight is on an area of interest, a press of the switch device starts highlighting the elements within that area. And then depending on the area, a subsequent switch device press will trigger action on an element. I say "depending on the area", because a dealt card list with multiple face-up cards also needs to scan through the face-up cards, before having a switch device press select one of those cards.

As part of providing this more efficient experience of highlighting the three main areas, we needed to consider options for how that might be implemented. If those areas were implemented as keyboard focusable elements, then we could just move keyboard focus to them. But they intentionally weren't keyboard focusable, given that they're not interactable when using the keyboard as the input device. So instead, when we want the switch device control highlight to show at one of those areas, we update a property of the related element, resulting in a visual indication of what will react to a switch device press at that time. Depending on the element, the properties updated are either the BorderBrush and BorderThickness, or the Background.
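To make that concrete, here's a hedged sketch of applying the highlight to one of those areas. The Border name is hypothetical; the brush resource is the one shown later in the app's "HighContrast" ResourceDictionary, so the highlight color follows the active theme.

// A hedged sketch of showing or hiding the switch device control highlight on one
// of the three main areas by updating its border. areaBorder would be a Border
// such as a hypothetical RemainingCardsBorder, and the CardScannedBorderBrush
// resource resolves to a color appropriate for the active theme.
private void ShowSwitchHighlight(Border areaBorder, bool highlight)
{
    areaBorder.BorderBrush = highlight ?
        (Application.Current.Resources["CardScannedBorderBrush"] as SolidColorBrush) :
        new SolidColorBrush(Colors.Transparent);

    areaBorder.BorderThickness = new Thickness(highlight ? 4 : 0);
}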

 

Important: With this change, we've now entered the world of seriously providing custom visuals, and that means we need to consider how this relates to everyone who uses the app. For example, if a high contrast theme is active, the custom highlight visuals had better use an appropriate color from that active theme. And how will the Narrator screen reader let people who are blind know what's going to react to a switch device press? We'll discuss those topics later.

 

It's recognized that the switch device control feature in the app is far from complete. The switch device control highlight does not move to the AppBar today, and so we added a "Restart game" button which is shown when switch device control is on, and that button is included in the highlight path through the app. In order to reduce the chances of the app being restarted unintentionally, a confirmation dialog appears when that Restart game button is invoked.

 

Note: Any dialog that appears in the app must be controllable through a switch device, when switch device control is on. In order to achieve this, a timer is started when the dialog appears, and on each timer tick, keyboard focus moves to the "next" control in the dialog. A press of the Space key invokes the button with keyboard focus by default, because when the dialog's up, the app's key event handler isn't called. The first attempt at implementing this had the timer tick handler calling the FocusManager's TryMoveFocus(), and that seemed to work ok. But I then noticed that sometimes the visual feedback for keyboard focus disappeared while the dialog was up. I suspect keyboard focus was still moving, but the visual feedback had gone. Perhaps this is related to how when keyboard focus is programmatically moved, visual feedback for the focus may or may not appear depending on the situation. So I replaced the use of TryMoveFocus() with a call to FocusManager.FindNextFocusableElement() to get the "next" button on the dialog, and then a call to Focus() off that button, specifically passing in FocusState.Keyboard. With that change, I've yet to see the visual feedback for keyboard focus disappear while keyboard focus moves between the dialog's buttons.
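For reference, a hedged sketch of what that timer tick handler might look like is below; the handler name is hypothetical.

// A hedged sketch of the timer tick handler described above, using
// FindNextFocusableElement() rather than TryMoveFocus(). FocusManager and
// FocusNavigationDirection are in Windows.UI.Xaml.Input.
private void DialogFocusTimer_Tick(object sender, object e)
{
    // Find the "next" focusable button on the dialog...
    var nextElement = FocusManager.FindNextFocusableElement(FocusNavigationDirection.Next) as Control;
    if (nextElement != null)
    {
        // ...and move keyboard focus to it, passing FocusState.Keyboard so that the
        // visual feedback for keyboard focus is shown.
        nextElement.Focus(FocusState.Keyboard);
    }
}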

 

So while the switch device control feature is in its early days at the moment, I am hoping it's sufficient to generate feedback on how the switch device control can be made more efficient for this app.

 

Figure 6: Switch device control highlighting moving through the face-up cards in the last dealt card list.

 

And here's a note on another interesting point that cropped up during the development of the switch device control feature. Originally, I'd used the term "scan" to refer to the switch-related functionality, as that seemed fair enough for the highlight as it cycles through the various parts of the app's UI. But then I was asked, if switch device control was turned on and the app said "Scan mode on", what would that mean to someone who's also using Narrator, which has its own feature called "Scan mode"? This was a great point. While ultimately I want as many input and output methods as possible to be usable concurrently, there mustn't be any ambiguity around the terminology. So when referring to the switch device control feature in the app's UI, I changed the terminology from "scan" to "switch control". A work item still outstanding for me is to replace all the uses of "scan" in the source code too. There must be no ambiguity for people playing the game, or for devs working in the code.

 

Default visuals

The default visuals for the Sa11ytaire app are intended to present something similar to a typical experience for a card game. So the face-up cards have text and symbols shown in either black or red on a white background, and everything's on top of a green app background. Today the app visuals are not sophisticated at all. The visuals are functional, but not elegant, and that's fine for the app at this stage. The intention at the moment is to explore various input and output methods, not to spend a lot of time on delivering more attractive visuals.

 

Important: While it is recognized that the app's visuals are not elegant, this is in no way meant to imply that an accessible app doesn't need beautiful visuals. A great app delivers a delightful experience for all people, including those who can consume beautiful visuals. We mustn't build an app with an attitude that the app will be either accessible or visually beautiful, but not both. The rather utilitarian visuals presented by the app today are just another example of the app's current constraints, and not so limiting as to prevent people from providing feedback.

 

There are two very important considerations around the default visuals, both of which apply to any app presenting UI.

 

1. No important information should be conveyed through color alone.

The colors shown in the app enable some people to determine at a glance whether a card is a club/spade or a diamond/heart, and that can help them play the game more efficiently. But for other people who don't leverage the colors shown on the card, the game can still be played. The symbol used to represent a card's suit, and the text shown on the card, convey a superset of the information conveyed through the use of color.

When considering the use of color in an app, it can be interesting to go to the "Color filters" setting in the Settings app in the Windows 10 Fall Creators update. By examining an app's UI while different color filters are active, you can consider whether the efficient usability of the app has been impacted by the filter.

 

Figure 7: The Sa11ytaire app presented when the Grayscale color filter is active.

 

2. Contrast of text against its background.

When I encounter text in an app, I want the text to jump out at me. I don't want to have to lean in and take time to figure out what the text says. Light blue text on a slightly darker blue background might look cool to some people, but not to me. The more time I have to spend trying to consume important text in an app, the less I want to use that app.

So when creating the visuals, I ran a color contrast analyzer tool to check the contrast of the text shown in the app against its background. As far as I was concerned, if the text had a contrast of at least 4.5:1 against its background (which is the minimum default contrast required for business apps that I work with), then I'd consider that sufficient for the Sa11ytaire app. While I knew black text on a white background would be fine, what about the black text on a blue, green or gray background, or red text on a white background? As it happened, the color contrast analyzer reported that in the app's default state, all the text met the minimum color contrast requirement.

 

However, things are not ok! I've verified the color contrast for the text on elements in their default state, but what about other states? For example, what happens if the background behind text gets darker on mouse hover? Sure enough, on mouse hover over some elements in the app, the background of the elements darkens, the color contrast of the text against the background becomes 2.1:1, and the text is almost illegible. Someone might say, "So what? You knew what the element said before you moved the mouse there". But really, that's not a delightful experience. A great app shows easy-to-read text regardless of the state of the element showing the text. And on a similar note, in the earlier Figure 6 screenshot, some switch device control highlight shows a gold border against white, with a color contrast of 1.4:1. That highlight might not be text, but in this case, the highlight used for switch device control should always stand out. So all in all, there are definitely some fixes required here, and I just haven't made them yet.

 

On another note, it's also worth considering what it would mean if important information was only conveyed through the use of a shape in the app. When the app starts, the four target piles show only the symbol for the suit associated with the target pile. If someone using the app found it a challenge to remember what each symbol represented, or found it a challenge to differentiate between the symbols, how would that impact the usability of the app? That's something I'd like to explore further, but this version of the app doesn't take any specific action to account for that.

 

Dark app mode

While discussing the Sa11ytaire app, a colleague who has low vision pointed out that she would prefer to use the app with the dark app mode Windows setting active. I'd never created an app with colors specific to dark app mode, so I updated the app.xaml file to add a specific set of colors in its "Dark" ResourceDictionary.

This app feature is not so much "far from complete", as it is "not really started". I threw some colors in, but have no experience around what colors might typically be most helpful here. (And in the screenshot below, some black text has extremely low contrast against its dark green background.) So I'm mentioning support for the dark app mode here, not because the app has something useful to show today, but because I consider it absolutely necessary for the app to provide useful support for the dark app mode at some point. I'm looking forward to discussing this more with my colleague, and if you know of a set of dark app mode colors that would work well for you, do let us know.

 

Figure 8: The Sa11ytaire app showing non-default colors when the Windows dark app mode setting is on.

 

Windows High Contrast themes

I mentioned above that I'd updated the app to contain "Dark" theme colors, in addition to the "Default" (or "Light" in this app's case) theme. My next step was to update the "HighContrast" ResourceDictionary in the app.xaml file to have the app show only appropriate system colors when a high contrast theme is active.

The following is the set of system colors chosen for the various elements in the app UI.

 

<ResourceDictionary x:Key="HighContrast">
    <SolidColorBrush x:Key="CardTableBackgroundBrush" Color="{ThemeResource SystemColorWindowColor}" />
    <SolidColorBrush x:Key="NextCardBackgroundBrush" Color="{ThemeResource SystemColorButtonFaceColor}" />
    <SolidColorBrush x:Key="NextCardEmptyBackgroundBrush" Color="{ThemeResource SystemColorButtonFaceColor}" />
    <SolidColorBrush x:Key="NextCardForegroundBrush" Color="{ThemeResource SystemColorButtonTextColor}" />
    <SolidColorBrush x:Key="NextCardEmptyForegroundBrush" Color="{ThemeResource SystemColorButtonTextColor}" />
    <SolidColorBrush x:Key="TargetPileCardBackgroundBrush" Color="{ThemeResource SystemColorButtonFaceColor}" />
    <SolidColorBrush x:Key="TargetPileEmptyBackgroundBrush" Color="{ThemeResource SystemColorButtonFaceColor}" />
    <SolidColorBrush x:Key="PlayingCardBackFaceUpBackgroundBrush" Color="{ThemeResource SystemColorButtonFaceColor}" />
    <SolidColorBrush x:Key="PlayingCardBackFaceDownBackgroundBrush" Color="{ThemeResource SystemColorButtonFaceColor}" />
    <SolidColorBrush x:Key="PlayingCardBackPlaceHolderBackgroundBrush" Color="{ThemeResource SystemColorButtonFaceColor}" />
    <SolidColorBrush x:Key="SuitBlackForegroundBrush" Color="{ThemeResource SystemColorButtonTextColor}" />
    <SolidColorBrush x:Key="SuitRedForegroundBrush" Color="{ThemeResource SystemColorButtonTextColor}" />
    <SolidColorBrush x:Key="CardScannedBorderBrush" Color="{ThemeResource SystemColorHighlightColor}" />
    <SolidColorBrush x:Key="CardBorderBrush" Color="{ThemeResource SystemColorButtonTextColor}" />
</ResourceDictionary>

 

Most of the above system colors are the colors associated with the text and background of buttons. Note that I don't care what those colors actually are. They could be white on black, yellow on black, black on white, or some other colors specified by the person playing the game. All I care about is that the app presents the colors that the person using the app expects.

Given that I don't know what colors will end up being shown in the app, I mustn't hard-code any color in the app. Accounting for that can sometimes be a challenge when you're designing an app, given that you may have many things you'd like to show using different colors, and only a relatively small set of system colors to choose from when a high contrast theme is active. In the case of Sa11ytaire, the choice of colors was pretty straightforward. For example, the color of the switch device control highlight is the SystemColorHighlightColor.

 

An interesting aspect of supporting system colors in the app relates to the fact that many elements might show colors specific to their current state. For example, the background of a card in a dealt card list is affected by whether the card's face-up or face-down, or in fact represents the empty slot where a king can be placed. In order to account for that, a number of ValueConverters are used when binding visual properties to custom properties on the related classes. The example below is the converter for converting the card's "State" custom property to the UI element's Background property. I don't know if that's the recommended way to do all this, but it seemed to work fine, and accounted for whether a high contrast theme is active or not.

 

public class IsCardStateToCardBackgroundConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, string language)
    {
        CardState state = (CardState)value;

        SolidColorBrush backgroundBrush;

        switch (state)
        {
            case CardState.FaceDown:
                backgroundBrush = Application.Current.Resources["PlayingCardBackFaceDownBackgroundBrush"] as SolidColorBrush;
                break;

            case CardState.KingPlaceHolder:
                backgroundBrush = Application.Current.Resources["PlayingCardBackPlaceHolderBackgroundBrush"] as SolidColorBrush;
                break;

            default:
                backgroundBrush = Application.Current.Resources["PlayingCardBackFaceUpBackgroundBrush"] as SolidColorBrush;
                break;
        }

        return backgroundBrush;
    }

    public object ConvertBack(object value, Type targetType, object parameter, string language)
    {
        throw new NotImplementedException();
    }
}

 

One thing that I'd like to do, but have not yet found a trivial way of doing, is to have the entire app's UI update in response to a change in the state of high contrast while the app's running. It's easy to add an event handler to react to a change in the state of high contrast, but in response to that, I want to take some simple "Refresh all app UI to account for the new state of high contrast" action. As things stand today, the app has to be restarted after a change in the state of high contrast. That might not seem like a big deal, but say someone using the app periodically finds it a challenge to use the default visuals while the app's running, and wants to temporarily move to high contrast colors in order to continue their current game. So they issue the Windows keyboard shortcut for turning on high contrast, and expect the app to react accordingly. Well, not today it won't.
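For reference, here's a hedged sketch of listening for that change with the AccessibilitySettings API from Windows.UI.ViewManagement. The "refresh all app UI" step is exactly the part that isn't trivial, so it's only a placeholder comment here.

// A hedged sketch of reacting to a change in the state of high contrast while the
// app is running. Depending on the thread the event arrives on, updating UI may
// need to be marshalled through the Dispatcher.
private readonly AccessibilitySettings accessibilitySettings = new AccessibilitySettings();

private void ListenForHighContrastChanges()
{
    this.accessibilitySettings.HighContrastChanged += (sender, args) =>
    {
        bool highContrastIsOn = sender.HighContrast;
        string themeName = sender.HighContrastScheme; // For example, "High Contrast Black".

        // Refresh all app UI to account for the new state of high contrast.
    };
}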

 

Note: The app does make use of an interesting UWP XAML "HighContrastAdjustment" feature which I've not used before. Traditionally, many apps have defined cool default visuals, and unfortunately not considered the experience for people who use high contrast themes. That meant that in the past, the default colors could continue to be shown when a high contrast theme was active, and the app became unusable. Because of this, the UWP XAML framework now tries to reduce the chances of this sort of broken experience reaching people using the app. UWP XAML can now automatically render elements using colors based on the active high contrast theme, even if the app wasn't built to explicitly request that. For example, static text will be rendered using the system color for static text, and text on buttons will be rendered using the system color for text on interactable controls. While this can significantly improve the experience for people using high contrast themes, the Sa11ytaire app fully manages the colors to be used when a high contrast theme is active. And the app has a better understanding of exactly which color should be used in cases such as a ListViewItem containing a TextBlock. As such, the app politely declines the offer of the UWP XAML framework's help here, and sets the HighContrastAdjustment property of the app's main Page to be "None".

 

Figure 9: The Sa11ytaire app with switch device control on, showing system colors from the active high contrast theme for button text and background, disabled controls, and highlighted UI.

 

Windows Display Scaling

This is another situation where no real work has yet been done in the app, but work needs to be done. The display scaling feature in the Windows Settings app is incredibly important. Increasing the size of everything shown in the app can help deliver an easier-to-use experience for many people, and for some, an experience that's usable at all. But that assumes that an app accounts for the space it has available to show stuff. The screenshot below shows the app running at a display scaling of 300%, instead of the 200% that's the default value for the device that I'm using now. And the results are a mess. The text and suit symbols shown on the cards are truncated. The app today does change the width of the dealt card lists to account for the width of the app, but it doesn't take action to resize or otherwise adjust the contents of the cards.

And this raises the same question that most apps have when considering what to do when there's just not enough space to present the information that the app wants to present. For example, should the app reduce the amount of information shown, by adding ellipses on truncated lines of text? Or perhaps the information is so important that it must be shown in its entirety, and so text will wrap to lie across multiple lines. All the Sa11ytaire app does today is put the entire set of dealt card lists inside a ScrollViewer. By doing that, at least all cards can be reached, even if not all the cards can fit on the screen at the same time. While this might make the app technically usable at the display scaling of 300% on my device, there's no way I would claim this is a good experience.

So this is another interesting topic for a later release.

 

Figure 10: The Sa11ytaire app showing truncated suit symbols and truncated text when a higher display scaling is active.

 

Windows Magnifier

The app worked with Windows Magnifier by default, due to the use of standard controls in the app.

Perhaps the most interesting aspect of Windows Magnifier's interaction with the app relates to bringing an element into view when it receives keyboard focus. Since all interactable elements in the app are based on standard controls, when keyboard focus moves to an element, UWP XAML raises a UIA FocusChanged event on behalf of the app. UIA clients like Windows Magnifier, (and the Narrator screen reader,) can then react to those events, and learn of the bounding rectangle associated with the element that has gained keyboard focus. And once Magnifier knows that, it can change its magnified view to present the element with keyboard focus on the screen.

 

Figure 11: A magnified view of a portion of the Sa11ytaire app, with Windows Magnifier in its fullscreen mode, and at a magnification level of 300%.

 

Narrator screen reader

The Narrator screen reader that comes with Windows is a UIA client app. Narrator is absolutely dependent on the programmatic representation of the app, as exposed through the UIA API. Because all the UI in the Sa11ytaire app is based on standard controls, the app has a great head start on supporting the Narrator experience. For example, as keyboard focus moves around the app, Narrator can respond to the UIA FocusChanged events that are automatically raised, and can announce details of which element has gained keyboard focus.

 

Important: The app uses event handlers on its controls, which are standard for the type of control. For example, it has a Click event handler on a Button, a Checked event handler on a ToggleButton, and a SelectionChanged event handler on a ListView. Because the app uses standard event handlers on the controls, the UWP XAML framework automatically makes the same functionality programmatically accessible through UIA. For example, when using Narrator on a touch device, Narrator's double-tap gesture can be used on the "Next card" element to invoke that button. And the double-tap gesture can be used to select a card to be moved, and then to specify where it should move to. If an app uses its own custom pointer event handlers to trigger action, then it's quite possible that the handlers won't work with Narrator on a touch device. Instead, it's likely that Narrator will eat the original pointer events while determining if a Narrator gesture has been issued, and those original pointer events won't make it through to the app.

 

While some support for Narrator will be available by default, it's important to verify that the app's UIA representation is a great match for the meaning of the UI. The Windows Speech Recognition section earlier mentioned how all elements should have an accurate, concise and localized UIA Name property. It's also very important for the Narrator experience to consider the LocalizedControlType property of the elements. For standard controls, UIA will automatically provide an appropriate LocalizedControlType, based on the element's UIA ControlType property. For example, a Button and ToggleButton both expose a LocalizedControlType of "button". But is that string the most helpful to people playing the Sa11ytaire game? To start exploring this question, the LocalizedControlTypes for some elements have been customized. For example, using strings such as "Remaining card pile", "Top upturned card", and "Target card pile".
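For example, one way to do that in UWP XAML is through the AutomationProperties.LocalizedControlType attached property; the snippet below is a hedged sketch, and the element name is hypothetical.

// A hedged example of customizing the UIA LocalizedControlType exposed by an
// element. UpturnedCardButton is a hypothetical name for the ToggleButton showing
// the top upturned card; the same value could be set in XAML with
// AutomationProperties.LocalizedControlType="Top upturned card".
AutomationProperties.SetLocalizedControlType(UpturnedCardButton, "Top upturned card");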

 

Note: When first working on the app, the UIA representation of some elements was such that the default LocalizedControlType was exposed, custom friendly strings such as "Top upturned card" were set to be the UIA Name of the element, and the value of the card, (for example, "5 of Hearts"), was set as a custom Value property. (Support for the UIA Value pattern can be added to a Button or ToggleButton through use of a custom AutomationPeer.) Technically this worked, but it didn't lead to a great experience when using Windows Speech Recognition (WSR). When using WSR, it should be possible to speak the UIA Name of an element and immediately interact with the element. When using WSR, it's natural to speak (say) "5 of Hearts" to interact with a card which visually shows "5 of Hearts". In order to enable that, the UIA Name property must match the value and suit of the card. So this was an interesting case where we needed to consider what single UIA representation can provide the best experience for a variety of UIA client apps.

 

Another interesting consideration here related to the UIA Help property associated with the elements. When I first started experimenting with the UIA representation, I was fairly liberal with adding Help properties which described the purpose of some elements. But now, I actually think all that information will become pretty irritating once the person using the app is familiar with the game, and so I expect I'll remove some of it. Having said that, it's critical that we provide some intuitive way to explain how to use the app. One colleague pointed out that at a minimum, some help content needs to be available, and must include the basics of the rules, and what keyboard shortcuts are available. And given that the color of the suit of the cards affects what moves are valid in the game, the help content also needs to describe which suits are black and which are red. It was also suggested that the UIA representation be updated to include the suit color, for people who are new to the game. I have to say, that's a really interesting idea. Perhaps we could add an option which would have the color included in the Narrator announcement. For example, "4 of clubs, black" or "King of Hearts, red".

 

After accounting for the above considerations, I expect the app would technically be accessible to someone using Narrator. So the next step is to consider how we can do better than that. What can we do to deliver a really efficient experience? And when considering this, the Windows 10 Fall Creators Update feature that really got our attention is the UWP XAML Notifications feature.

 

UWP XAML Notifications

By using this feature, a UWP XAML app can call AutomationPeer.RaiseNotificationEvent() to raise an event which requests that Narrator speak some arbitrary string supplied by the app. While this may seem a quite attractive thing for an app to want to do, inappropriate use of the event could quickly result in a grim experience for the person using the app. The person does not want to be bombarded with unnecessary announcements which could be at best irritating, or worse, impact their ability to complete their tasks. But due to the nature of the Sa11ytaire UI, it did seem at least worth considering whether the Notification event could help deliver a more efficient experience.

Another important aspect of the Notification event is that the app can specify how the event should be handled by a UIA client like Narrator, relative to other UIA events being raised by the app around the same time. Traditionally an app has had no control over that. For example, traditionally if an app raised a LiveRegionChanged event to make Narrator aware of a change in the app's UI, and Narrator received both that event and a FocusChanged event around that same time, it might be impossible in practice to get Narrator to make the announcement that the app intended. The new Notification event is designed to provide much more control to the app over that sort of experience.
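As a hedged sketch of what raising such an event can look like, the snippet below uses FrameworkElementAutomationPeer from Windows.UI.Xaml.Automation.Peers. The element name, displayString and activityId are illustrative rather than the app's actual values.

// A hedged sketch of raising a UIA Notification event from a UWP XAML app.
// CardPile3 is a hypothetical element name.
var peer = FrameworkElementAutomationPeer.FromElement(CardPile3);
if (peer != null)
{
    peer.RaiseNotificationEvent(
        AutomationNotificationKind.ActionCompleted,
        // Ask that this announcement be made, keeping only the most recent
        // notification with this activityId if several are pending.
        AutomationNotificationProcessing.ImportantMostRecent,
        "Revealed 6 of Spades in dealt card pile 3",
        "CardRevealed");
}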

 

The exploration into leveraging the Notification event included the five cases listed below.

1. Announcing the current state of the three main areas of the app. (These announcements are triggered with presses of the F2, F3 and F4 keys.) For example, if keyboard focus is on a card in a dealt card list, it would be tiresome to have to tab over to the upturned card pile or target pile just to get a reminder of what cards are shown in those areas. So a press of F2 or F3 will announce the state of the remaining cards area or the target card piles respectively.

The following text is an example of what Narrator would announce in response to a press of F4, when the dealt card piles contain the cards shown in Figure 12.

"Pile 1: 7 of Diamonds to 10 of Spades, Pile 2: Queen of Hearts, Pile 3: 2 of Clubs to 3 of Diamonds, 2 cards face-down, Pile 4: Jack of Spades to Queen of Diamonds, 3 cards face-down, Pile 5: 4 of Spades, Pile 6: 2 of Hearts to 5 of Spades, 4 cards face-down, Pile 7: 3 of Hearts, 4 cards face-down,"

 

Figure 12: The Sa11ytaire app showing the dealt card lists with a mix of face-up and face-down cards.

 

2. Announcing the three cards shown in response to invoking the "Next card" button. For example: "Jack of Hearts, 5 of Clubs, 8 of Hearts on top".

3. Announcing what face-down card in a dealt card list became face-up as a card above it was moved away. For example: "Revealed 6 of Spades in dealt card pile 3". Note that this announcement would be made in such a way that Narrator would fully announce both it and the details of where keyboard focus moved to, as the card with focus moved to another dealt card list.

4. Announcing a hint on what moves are available. Overall the app's pretty much at a stage where the app can technically be played, and that's it. But Tim added an interesting feature that goes beyond that, and which actually offers some help for people unfamiliar with the game. The optional "Enable automatic hints" feature uses the Notification event to have Narrator automatically announce details of what moves are available between the dealt card piles, after a card has been moved. Given that the related announcement will be made around the same time as the "Revealed" announcement mentioned above, and also when keyboard focus moves, the Notification event would be raised in such a way that all these announcements would be completed by Narrator.

5. Announcing the switch device control highlight moving from one element to another. Given that this highlighting can be independent of an element gaining keyboard focus, by default Narrator would not react to the highlight moving. By using the Notification event, the person using the app can be made aware of the change in highlighting. This is another situation where app control of how the announcements should be managed is so important. The switch device control highlighting may be moving faster than Narrator can complete the related announcements. In that case, Narrator must interrupt itself to begin the next highlight-related announcement. To have the Narrator announcement lag behind where the switch device control highlight actually is, would result in the app being unusable.

Note: Tim also suggested that action should be taken by the app to have Narrator make an announcement when the state of switch device control is toggled. This is particularly important given that both the app and Visual Studio react to F10. If you're constantly switching between the app and the debugger, and a press of F10 only results in a Narrator announcement of "F10", it's not obvious at that moment whether the state of switch device control in the app has changed, or you've moved to the next line in the debugger.

 

Important: After experimenting with the Notification events for a while, we've not been able to achieve a consistent Narrator experience in the Windows 10 Fall Creators Update. That is, even when raising some events in such a way that we request that the related announcement must be made, sometimes it seems that the announcement is not made. However, it seems that the experience is more consistent with the latest Insider Builds of Windows 10. As such, we are not considering the Notification events to be of interest in practice to the Sa11ytaire app experience until the next update of Windows 10. That said, if at all possible, we would like the use of Notification events to be a key part of delivering an efficient Sa11ytaire experience with Narrator, as soon as the next update of Windows 10 is available. In preparation for that, we'll continue to react to feedback on how these events might help to deliver the best possible experience.

 

A few more thoughts

At some point we'll add a Settings page to the app, but for now, all configurable settings are in the AppBar. The AppBar also includes a couple of buttons to launch the help and to restart the game. We set those buttons' AutomationProperties.AcceleratorKey properties to "F1" and "F5" respectively. By doing that, their UIA AcceleratorKey properties are exposed with those values, and Narrator can make the person playing the game aware of these handy keyboard shortcuts.
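A hedged example of doing that from code-behind is below; the button name is hypothetical, and the same value can be set directly in XAML.

// A hedged example of exposing a keyboard shortcut through the UIA AcceleratorKey
// property. HelpButton is a hypothetical name for the AppBar button that launches
// the help; in XAML this would be AutomationProperties.AcceleratorKey="F1".
AutomationProperties.SetAcceleratorKey(HelpButton, "F1");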

The ComboBox for selecting the switch device control highlight speed is available in the AppBar, and it is recognized that that UI itself is not accessible with the switch device. For the app to be really considered accessible to someone using a switch device, all its UI, including all settings, must be accessible. This is something we can explore further as we get feedback on this topic. (And as part of this, we can add more configurable settings relating to switch device use.)

By the way, the ComboBox uses AutomationProperties.LabeledBy to reference the TextBlock that provides the visual label for the ComboBox. By doing this, the UIA Name property of the ComboBox is set to the TextBlock's text string. The XAML for this is shown below:

 

<TextBlock x:Name="SwitchScanSpeedLabel" x:Uid="SwitchScanSpeedLabel" VerticalAlignment="Center" Margin="10 0 10 0" />
<ComboBox x:Name="SwitchScanSpeedComboBox" AutomationProperties.LabeledBy="{Binding ElementName=SwitchScanSpeedLabel}" Width="100" VerticalAlignment="Center" Margin="10 0 10 0" />

 

 

We do currently have a couple of settings relating to the Notification events raised by the app through RaiseNotificationEvent(). These settings were helpful during development of the app, particularly when Narrator wasn't running. The "Show most recent notification" setting adds the latest notification string to a read-only TextBox near the bottom of the app, and the "Notification audio" setting sends the notification string to the default text-to-speech (TTS) engine on the device.
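Here's a hedged sketch of what sending a notification string to the default TTS engine can look like, using the SpeechSynthesizer from Windows.Media.SpeechSynthesis. MediaElementForSpeech is a hypothetical MediaElement assumed to be declared in the page's XAML.

// A hedged sketch of speaking a notification string through the device's default
// text-to-speech engine, in the spirit of the "Notification audio" setting.
private async Task SpeakNotificationAsync(string notification)
{
    using (var synthesizer = new SpeechSynthesizer())
    {
        // Generate an audio stream from the notification text, and play it through
        // a MediaElement declared in the page's XAML.
        SpeechSynthesisStream stream = await synthesizer.SynthesizeTextToStreamAsync(notification);
        MediaElementForSpeech.SetSource(stream, stream.ContentType);
        MediaElementForSpeech.Play();
    }
}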

Now, that "Notification audio" setting raises an interesting point. If it's on, then the TTS engine will output audio while a screen reader might be making announcements at the same time. That leads to a poor audio experience, and a couple of other settings in the AppBar can also be affected by this. There's the "Enable automatic hints" setting, which uses the Notification event to have Narrator automatically describe any available moves between the dealt card piles, after a move has been made. And while use of the Notification event allows control over how related screen reader announcements should mesh with other announcements from the screen reader, it can't account for strings being independently sent to a TTS engine.

And that brings us to the final setting in the AppBar, "Sound effects". That setting enables a success or failure audio notification following an attempt to move a card. This audio can be useful for cases where you attempt to move a card, but the card doesn't move. In that case, was the attempted move not valid, or did the target element not get toggled or selected as intended? But this audio is sent directly to the TTS engine, and so can effectively overlap with whatever Narrator's currently saying. So this is another opportunity for us to explore how an app might provide helpful success/failure audio feedback, regardless of whether a screen reader's being used.

And we also recognize that the "Sound effects" feature is inaccessible to someone who's deaf. For example, when an attempt is made to move a card to somewhere it can't be moved, there's no visual cue that the move was invalid. Given that we felt some feedback might be helpful in that situation, that feedback must be accessible to everyone playing the game.

All in all, I'm very excited about all the app can do today, and all the ways that people can interact with it, but there's a great deal still to explore. As the title of this post says, all we've done so far is set the scene.

 

Summary

The Sa11ytaire app has been a fascinating project to work on, with all the considerations around how the app can be built to be as accessible as possible by default, and then to take this further to deliver the most efficient experiences for all types of input and output methods that are available to a Windows Store app. The app is usable in many ways already thanks to the collaboration and feedback of my colleagues, and I really feel this is just the tip of the iceberg. I can't wait to learn how far we can take this.

And of course, if based on how you interact with your device, you feel that the app could be updated in some particular way to become more efficient for you, just let us know!

Guy

