
When is it appropriate to use the current processor number as an optimization hint?



Some time ago, on the topic of avoiding heap contention, I left an exercise that asked whether it would be appropriate to use the current processor number (as reported by GetCurrentProcessorNumber) to select which heap to use.



In this case, the answer is no.
While using the current processor would avoid contention at allocation,
it would make contention at deallocation even worse.



Suppose thread 1 is running on processor 1 and allocates some memory. It allocates it from heap 1. Later, thread 2 is running on processor 1 and allocates some memory. It also allocates it from heap 1.



Time passes,
and the two threads are now running simultaneously,
say one on processor 1 and another on processor 2.
They both go to free the memory, but since you have to free
the memory back to the heap from which it was allocated,
the fact that they are running on separate processors right
now is immaterial.
They both have to free the memory back to heap 1,
and that creates contention.



Okay, so what guidelines can we infer from this analysis?



If you are going to use the current processor as a hint
to avoid contention,
the entire scenario needs to be quick.
If the processor changes while your scenario is running,
then you will have contention if the new thread also tries
to perform that same processor-keyed operation.



In the case of memory allocation,
the memory is allocated, and then used for a while,
possibly a long time,
before finally being freed back to the heap from which
it was allocated.
Since the scenario is a very long one,
using the current processor number as a hint is
going to run into a lot of cases of accidental contention.



On the other hand, if you had a linked list of available
memory blocks,
then using the current processor may be helpful.
Keep a free list per processor.
When it's time to allocate a node, you consult the
free list for the current processor.
And when you want to free a node,
you free it back to the list associated with the
processor doing the free.



Unlinking a node from a linked list and pushing
a node to the front of a linked list are relatively
fast operations,
so the processor is unlikely to change out from
under you.



Of course, if you find that the free list is empty,
then you'll have to go create some new nodes.
Yes, this introduces the risk of contention,
but creating new nodes will be a comparatively slow operation,
so the hope is that the added risk of contention is not noticeable in practice.
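
To make the guideline concrete, here is a minimal sketch of that per-processor free-list idea. It is illustrative only (not code from the original exercise), and it assumes a fixed node type, a hard-coded upper bound on the processor count, and an ordinary lock per list; a production version might prefer interlocked singly-linked lists.

// Minimal sketch of a per-processor free list keyed off GetCurrentProcessorNumber.
// Assumptions (not from the original post): fixed node type, at most 64 processors,
// and a plain mutex per list. Contention on the mutex is rare because both the
// allocate and free operations are short.
#include <windows.h>
#include <mutex>

struct Node {
    Node* next;
    // payload fields would go here
};

struct PerProcessorFreeList {
    std::mutex lock;
    Node*      head = nullptr;
};

static PerProcessorFreeList g_freeLists[64];   // assumed upper bound on processors

Node* AllocateNode()
{
    auto& list = g_freeLists[GetCurrentProcessorNumber() % 64];
    {
        std::lock_guard<std::mutex> guard(list.lock);
        if (list.head != nullptr) {
            Node* node = list.head;
            list.head = node->next;
            return node;
        }
    }
    // Free list is empty: fall back to the general-purpose (slower) allocator.
    return new Node{};
}

void FreeNode(Node* node)
{
    // Push onto the list for the processor doing the free,
    // not the processor that originally allocated the node.
    auto& list = g_freeLists[GetCurrentProcessorNumber() % 64];
    std::lock_guard<std::mutex> guard(list.lock);
    node->next = list.head;
    list.head = node;
}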


Bonus chatter: In the discussion that followed the exercise, there was a pair of apparently contradictory claims:


  • The scheduler tries to keep load even across
    all processors.

  • The scheduler tries not to move threads
    between processors.



Both are right.



When a thread is ready to be scheduled, the
scheduler will try to put it back on the processor
that it had been running on most recently.
But if that processor is not available,
then the scheduler will move it to another processor.



In other words, the
"try not to move threads between processors" rule
is a final tie-breaker if the scheduler has
multiple processors available to it.
But if the thread becomes ready, and there is
only one available processor,
then the thread will run on that processor.
If the system has decided to shut down a processor
to conserve power,
then the thread will go to a processor that still
has power.


IP Restrictions on Azure App Service as expected behavior


There is no way to completely shut down the public endpoint of an App Service running in the public tenant.  However, you can create an ILB ASE (which is not in the public tenant) or you can restrict access using an IP Restriction.  Here is some information on the feature for setting this up in IIS and in the web.config file.  If you want to do the same on an Azure App Service, then check this out.

An interesting behavior I wanted to document concerns what happens when you have made some configuration in the portal for your App Service but also added some ‘supplemental’ configurations into the web.config file for the application running on the platform…

As seen in Figure 1, I created an IP Restriction that allows only a client with an IP address of 10.0.0.1 to access the App Service.  I knew this was not my client IP so it was good for testing.


Figure 1, IP Restrictions not applying, not working

Once applied, when I accessed my App, I received the following:

You do not have permission to view this directory or page.

This is because the default denyAction is set to Forbidden, which returns a 403.503.

But, if I modify my web.config file

<ipSecurity allowUnlisted="true" />

Then the setting in the portal is overridden by the configuration in my web.config file, and now I can access my App Service regardless of the portal setting.

A cool thing I found though was that instead of accepting the default denyAction I can configure that in my web.config so a different status is returned to the client.

<ipSecurity allowUnlisted="false" denyAction="NotFound" />

Adding that configuration to the web.config results in the following being returned, along with a 404.503

The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.

I also confirmed that having this configuration, i.e. the change to the denyAction, does not override the IPs allowed access; see Figure 2.  After adding my client IP address, I was able to access the site even after customizing the denyAction in the web.config.


Figure 2, IP Restrictions not applying, not working

This leads to a possible misunderstanding.  Suppose I have a configuration in my web.config file like the following, which allows my client IP address.

<ipSecurity allowUnlisted="false" denyAction="NotFound">
    <add ipAddress="###.220.196.###" />
</ipSecurity>

You may get a mixture of behaviors depending on the IPs entered via the portal and the IPs configured in the web.config.  The recommendation is to use the portal.  At the same time, my experience was that changing the denyAction had no negative impact.

TIP #1:

You can get the IP configuration you made in the portal via the Azure Resource Explorer here.  Navigate to the following URL:

https://management.azure.com/subscriptions/"1"/resourceGroups/"2"/providers/Microsoft.Web/
       sites/"3"/config?api-version=2016-08-01

Where 1 = subscriptionID, 2 = resource group name and 3 = the App Service name.  You would see similar output as seen in Figure 3.


Figure 3, viewing IP Restrictions configurations in resource explorer

TIP #2:

You can learn a lot about IIS and the ASP.NET pipeline via Failed Request Tracing.  I mention how to enable that here: “Enable Failed Request Tracing for an Azure App Service Web App”.  However, as seen in Figure 4, I was quickly able to find my client IP address and the status and sub-status codes, plus see that it was indeed the IpRestrictionModule which applied the restriction to the request.


Figure 4 IP Restrictions not applying, not working

Cool stuff.

Find the Session ID of an executing SQL Agent job


On a busy production server I wanted to check when a job started running, so I tried looking at the sysjobhistory table in msdb.

However, there are no entries for in-flight jobs (meaning no entries for jobs that haven't yet failed or succeeded).

The job history tables in msdb only show you failed or successful runs.

While there are scripts available online to do this, you can use the queries below as well.

 

select * from msdb.dbo.sysjobs

--> Copy the job_id of your job name

select program_name, * from sys.sysprocesses where spid > 50

The result will be something like:

SQLAgent - TSQL JobStep (Job xx23453AB7EC97864084xxxxxF72B8CC9E : Step 1)

--> Make sure the program_name matches the job_id you are looking for
--> Get the spid from here

--> You can now use the spid in the DMVs below to check for blocking or wait types

select * from sys.dm_exec_requests where session_id = 102
select last_batch, * from sys.sysprocesses where spid = 102

 

Virtual Bytes differs before and after applying the Windows 10 April 2018 Update


Hello, this is the Platform SDK (Windows SDK) support team. After applying the Windows 10 April 2018 Update (1803), the Virtual Bytes value reported for 64-bit processes differs significantly from what earlier versions of the OS returned. This post explains the change.

Symptom

Virtual Bytes represents the current size of the virtual address space a process is using. In environments where the Windows 10 April 2018 Update (1803), released on April 30, 2018, has been applied, the Virtual Bytes of 64-bit processes is now significantly different.

Cause

The new memory manager introduced with the Windows 10 April 2018 Update (1803) changed the amount of virtual memory reserved for metadata. Specifically, 4 GB is reserved up front for 64-bit processes, and 36 KB for 32-bit processes. Virtual Bytes includes not only the virtual memory that is currently committed but also virtual memory that is merely reserved. Because the amount of virtual memory reserved by the new memory manager changed, the Virtual Bytes value differs before and after applying the Windows 10 April 2018 Update (1803).

Workaround

Virtual Bytes, which includes reserved as well as committed virtual memory, is not recommended as an indicator of a 64-bit process's memory usage. Private Bytes, on the other hand, reflects the amount of memory the process has actually allocated, so consider using that value when monitoring a process's memory usage.
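
As a rough illustration (a minimal Win32/C++ sketch, not part of the original post), the value behind the Private Bytes counter for the current process can be read with GetProcessMemoryInfo; the PrivateUsage field of PROCESS_MEMORY_COUNTERS_EX holds the process's private commit charge. Link with psapi.lib.

// Minimal sketch: read the current process's private bytes (commit charge),
// the value recommended above instead of Virtual Bytes.
#include <windows.h>
#include <psapi.h>
#include <cstdio>

int main()
{
    PROCESS_MEMORY_COUNTERS_EX counters = {};
    counters.cb = sizeof(counters);

    if (GetProcessMemoryInfo(GetCurrentProcess(),
                             reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&counters),
                             sizeof(counters)))
    {
        std::printf("Private bytes: %llu\n",
                    static_cast<unsigned long long>(counters.PrivateUsage));
    }
    return 0;
}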

 

Additional information

The .NET Framework Process.VirtualMemorySize64 property returns the Virtual Bytes value, so the same behavior occurs if you monitor a process's virtual memory through that property. As a workaround, use the Process.PrivateMemorySize64 property instead.

DevOps for Data Science On The DevOps Lab Show


 

If you are yet to check out The DevOps Lab then now’s your chance! Launched in December 2017, DevOps Lab is a Channel 9 show hosted by Damian Brady on how to solve real DevOps problems using a range of tools and techniques.

In the latest episode, Cloud Developer Advocate Damian sits down with Microsoft’s Paige Bailey as well as MVP Terry McCann to discuss DevOps for Data Science. These two popular fields are not often combined, but there are some fantastic opportunities for cross-pollination of ideas. Damian, Paige and Terry explore what's important to data scientists and where to start when it comes to a DevOps process, from using source control, testing and refreshing predictive models to operationalizing and evaluating success in production.

Data scientists are rarely developers, and developers are rarely data scientists, but they can work together, using tools and techniques that still allow the experts to do what they do best! To learn how and watch the full episode click here!

The DevOps Lab show also has a collection of other great episodes released this year, covering subjects such as unit testing and databases, data warehousing, Azure cloud services and more! Check them out here.

Important announcement regarding Dynamics 365 Customer Engagement (online) versions


Hello, everyone.

The other day, an article with an important announcement for customers using Dynamics 365 Customer Engagement (formerly Dynamics CRM) online was published on the Microsoft Dynamics blog, so we are sharing it here as well. This is a particularly important change for customers currently on version 8.2.

Title : Modernizing the way we update Dynamics 365
URL : https://community.dynamics.com/b/dynamicsblog-ja-jp/archive/2018/07/11/modernizing-the-way-we-update-dynamics-365

 

■ Key changes

As described in the [All users on one modern version] section of the article above, all customers will need to run the latest version.

Previously, you could run the latest version (currently 9.x), the previous version (8.2), or the version before that (8.1). With this change:
Customers using version 8.1 must, as already notified, update to the latest version between February and July 31.
Customers using version 8.2 must update to the latest version by January 31, 2019.

For how to schedule an update, please see here.

 

■ Affected customers

Customers currently using version 8.2 are affected by this policy change.

Under the previous policy, customers on version 8.2 could skip updating until the next major update, but with this change they must update to the latest version by January 31, 2019.

 

- Dynamics 365 Support, Endo
* This information (including attachments and links) is current as of the date of writing and is subject to change without notice.

System.Diagnostics.Trace Application Logging Log Stream on Azure App Service Function App


Here is what I wanted to do.

Add some code like this to my ASP.NET application.

using System.Diagnostics;
...
System.Diagnostics.Trace.WriteLine("System.Diagnostics.Trace.WriteLine() in the Page_Load method");
Trace.Write("Trace.Write() in the Page_Load method");
System.Diagnostics.Trace.TraceError("System.Diagnostics.Trace.TraceError() in the Page_Load method");
System.Diagnostics.Trace.TraceWarning("System.Diagnostics.Trace.TraceWarning() in the Page_Load method");
System.Diagnostics.Trace
      .TraceInformation("System.Diagnostics.Trace.TraceInformation() in the Page_Load method");

Then I wanted to write those logs out to the Application Logging (File System), Figure 1, Application Logging (Blob), Figure 2 and to the Log stream window in the portal, Figure 3.


Figure 1, how to write System.Diagnostics.Trace logs to application logging (file system)


Figure 2, how to write System.Diagnostics.Trace logs to application logging (blob)


Figure 3, how to write System.Diagnostics.Trace logs to application Log stream

Obviously, you can see that I did it.  But how Benjamin, HOW!?  Well, first enable it, as seen in Figure 4.


Figure 4, how to enable Application Logging to log System.Diagnostics.Trace

Then, the way I got this to work was to add the following to the top of the page which I wanted to log the traces from, Figure 5.

#define TRACE


Figure 5, how to enable Application Logging to log System.Diagnostics.Trace

For other application types you may be required to enable the tracing in a different way.  If you have some examples please add a comment below.  However, the point I learned is that simply adding the System.Diagnostics.Trace() calls was not enough to get the logs to be output.

Use VS Code to call Azure IoT Hub REST APIs


The REST APIs for IoT Hub offer programmatic access to the device, messaging, and job services, as well as the resource provider, in IoT Hub. With the Azure IoT Toolkit extension for Visual Studio Code, you can easily use the IoT Hub REST APIs in VS Code with no extra toolchain needed! Let's see how quick it is to send a device-to-cloud message to Azure IoT Hub!

Prerequisites

Prepare HTTP request

In VS Code, create a file called d2c.http with the cURL request below, and replace {iothub-name} and {device-id}:

curl --request POST \
  --url 'https://{iothub-name}.azure-devices.net/devices/{device-id}/messages/events?api-version=2018-06-30' \
  --header 'authorization: {sas-token}' \
  --data 'Hello IoT Hub!'

Generate SAS token

Right-click your device and select Generate SAS Token for Device. The SAS token is created and copied to the clipboard; then replace {sas-token} in the d2c.http file with the SAS token.

Send HTTP request

  1. Right-click your device and select Start Monitoring D2C Message to monitor the device-to-cloud messages
  2. In the d2c.http file, click the 'Send Request' link to call the Azure IoT Hub REST API and send a d2c message
  3. In the response area on the right, you can see the HTTP response, which is HTTP 204, meaning the message was sent successfully
  4. In the Output Channel of Azure IoT Toolkit, you can see that IoT Hub received the 'Hello IoT Hub!' message

Without any extra toolchain, you can do everything needed to call the Azure IoT Hub REST APIs in Visual Studio Code. If you want to learn more about how the Azure IoT Toolkit extension helps you develop with Azure IoT Hub, check out our Wiki page to see the full features and tutorials.

Useful Resources:


Sa11ytaire on Xbox: Let the experiment begin!


This post describes considerations when porting an accessible solitaire game to the Xbox, and focuses on the experience when using the Narrator screen reader or a switch device.

Apology up-front: When I uploaded this post to the blog site, the images did not get uploaded with the alt text that I'd set on them. So any images are followed by a title.

 

Introduction

Earlier this year, at The Sa11ytaire Experiment: Part 1 – Setting the Scene I described how a colleague and I built a Microsoft Store app which explored a variety of input and output mechanisms that could be used to play a solitaire game. I felt this was a fascinating topic, involving input mechanisms such as speech, eye control, switch device, keyboard access keys, and also considering the most effective Narrator experience we could build. While the app was far from being complete, (for example, it didn't provide a usable experience on small screens or when different themes were active,) it had reached a point where we could start tuning the app based on feedback.

Since we built the app, Microsoft announced the Xbox Adaptive Controller, and so it was inevitable that I wasn't going to be able to relax until I'd played Sa11ytaire with an Adaptive Controller. This post describes the steps I took to get Sa11ytaire running on an Xbox. I've never built an Xbox game before, and so I mention a few places where there were hiccups along the way, (mostly caused by me not paying much attention to all the helpful resources available at docs.microsoft.com).

Please do send feedback on the Sa11ytaire Xbox experience. I know there are many things that could be improved with the app, (for example the visuals are still rudimentary,) but I'm hoping the current switch device and Narrator experiences provide the foundation for a usable experience.

The Sa11ytaire Xbox game is available at Sa11ytaire at the Microsoft Store.

 

Figure 1: The Sa11ytaire game being played on the Xbox, with a switch device plugged into an Adaptive Controller. A ten of diamonds is being moved onto a jack of clubs.

 

A few other thoughts

Before we get started, a few additional comments.

Is the game really usable?

There are a fair number of known bugs and limitations in the game, but I do believe it's worth sharing it out as it is, in order to help us focus on the top priority changes first. The most significant constraint with the switch device experience, is that on Xbox, the switch scan speed cannot be adjusted through switch input. So the speed needs to be set using something other than the switch device.

What about Magnifier on the Xbox?

As far as I know, Magnifier works with Sa11ytaire just as it does with any other app. I couldn't find a way to have Magnifier follow keyboard focus in the app as the Magnifier feature can on the desktop, so the view needs to be manually updated to bring the cards of interest into view.

What about colors used in the app?

Color usage is a critical part of building an app. Providing sufficient contrast, and respecting customers' choice of light-on-dark or dark-on-light, is such a fundamental concept. That said, I didn't focus on that for this porting of the game to Xbox, simply due to time constraints. The visuals today provide a means to learn about the switch device and Narrator experiences, and we can prioritize updates to the app based on your feedback.

What about my Windows Phone?

Since we'd built a UWP XAML app, it should work on any Windows 10 device, right? I was porting the Windows desktop app to the Xbox, and who knows, maybe the Sa11ytaire experiment will move to HoloLens at some point. But what about my Windows Phone? When I'm sitting on the 255 bus, why wouldn't I want to play the game and consider the next steps for improving it? Well, it looks like I'm out of luck. I tried deploying the app to my Microsoft Lumia 950, and Visual Studio told me the phone's OS was too old. Perhaps I could address that by changing the Sa11ytaire app to build for the older OS, but there's no way I'm doing that. One of the key points of the Sa11ytaire app is to explore how to get the most out of the great new accessibility-related features available in whatever the latest version of Windows 10 is. And as far as I can tell, there is no later version of the OS available for my phone. So I think it might be time for me and my Windows Phone to part ways. That's too bad really, I'll miss it.

That last point raises the question: Should the app be turned into a Xamarin app, enabling it to run on multiple mobile platforms? That is a tempting idea, but I'm not sure if a Xamarin app running on Windows 10 can leverage all the latest accessibility features of Windows. The Sa11ytaire app makes some very deliberate use of the UIA Notification event through the UWP XAML platform, as part of trying to deliver the most efficient Narrator experience possible. If I can't do that with a Xamarin app today, then I'll continue with the regular UWP XAML app.

 

Porting the desktop app to the Xbox

Typically, I was going to do the least amount of preparation that I could while porting the game to Xbox. I tell myself that that's because I just don't have time to read all the material out there which explains what I should be doing. But I have been called a slacker in the past, so perhaps that's the real problem. Either way, I wondered if I could simply publish the existing desktop game as-is for the Xbox, and maybe it would "just work".

So the first step was publishing for Xbox. At the Services section for the Sa11ytaire app at the Dev Center, I selected "Xbox Live" and viewed all the things I could do there. The only thing I actually did there was invoke the Settings button, and then say that the app supports both Desktop and Xbox One. Back in the main Properties for the app, the app was already declared to be a "Card + board" game, and since Xbox seems to require that the app is a game, that was all fine. I set the "Game settings" to be the most basic they could be. That is, single player for PC and Xbox. (I left the Broadcasting setting checked as that seemed like that might be handy if in the future someone wants to provide feedback through a broadcast.)

Having done that, an error string appeared on the submission page, saying "Document: accesspolicies.xml Error: AccessPoliciesRequired The access policies document is not present in the config set. This document is required for all publish operations". I searched high and low for information about that, and in the end got help from someone in Xbox support. It turned out that all I had to do was press the "Test" button on the Xbox Live Services page at the Dev Center, and the error went away. I suspect others might not hit this error as I did, as I think my dev account might have been in an unusual state. (The account was created way back, perhaps when the process for setting up Xbox accounts was still evolving.) But if you ever do hit that mysterious error, try invoking the "Test" button. (You would have done that already of course, if you were actually testing your app on an Xbox before publishing it…)

At that point I published the app, and to my delight soon after found I could download the app on my Xbox. It technically worked, but there were a couple of things I'd need to do to make it usable. The first related to preventing some UI clipping, and so I needed to support a smaller screen resolution than the app currently supported. Addressing that was routine UWP XAML UI work, including making sure I reduced minimum widths for some elements. And while the results still led to some text clipping, it would do for now. The second point related to input via the Xbox controller. By default the controller was moving the pointer on the screen, and instead I needed it to move focus between the cards in the app. In order to address that, I added the following to the app, after which the controller's D-pad moved me left/right/up/down just great.

RequiresPointerMode = ApplicationRequiresPointerMode.WhenRequested;

(For more details, visit ApplicationRequiresPointerMode.)

 

While making the above changes, I did re-publish the app a couple of times. But then after one attempt, I was told the app submission was rejected due to:

"Create at least one active user and allow the user to sign the user in to Xbox. Display the user's Xbox gamertag as the primary display and profile name. Please add this behavior and resubmit your game."

 

It was becoming clear I couldn't keep publishing the app and hoping for the best. I needed to develop and test using my own Xbox, and publish when I was reasonably confident that the app would actually work. So I took a look at a variety of resources, including:

 

Setting up my Xbox in dev mode and connecting to it through the Dev Portal from a browser on my laptop was surprisingly straightforward once I'd got used to configuring the Xbox's Remote Access Settings, as described at Introduction to Xbox One tools. (I found that page after first encountering the "This site is not secure" thing described there, and I have to say, I did find that a little disconcerting at the time.)

Having effectively turned my Xbox into a dev machine, the next step was to debug Sa11ytaire running on it. I learned about deploying the app from Visual Studio to the Xbox, as described at Set up your UWP on Xbox development environment, and successfully entered the VS pin when requested. Following that, I got stuck. All my attempts to actually deploy the app ended with me being told "Failed to launch remote debugger with the following error: 'Command failed: 0x800705b4'". I then spent 90 minutes scouring the web for tips on how to deal with this, but was not successful. (I must admit that by the end of that, I felt my original attempts to publish the app without testing it, weren't perhaps that outrageous after all.) Since that day when I hit the 0x800705b4 problem, it was suggested to me that given that this is a ERROR_TIMEOUT, I should consider moving to a wired, rather than wireless connection. And in fact after reviewing Create your first app again, it does say "A wired network connection is recommended". I've yet to try that, but it's next on my list of things to do…

With my attempt to debug the app on hold, I instead installed the app from my laptop through the Dev Portal, and ran it on the Xbox. Not being able to debug made diagnosing problems rather inefficient, but I could still make a ton of progress anyway. The first interesting thing I learned was that the app failed to start. This was because I'd not set up the sandboxing as required. But once I'd set up the Xbox's sandbox to match that of the Sa11ytaire app I was installing, the app started fine.

The next step was to add the Xbox Live sign-in code for app start-up, and so copied in the C# code sample at Authentication for UWP projects. Initially, this seemed to work great. I ran the app, worked through various sign-in-related UI on app start-up, and displayed the player tag in the UI. At this point, I thought I was good to go. I then found that after playing the game for a while, it would crash at apparently random moments. As I recall, I was encountering a NullReferenceException beneath Microsoft.Xbox.Services.dll, with none of my own code shown in the callstack. I have no clue what was going on there, but after some experimenting, I found that if I removed the creation of the unused XboxLiveContext object, the app stopped crashing. As such, for now, I'll not be creating an XboxLiveContext object.

And so I'd reached the point where I had an Xbox app that could be published to the Microsoft Store, and could charge ahead on considering the switch control and Narrator experiences.

 

Switch device control

The switch device control of the Sa11ytaire app on the desktop, worked through Space key input. I tested this with a switch device and an adaptor, such that in response to a switch device press, the app would receive the Space key input. (Limiting the type of input to the Space key on the desktop was only due to time constraints.) In order to test this out on the Xbox, I hooked up the Y button to toggle the state of switch control, and pressed the Space key on my controller's chatpad. As a result, the switch control scan highlight cycled through the UI, and I could play the game with the controller.

But supporting only a chatpad's Space key input is not sufficient for the app on Xbox, so I hooked up the left and right bumpers to control the app when switch control is active. I did this by adding a Window.Current.CoreWindow.KeyDown handler, and responding to presses of VirtualKey.GamepadLeftShoulder/GamepadRightShoulder. (The original Space key action was triggered in response to the page's overridden OnPreviewKeyDown() being called.) All in all, that seemed pretty straightforward.

One additional modification that was required related to interaction with standard buttons in a ContentDialog. (For example, the "Are you sure you want to restart the game?" dialog.) When switch device control is enabled and one of those dialogs appears, the app simply moves keyboard focus between the buttons, and whenever the Space key is pressed, the focused button gets invoked. So the fact that this worked on the desktop was a side-effect of having the app react to a Space key press when switch control is active. In order to make this work on Xbox, I added the following code, to be run when a bumper is pressed and a ContentDialog is up.

 

var buttonWithFocus = FocusManager.GetFocusedElement() as Button;
if (buttonWithFocus != null)
{
    ButtonAutomationPeer peer =
        FrameworkElementAutomationPeer.FromElement(buttonWithFocus) as ButtonAutomationPeer;
    if (peer != null)
    {
        peer.Invoke();
    }
}

 

I don't actually remember building a UWP XAML app that invokes one of its own buttons through UIA, but it seemed to work ok. (Note that I've not yet had a chance to update the app such that its appbar UI is usable with an Xbox controller.)

 

Now I could get to the bit that I was really excited about: Playing the game with the Xbox Adaptive Controller. Playing the game using a bumper or any other specific button on the controller might be fine for some players, but I want to make the game playable for as many players as I can. I've pre-ordered an Adaptive Controller, but it won't arrive for a while yet. So a colleague kindly lent me a device, and I tried it out. The Adaptive Controller supports customization such that its big buttons can be configured to effectively become other buttons, and I did consider doing that. But instead, I just paired the Adaptive Controller with my Xbox, plugged a switch device into the back of it, and hey-presto, I could play Sa11ytaire on my Xbox with a switch device. That was just fantastic!

I can't wait for my own Adaptive Controller to arrive now, so I can get familiar with all that it can do.

 

Figure 2: The Sa11ytaire app being played with the right bumper of an Xbox controller. A ten of diamonds and nine of clubs are being moved onto a jack of spades.

 

Narrator

As far as Narrator goes, I pretty much knew what the experience was going to be like with Sa11ytaire on the Xbox. After all, the same Narrator, UIA and UWP XAML is running on both the Windows desktop and Xbox. We'd built Sa11ytaire on the desktop to have Narrator make specific announcements which we felt would be valuable to the player, and those announcements would also be made on the Xbox. That said, we did adjust the experience, simply due to further consideration while we played the game.

For example, I added an announcement to confirm that the game had been restarted by the player. When I originally added that I felt that the announcement was too important to be truncated by another announcement, and so when I raised the related notification event, I supplied ImportantAll. That turned out to be the wrong thing to do. Say a player restarts the game many times in quick succession. With my change, an entire "Game restarted" announcement will be made for each restart of the game. The game and the player may well be ready for further play long before the full set of announcements have completed. So that's a tedious experience. As it happens, Tim caught this error and fixed it by replacing ImportantAll with ImportantMostRecent. (Thanks Tim!)

Another change Tim made relates to the announcement on whether a move is available between dealt card piles. Previously if no move is available there would be no announcement, and so the player might wonder if they really issued the command to learn about available moves or not. So now, if no moves are available, Narrator announces "No move is available".

By the way, I also removed some of the UIA Help properties from a few elements. When the game was originally built, I think I went over the top with the Help properties. I was effectively stuffing the instructions on how to play the game into the Help properties, and I don't think that's appropriate. It led to far too much information being announced simply when trying to play the game.

 

I'd say the most interesting aspect of using Narrator when porting Sa11ytaire to the Xbox, was how the controller should be used as a form of input at the game. With the D-pad working great, and the A button interacting as required with the focused card by default, the game was usable. I could move to one card, press A to select it, move to another card, and press A to move the first card over to it.

But this is where the classic question arises: Is this the most efficient experience that can be delivered?

 

The desktop app supports F6 to move between the three main groups in the app. So I updated the app such that a press of the left/right bumpers, (when switch device control is not enabled,) would move focus to the first control in the previous/next group respectively. What's more, given that the access keys on the desktop can provide for some players an extremely efficient means of playing the game, I said that once a bumper is pressed, the keys on the chatpad keyboard would behave in the same way as the access keys on the desktop. For example, press N to turn over a new card, or press U then C to move the upturned card to the Clubs pile, or press 4 then 6 to move a card from the fourth dealt card pile to the sixth dealt card pile.

On the desktop, function keys are used for a variety of actions, and so it was fun to consider how a controller might be used to access that same functionality. In the end, I implemented access to the function key functionality using a mix of buttons on the controller.

By the end of all this, this is how the Sa11ytaire app reacts to input at the Xbox controller:

 

  • D-Pad: Moves keyboard focus left/right/up/down through the app.

 

  • A Button:
    • Invoke the Next Card button.
    • Check the Upturned Card button or Target Card Pile buttons.
    • Select an item in the Dealt Card Pile lists.

     

  • LeftThumbstick Button: Replicates A button action, in order for the game to be playable with controls only on the left side of the controller.

 

  • Left/Right Bumpers:
    • If switch device control is not enabled, moves keyboard focus to the first card in the three main areas in the app. (Equivalent to Shift+F6 or F6 on the desktop.)
    • If switch device control is enabled, then triggers the switch device action in the app. (Equivalent to Space key.)

 

  • RightThumbstick Left/Up/Right: Have Narrator announce the state of the remaining cards, target card piles, or dealt card piles respectively. (Equivalent to F2/3/4.)

 

  • RightThumbstick Down: Have Narrator announce the hint on whether a card can be moved between the dealt card pile lists. (Equivalent to F7.)

 

  • RightThumbstick Button: Restart game. (Equivalent to F5.)

 

  • Y Button: Toggle the state of switch device control. (Equivalent to F10.)

 

  • X Button: Deselect and uncheck everything. (Equivalent to Escape.)

 

 

Figure 3: The Sa11ytaire app running with Narrator on an Xbox. The four of clubs is being moved onto the five of diamonds.

 

Summary

I'm really excited to have been able to port the Sa11ytaire experiment to the Xbox. While it was a fun learning experience for me to have published an Xbox game to the Microsoft Store for the first time, what I'm so thrilled about is to now have the opportunity to learn from game players as to what the app really needs to do if it's to provide an efficient experience in practice for players using Narrator or a switch device. And if some of the feedback we get is "You're nowhere near that", then that's exactly what we need to know. And who knows, maybe for some players, we're already pretty close today.

So let us know, and together we can all be a part of delivering that great experience on Xbox.

Guy

DevOps Stories – Interview with John-Daniel Trask of Raygun


App Dev Manager Dave Harrison talks with John-Daniel Trask, co-founder and CEO of Raygun, about the adoption of DevOps.


The following content is shared from an interview with John-Daniel Trask, co-founder and CEO of Raygun, a New Zealand-based company that specializes in error, crash, and performance monitoring. John-Daniel (or JD) went from repairing laptops out of college, to working as a developer, to finally starting several very successful businesses, including what became Mindscape and its very successful monitoring product, Raygun.

We covered a lot of ground here, and we think you’ll love the following thoughts:

  • Is a DevOps team really such a bad thing?
  • Why forcing your devs to go to an event booth might be a very good thing
  • When is a “requirement” not really a requirement?
  • Starting from scratch, with nothing - where would you start?
  • What’s the golden ticket to get funding and support for your requests and projects?

And last but not least – “it’s not the big that eat the small, it’s the fast that eat the slow!”

Note - these and other interviews and case studies will form the backbone of the upcoming book “Achieving DevOps” from Apress, due out in late 2018. Please contact me if you’d like an advance copy!

Is DevOps culture first? Well, I definitely run into a lot of zealots who swing one side or another. Some people pound the table and say that DevOps is nothing about tools, that it’s all culture and fluffy stuff. These are usually the same people who think a DevOps team is an absolute abomination. Others say it’s all about automation and tooling.

Personally, I'm not black and white on it. I don't think you can go and buy DevOps in a box; I also don't think that “as long as we share the same psychology, we've solved DevOps.” Let’s take the whole idea of a DevOps team being an antipattern for example. For us it’s not that simple – it’s very easy, at a 16-person startup, to say that a DevOps team is a horrible idea. Well, of COURSE you’d think that; for you, cross-team communication is as easy as turning around in your chair! But let’s take a larger enterprise, 50,000 people or so, with hundreds of engineering teams. You can’t just hand down “we’re doing DevOps” as an edict and it’s solved. In that case, I have seen a DevOps team be very successful as a template, something that helps spread the good word by example.

What’s a common blind spot you see with many programmers? It’s really quite shocking how little empathy there is by most software engineers for their actual end users. You would think the stereotypical heads-down programmer would be a dinosaur, last of a dying breed, but it’s still a very entrenched mindset. I sometimes joke that for most software engineers, you can measure their entire world as being the distance from the back of their head to the front of their monitor. There’s a lack of awareness and even care about things like software breaking for your users, or a slow loading feed. No, what we care about is – how beautiful is this service that I’ve written, look how cool this algorithm is that I wrote.

We sometimes forget that it all comes down to human beings. If you don’t think about that first and foremost, you’re really starting off on the wrong leg.

One of the things I like about Amazon is the mechanisms they have to put their people closer to the customer experience. We try to drive that at Raygun too. We often have to drag developers to events where we have a booth. Well, once they’re there, the most amazing thing happens – we have a handful of customers come by and they start sharing about how amazing they think the product is. So you start to see them puff out their chests a little – life is good! And the customers start sharing a few things they’d like to see – and you see the engineers start nodding their heads and thinking a little. We find those engineers come back with a completely different way of solving problems, where they’re thinking holistically about the product, about the long term impact of the changes they’re making. Unfortunately, the default behavior is still to avoid that kind of engagement, it’s still way out of our comfort zone.

Using Personas to Weed Out Red Herrings: I don’t know if we talk enough in our industry about weeding out bad feedback. We often get requests from our customers to do things like dropping a data grid with RegEx on a page. That’s the kind of request that comes from the nerdiest of the nerds – and if we were to take that seriously, think of the opportunity cost and what it would do to our own UX!

We weed out outlier requests like this by using personas. For our application, we think in terms of either a CEO, a tech lead, or an operator. Each has their own persona and backstory, and we’ve thought out their story end to end and how they want to work with our software. So for the CXO level, the VP’s, the directors – these are people who understand their whole business hinges on the quality of their software. They need to keep this top of mind at the very top levels of decision making. So for this person, there are graphs and charts showing this strategic level fault and UX information, all ready to drop into reports to the executive board. Then there’s the mid tier – these are your tech leads, the Director of Engineering – they need to know both high level strategic 30K foot information, and a summary of key issues. The cutting edge though is that third tier, your developer or operator. This person needs to have information when something goes bump in the night. So for them, you have stack traces, profiling raw data, user request waterfalls. Without that information, troubleshooting becomes totally a stab in the dark.

Lots of companies use personas, I know. They’re really critical to filter out noise and focus on a clear story that will thrill your true user base.

How can error and crash reporting make for a better performing business? And yet, most of the DevOps literature and thinking I see focuses entirely on build pipelines, platform automation, the deployment story, and that’s the end of it. Monitoring and checking your application’s real-world performance and correcting faults usually just gets a token mention, very late in the game. But after you deploy, the story is just beginning!

I hate to say this – but I think we’re still way behind the times when it comes to having true empathy with our end users. It’s surprising how entrenched that mindset of monitoring being an afterthought or a bolt-on can be. Sometimes we’ll meet with customers and they’ll say that they just aren’t using any kind of monitoring, that it’s not useful for them. And we show them that they’re having almost 200,000 errors a day – impacting 25,000 users each day with a bad experience. It’s always a much, much larger number than they were expecting – by a factor of 10 sometimes. Yet somehow, they’ve decided that this isn’t something they should care about. A lot of these companies have great ideas that their customers love – but because the app crashes nonstop, or is flaky, it strangles them. You almost get the thinking that a lot of people would really rather not know how many problems there really are with what they’re building.

Yet time and again, we see companies that really care about their customers excel. Let’s say I take you back in time to 2008, and I give you $10,000 to invest in any company you want. Are you going to put that into Microsoft, Apple, Google, or Dominos Pizza? Well guess what – Dominos has kicked the crap out of all those big tech companies with their market cap growth rate. The answer is in their DNA – they devote all their attention into ensuring their customers have a great experience. Their online ordering experience is second to none. And that all comes from them being customer obsessive, paying attention to finding where that experience is subpar and fixing it. It’s never a coincidence that customer centric companies consistently outperform and dominate.


Source: https://www.theatlas.com/charts/S18QCJyhe

What’s forced us as an industry to change and driven a better user experience is Google, believe it or not. They started publishing a lot of research and data around application errors and performance, and prioritizing well performing sites. This democratized things that data scientists were just starting to figure out themselves. And it seemed like overnight, a lot of people cared very much that their website not be dog slow – because otherwise, it wouldn’t be on the first page results of a web search, and their sales would tank. But folks often didn’t care about performance or the end user experience – until Google forced us to.

What would you say to the company that is starting from ground zero when it comes to DevOps? I’m picturing here a shop where they take ZIP files and remote desktop onto VM’s and copy-paste their deployments. If that’s the case – I like to talk about what are the small things you could put into place that would dramatically improve the quality of life on the team. These are big impact, low cost type improvements. So where would I start?

  • First would come automating the deployments. Just in reliability alone, that’s a huge win. Suddenly I have real peace of mind. I can roll out releases and roll them back with a single button push, and it’s totally repeatable as a process. If I’m an oncall engineer, being able to roll out a patch through a deployment process that runs automatically at 3 a.m. is a world of difference from manually pulling assets.
  • The second thing I would do is set up something like StatsD. You don’t need to allocate a person to spend several days – it’s a Friday afternoon kind of thing. When you start tracking something – anything! - and put it up on the wall that’s when people start to get religion. We saw this ourselves with our product – once we put up some monitors with some of the things coming from StatsD, like the number of times users were logging in and login failures. And it was like watching an ultrasound monitor of your child. People started gathering around, big smiles on their faces – things were happening, and they felt this connection between what they were doing and their baby, out there in the big bad old world. Right away some of that empathy gap started to close.
  • Third would come crash reporting. There’s just no excuse not to put this into place – it takes like ten minutes, and it cuts out all that waste and thrash in troubleshooting and fuels an improvement culture.

How do we communicate in the language of business? What I wish more engineering teams understood is how to communicate in the language of business. I’m not asking developers to get an MBA in their off hours – but please TRY to frame things in terms of dollars, economic impact, or cost to the customer. Instead we say, this shiny new thing looks like it could be helpful.

There’s a reason why we often have to beg to get our priorities on the table from the business. We haven’t earned the trust yet to get “a seat at the table”, plain and simple. We tend to be very maxed out, overwhelmed, and we’re pretty cavalier with our estimates around development. This reflects technology – which is fast moving, there’s so much to learn, and it’s not in a stable state. But when engineers hem and haw about their estimates, or argue for prioritizing pet projects that are solely tech-driven, it makes us look unreliable as a partner. And we haven’t learned yet to use facts and tie our decisions into saving money or getting an advantage in the market.

Always keep this in mind – any business person can make the leap to dollars. But if you’re making an argument and you are talking about code – that’s a bridge too far. It’s too much to expect them to make that jump from code to customer to dollars. So if you tell me you need React 16, that won’t sell. But if you say 10% of your customers will have a better experience because of this new feature – any business person can look at that and make the connection, that could be 5,000 customers that are now going to have a better customer experience. You don’t have to be Bill Gates to figure out that’s a good move!

Let’s get down to brass tacks – how do I make this monitoring data actionable? We wouldn’t think about putting planes in the air without a black box – some way of finding out after something goes wrong what happened, and why. That’s what crash monitoring is, and it’s incredibly actionable. You know the health of your deployment cycle, you can respond faster when changes are introduced that degrade that customer experience.

Let’s say you are seeing 100,000 errors a month. Once you group them by root cause, that overwhelming blizzard of problems gets cut down to size, which is more common than you’d think. You may have 1,000 distinct errors, but only 10 actual, honest-to-goodness bugs. Then you break it down by user, and that’s when things really settle out. You might find that one user is using a crappy browser that’s blocking half your scripts – that isn’t an issue really. But then there’s that one error that’s happened only 500 times – but it’s hitting 250 of your customers. That’s a different story! So you’re shifting your conversation already from how many errors you’re seeing to the actual number of customers you’re impacting – that’s a more critical number, and one that everyone from your CEO down understands. And it’s actionable. You can – and you should – take those top 2 or 3 bugs and drop them right into your dev queue for the next sprint.

This isn’t rocket science, and it isn’t hard. Reducing technical debt and improving speed is just a matter of listening to what your own application is telling you. By nibbling away on the stuff that impacts your customers the most, you end up with a hyper reliable system and a fantastic experience, the kind that can change the entire game. One company we worked with started to just take the top bug or two off their list every sprint and it was dramatic – in 8 weeks, they reduced the number of impacted customers by 96%.

Think about that – a 96% reduction in two months. Real user monitoring, APM, error and crash reporting – this stuff isn’t rocket science. But think about how powerful a motivator those kinds of gains are for behavioral change in your company. Data like that is the golden ticket you need to get support from the very top levels of your company.

One of my mentors was Rod Drury, who founded Xero right here in Wellington, New Zealand. He says all the time: “It’s not the big that eat the small, it's the fast that eat the slow”. That’s what DevOps is about - making your engineering team as reliably fast as possible. To get fast, you have to have a viable monitoring system that you pay close attention to. Monitoring is as close as you can get in this field to scratching your own itch.

What about building versus buying a monitoring system? I’ll admit that I’m biased on the subject, running a SAAS-based monitoring business. But I do find it head-scratching when I talk to people that are trying to build their own. I ask them, “how many people are you putting on this?” And they tell me – oh, 4 people, say a six month project. And then I say, “what are their names?” They look at me funny, and ask why – I tell them, “I’ve had 40 people working on this for 5 years – now I can fire them and hire your people!” Back in 2005, it made total sense to roll your own, since so much of the stuff we use nowadays didn’t exist. But those times have changed. Even self-hosting has its issues. Let’s say you decide to go down the ELK stack route. Well, that means running a fairly large elastic instance, which is not a sit-and-forget type system. It’s a pain in the ass to manage, and it’s not a trivial effort.

To me it also is answering the wrong question. To me, there’s one question that should be the foundation for any decision an engineering team makes – does this create value for our customer? Is our customer magically better off because we made the decision to build our own? I think – for most companies – probably building a robust monitoring system has little or nothing to do with answering that question. It ends up being a distraction, and they spend far more to get less viable information.

Etsy says “if it moves, track it.” Do you agree – should customers track everything? I’m pragmatic on this – if you’re small, tracking everything makes sense. Where it goes wrong is where the sheer amount of data clogs our decision making.

So then you start to think about sampling data. However, what I often see is someone sitting in a chair, looking off into the distance and says – “yeah, I think about 10% of the data would give us enough”. Rarely do we see people breaking out Excel and talking about what would be statistically significant - people tend to make gut calls. If you’re tracking everything you possibly could with real user monitoring for example, it can be a real thicket – a nightmare, there’s so many metric streams. You trip over your own shoelaces when something goes wrong – there’s just so much detail, you can’t find that needle in the haystack quickly. So you need both aggregate and raw data – to see high level aggregates and spot trends, but then be able to drill in and find out why something happened at the subatomic particle level. We still see too many tools out there that offer that great strategic view and it’s a dead end – you know something happened, but you can’t find out exactly what’s wrong.

Any closing thoughts? I never get tired of trying to tie everything back to the customer, to the end user experience. It’s so imperative to everything you're doing. There is literally no software written today for any reason other than providing value to humans. Even machine to machine, IOT systems are still supporting a human being.

Human beings are the center of the universe. But you wouldn’t know that by the way we’re treated by most of the software we write. Great engineers and great executives grasp that. They know that to humans, the interface is the system – everything else simply does not matter in the end. So they never let anything get in the way of improving that human, end user experience.

References:


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Announcing Template IntelliSense


[Original post] https://blogs.msdn.microsoft.com/vcblog/2018/06/26/template-intellisense/

[Original author] Nick Uhlenhuth

[Original publication date] 6/26/2018

C++ developers who use function templates and class templates can now take full advantage of IntelliSense within their template bodies. In Visual Studio 2017 15.8 Preview 3, when your caret is inside a template, a new UI element called the Template Bar appears next to the template definition. The Template Bar allows you to provide sample template arguments for IntelliSense.

For example, let's look at the function template is_partitioned_until from the Boost library's algorithm.hpp (which I've modified slightly). We can use the Template Bar to give IntelliSense a sample InputIterator type and a sample UnaryPredicate type.

  • Click the <T> icon to expand/collapse the Template Bar.
  • Click the pencil icon or double-click the Template Bar to open the edit window.

Notice that we were able to use decltype of a predicate named myPredicate for the UnaryPredicate. With this information, we have the full power of IntelliSense while editing the template body: all the appropriate squiggles, Quick Info, parameter help, and so on.
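
To make this concrete, here is a simplified sketch (not the actual Boost source, just a function template in the same spirit) of the kind of code where the Template Bar helps. Inside the body, IntelliSense can only offer accurate completions, squiggles, and Quick Info once you give the Template Bar sample arguments, for example InputIterator = std::vector<int>::iterator and UnaryPredicate = decltype(myPredicate).

// Simplified sketch of an is_partitioned_until-style function template.
// Returns the first element that breaks the partition (all elements satisfying
// pred must come before all elements that do not), or last if none does.
template <typename InputIterator, typename UnaryPredicate>
InputIterator is_partitioned_until(InputIterator first, InputIterator last,
                                   UnaryPredicate pred)
{
    // Skip the leading run of elements that satisfy the predicate.
    while (first != last && pred(*first)) {
        ++first;
    }
    // After that run, no element may satisfy the predicate;
    // the first one that does violates the partition.
    for (; first != last; ++first) {
        if (pred(*first)) {
            return first;
        }
    }
    return last;
}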

We consider the Template Bar information to be specific to each user, so it is stored in the .vs folder rather than being shared on commit.

What's next?

Download the latest Visual Studio 2017 Preview and try it with your projects. To disable/enable the feature, go to Tools > Options > C/C++ > Advanced > IntelliSense > Enable Template IntelliSense.

We will continue to improve this feature in future releases. We already plan to support nested templates and to handle edits made outside of Visual Studio.

As with all new features, your feedback is critical in guiding our development. You can send me feedback on Twitter @nickuhlenhuth, or reach the Visual Studio C++ team at @visualc or visualcpp@microsoft.com.

PowerShell Core now available as a Snap package


The goal of PowerShell Core is to be the ubiquitous language for managing your assets in the hybrid cloud. That's why we've worked to make it available on as many operating systems, architectures, and flavors of Windows, Linux, and macOS as possible.

Today, we're happy to announce an addition to our support matrix: PowerShell Core is now available as a Snap package.

What's a Snap package?

Snap packages are containerized applications that can be installed on many Linux distributions.

What does this do for me?

Snap packages have a number of benefits over traditional Linux software packages (e.g. DEB or RPM):

  • Snap packages carry all of their own dependencies, so you don't need to worry about the specific versions of shared libraries installed on your machine
  • Snap packages can be installed without giving the publisher root access to the host
  • Snap packages are "safe to run" as they don't interact with other applications or system files without your permission
  • Updates to Snaps happen automatically, and include the delta of changes between updates

How do I get it?

First, you need to make sure you've installed snapd.

Then, just run:

snap install powershell --classic

Now you've got PowerShell Core installed as a Snap! Simply start pwsh from your favorite terminal, and you're in!

Interested in our latest preview bits?

If you live on the bleeding edge and want to grab the latest PowerShell preview, just install powershell-preview instead of powershell:

snap install powershell-preview --classic

Now you can start PowerShell Core's latest preview as a Snap by launching pwsh-preview from your terminal.

What about your other Linux packages?

We will continue to support our "traditional" standalone Linux packages that ship on https://packages.microsoft.com/, and we have no plans to discontinue that support.

However, we highly encourage you to check out the Snap package as a way to simplify your updates and reduce the permission set required for installation.

Happy Snapping!

Joey Aiello
PM, PowerShell

How can I get the actual window procedure address and not a thunk?



We saw some time ago that the Get­Window­Long­Ptr function returns a magic cookie if it needs to thunk the message. The Call­Window­Proc function understands this magic cookie and calls the original window procedure after converting the message to the appropriate character set. But what if you want to get the actual window procedure and not a thunk? (For example, because you're writing some debugging or diagnostic code, and you want to log the actual window procedure address.)



The system returns a thunk if you call Get­Window­Long­PtrA but the window procedure expects Unicode messages, or if you call Get­Window­Long­PtrW but the window procedure expects ANSI messages. So you can avoid the character-set thunk by checking the character set of the top-level window procedure with Is­Window­Unicode. If it reports that the top-level window procedure is Unicode, then use Get­Window­Long­PtrW; otherwise, use Get­Window­Long­PtrA.
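Putting that together, here is a minimal sketch; the helper name GetActualWindowProc is mine, not a system API.

#include <windows.h>

// Returns the unthunked window procedure, suitable for diagnostic logging.
WNDPROC GetActualWindowProc(HWND hwnd)
{
    // Ask in the window's own character set, so the system has no
    // reason to hand back a thunk cookie.
    if (IsWindowUnicode(hwnd)) {
        return (WNDPROC)GetWindowLongPtrW(hwnd, GWLP_WNDPROC);
    }
    return (WNDPROC)GetWindowLongPtrA(hwnd, GWLP_WNDPROC);
}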



Unrelated bonus chatter: This blog has been running for 15 years now. Sorry I didn't celebrate with some super-fascinating topic.

Friday Five: MVPs with Fresh Insights on .NET Core and More!


ASP.NET Core 2: Architecture & Design Pattern Ideology

Asma Khalid is a Technical Evangelist, Technical Writer, and Fanatic Explorer. She enjoys doodling with technologies, writing stories, and sharing her knowledge with the community. Her core domain is Software Engineering, but she's also experienced in Product Management, Product Monitoring, Product Implementation, Product Execution, and Product Coordination. She is the first female from Pakistan to receive Microsoft Most Valuable Professional (MVP) recognition and the first female from Pakistan to receive the C# Corner online developer community Most Valuable Professional (MVP) recognition. She has 6+ years of experience as an IT professional, freelancer, and entrepreneur. She is currently working on her entrepreneurial venture, AsmaKart. Follow her on Twitter @asmak.

.NET Core Microservices – DShop

Piotr Gankiewicz is a Microsoft MVP, a Bottega trainer, software engineer and architect, and co-founder of the Noordwind teal organization. He's also a consultant, open source contributor, blogger, and advocate of DDD, CQRS, DevOps, and distributed systems.

Piotr belongs to the Polish .NET community and is on a mission to deliver the best free and open software and programming courses. In his post, he discusses creating a distributed shop (DShop) using .NET Core, microservices, Docker, and other tools. Follow him on Twitter @spetzu.

Asp.Net Core 2.0 Web Api Unit Testing

Gora Leye is a Solutions Architect, Technical Expert, and Developer based in Paris. He works predominantly in Microsoft stacks: Dotnet, Dotnet Core, Azure, Azure Active Directory/Graph, VSTS, Docker, Kubernetes, and software quality. Gora has a mastery of technical tests (unit tests, integration tests, acceptance tests, and user interface tests). Follow him on Twitter @logcorner.

ASP.net Core + Angular Photo Booth App: Do it Yourself

David Pine is a Technical Evangelist and Microsoft MVP working at Centare in Milwaukee, Wisconsin. David's passion for learning has led to his desire to give back to the developer community. David is a technical blogger whose posts have been featured on asp.net, msdn webdev, and msdn dotnet. David is active in the developer community, speaking at technical conferences, contributing to popular open-source projects, serving as a mentor, and giving back to stackoverflow.com. Follow him on Twitter @davidpine7.

Process Azure Analysis Services objects using a Logic App part 2

Jorg Klein is a Technology Consultant Data & Analytics and Microsoft Data Platform MVP working at Macaw in The Netherlands. He has many years of experience in the areas of business intelligence, data warehousing, and analytics. In recent years he has focused solely on the Microsoft Azure PaaS Data Platform and Power BI. Jorg has been blogging for more than 10 years at jorgklein.com. He likes to work together with the Microsoft product teams and regularly provides them with feedback as an Azure advisor. Follow him on Twitter via @jorg__klein.

Performance Degradation in West Europe – 07/20 – Mitigated


Final Update: Friday, July 20th, 2018 10:18 UTC

We’ve confirmed that all systems are back to normal as of July 20th, 2018 09:50 UTC. Our logs show the incident started on July 20th, 2018 08:50 UTC, and that during the 1 hour it took to mitigate the issue, customers may have observed intermittent slowness while using Visual Studio Team Services.

We observed high tempdb contention on one of the databases, which caused active requests to queue up and resulted in slowness for customers. We failed over the database, which mitigated the issue. We will continue to investigate the root cause. We apologize for any inconvenience this may have caused.

Sincerely,
Aman


Initial notification: Friday, July 20th 2018 09:30UTC

We are actively investigating performance issues with Visual Studio Team Services. Some customers may experience slower than usual performance while accessing their VS Team Services accounts.

  • Next Update: Before Friday, July 20th 2018 10:30UTC

Sincerely,
Kalpana


Adding existing projects to an existing solution hosted on GitHub


I have written a number of articles about configuring GitHub and deploying some code to it.  Here is a list of them, just in case you want to get some perspective and better understand where I am coming from.

As I was creating another repository on GitHub I was having a problem adding existing projects to my local Git solution and then getting them deployed with the correct title.  Here is my story.

As seen in Figure 1, I started by simply creating a repository on GitHub.


Figure 1, create a GitHub repository

That worked no problem.  I noticed that the Description in Figure 1 was also the title of the repository on the main page of my GitHub repository here and shown in Figure 2.


Figure 2, create a GitHub repository

I committed my initial solution to the local Git and entered a commit message as seen in Figure 3.


Figure 3, create a GitHub repository

There was another window that asked for another commit message, like Figure 9 here, so I am not sure where this message actually goes on GitHub, but it worked out that I got the title in the place I wanted it, Figure 4, without messing anything up that already existed.  Wait, nothing else existed at this point.  Just take my word for it, it didn’t mess anything up, I’m actually writing this after having already done this from start to finish.


Figure 4, create a GitHub repository

Here is where I ran into the problem.  When I added a new project to the solution I had deployed to GitHub, it was not added to my local Git repo, and so I couldn’t get it committed to my GitHub repo.  Here is the trick, real simple: copy the solution to the repo on your machine and add it from there.  I.e., copy the project/solution to the Local Path (see Figure 1) and add it from there.

The project must be in the repo that was created via Figure 1.  At least when I tried adding a project that was not in that directory, it was not added to the source code repository.  Only after I copied it into the sourcerepo directory did it work, and I could then deploy it out to GitHub.

Then, I created a ‘New Solution Folder’ and added the projects into that folder as needed, see Figure 5.


Figure 5, add existing project / solution to a GitHub repository

Then I added all the projects to the ‘New Solution Folder’, as seen in Figure 6.


Figure 6, add existing project / solution to a GitHub repository

I committed all my changes to the local Git, as seen in Figure 7.


Figure 7, add existing project / solution to a GitHub repository

Then I synced and pushed my changes to GitHub, as seen in Figure 9 and Figure 10 in this article, and in Figure 8 below.


Figure 8, add existing project / solution to a GitHub repository

Then the projects went to my GitHub with the titles I wanted and I was a happy camper, Figure 9.

*NOTE: I found that if there is only one project within a ‘New Solution Folder’, VS or GitHub changes the form of the rendering.  So I added a second dummy project just to make it look pretty.


Figure 9, add existing project / solution to a GitHub repository

This is cool stuff indeed.

I searched for reasons why my existing projects were not being added to my local Git or GitHub and couldn’t find ‘my reason’, so I’m writing this in hopes it will help someone else some day.  If it does, let me know.

Hosted agent assignment delays in West Europe – 07/20 – Investigating


Update: Friday, July 20th 2018 13:31 UTC

The delays in assigning hosted agents have also been noticed with the VS2017 machine pool in addition to the Linux machine pool. Our telemetry shows that the delay is caused by performance degradation while trying to re-image virtual machines that are part of the machine pool. We are currently working with our Azure partner team to understand the root cause of the performance degradation. In the meantime, in an effort to mitigate the incident, we are adding additional capacity to our machine pools in North Europe to handle the load.

  • Next Update: Before Friday, July 20th 2018 17:45 UTC

Sincerely,
Ladislau


Initial notification: Friday, July 20th 2018 12:30 UTC

We're investigating delays in assigning hosted agents from the Linux machine pool in West Europe. Users in West Europe may see increased delays in getting hosted agents assigned to their build request. We are working on mitigating the incident. Apologies for the inconvenience.

  • Next Update: Before Friday, July 20th 2018 13:05 UTC

Sincerely,
Ladislau

Azure API Management release notes – July 20, 2018


On July 20, 2018, we initiated a regular service update. We upgrade service instances in batches, and it usually takes about a week for the update to reach every active service instance.

The payload in this and the next few updates will be lighter than usual, as we are focusing on some internal refactoring.

Changes and fixes

  • In the "Select Logic App to import" list, we now filter out Logic Apps that don't have an HTTP trigger.
  • We fixed a bug in tag deduplication logic that was causing a 500 error when importing from OpenAPI.
  • We added two new regions in China - China East 2 and China North 2.

Advisory on July 2018 .NET Framework Updates


The July 2018 Security and Quality Rollup updates for .NET Framework were released earlier this month. We have received multiple customer reports of applications that fail to start or don't run correctly after installing the July 2018 update. These reports are specific to applications that initialize a COM component and run with restricted permissions. You can reach out to Microsoft Support to get help.

We have stopped distributing the .NET Framework July 2018 updates on Windows Update and are actively working on fixing and re-shipping this month's updates. If you installed the July 2018 update and have not yet seen any negative behavior, we recommend that you leave your systems as-is but closely monitor them and ensure that you apply upcoming .NET Framework updates.

As a team, we regret that this release shipped with this flaw. The release was tested using our regular and extensive testing process; while investigating this issue, we discovered a gap in our test coverage for the specific combination of COM activation and restricted permissions, including impersonation. We will be closing that gap going forward. Again, we are sorry for any inconvenience that this product flaw has caused.

We will continue to update this post and dotnet/announcement #74 as we have new information.

Technical Context

The .NET Framework runtime uses the process token to determine whether the process is running within an elevated context. These system calls can fail if the required process-inspection permissions are not present, which causes an "access denied" error.
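As an illustration only (this is not the framework's actual code), the failure mode resembles what happens when a restricted process tries to inspect its own token with the Win32 APIs:

#include <windows.h>
#include <stdio.h>

int main()
{
    HANDLE token = nullptr;
    // Under sufficiently restricted permissions, token queries like these
    // can fail with ERROR_ACCESS_DENIED (5), which surfaces to managed
    // code as E_ACCESSDENIED / System.UnauthorizedAccessException.
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token)) {
        printf("OpenProcessToken failed: %lu\n", GetLastError());
        return 1;
    }

    TOKEN_ELEVATION elevation = {};
    DWORD returned = 0;
    if (!GetTokenInformation(token, TokenElevation, &elevation,
                             sizeof(elevation), &returned)) {
        printf("GetTokenInformation failed: %lu\n", GetLastError());
    } else {
        printf("TokenIsElevated: %lu\n", elevation.TokenIsElevated);
    }

    CloseHandle(token);
    return 0;
}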

Workaround

Temporarily uninstall the July 2018 Security and Quality Rollup updates for .NET Framework to restore functionality until a new update has been released to correct this problem.

Symptoms

A COM component fails to load because of “access denied,” “class not registered,” or “internal failure occurred for unknown reasons” errors.

The most commonly reported failure results in the following error message:

Exception type: System.UnauthorizedAccessException
Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

SharePoint

When users browse to a SharePoint site they may see the following HTTP 403 message:

"The Web Site declined to show this webpage"

The SharePoint ULS Logs will contain a message like the following:

w3wp.exe (0x1894)         0x0B94  SharePoint Foundation  General 0000       High                UnauthorizedAccessException for the request. 403 Forbidden will be returned. Error=An error occurred creating the configuration section handler for system.serviceModel/extensions: Could not load file or assembly <AssemblySignature>  or one of its dependencies. Access is denied. (C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Config\machine.config line 180)

w3wp.exe (0x1894)         0x0B94  SharePoint Foundation  General b6p2      VerboseEx                Sending HTTP response 403:403 FORBIDDEN.      

w3wp.exe (0x1894)         0x0B94  SharePoint Foundation  General 8nca       Verbose                Application error when access /, Error=Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

When crawling a people content source, the request may fail with the following entry logged to the SharePoint ULS Log:

mssearch.exe (0x118C) 0x203C SharePoint Server Search Crawler:Gatherer Plugin cd11 Warning The start address sps3s://<URLtoSite> cannot be crawled.  Context: Application 'Search_Service_Application', Catalog 'Portal_Content'  Details:  Class not registered   (0x80040154)  

IIS-hosted Classic ASP pages calling CreateObject for .NET COM objects may receive the error "ActiveX component can't create object".

A .NET application that creates an instance of a .NET COM component within an impersonation context may receive the error "0x80040154 (REGDB_E_CLASSNOTREG)".

BizTalk Server Administration Console

BizTalk Server Administration Console fails to launch properly with the following errors:

An internal failure occurred for unknown reasons. (WinMgmt) 

Program Location:  

   at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo) 

   at System.Management.ManagementObject.Get() 

   at Microsoft.BizTalk.SnapIn.Framework.WmiProvider.SelectInstance

Use the following guidance as a workaround:

  • Add “NETWORK SERVICE” to the local Administrators group.

IIS with Classic ASP

IIS-hosted Classic ASP pages calling CreateObject for .NET COM objects may receive the following error: "ActiveX component can't create object". Use the following guidance as a workaround.

  • If your web site uses Anonymous Authentication, change the Web Site Anonymous Authentication credentials to use the "Application pool identity"
  • If your site uses Basic Authentication, log into the application once as the application pool identity and then create an instance of the .NET COM component. All subsequent activations for that .NET COM component should succeed, for any user.

.NET applications using COM and impersonation

.NET applications that create instances of a .NET COM component within an impersonation context may receive the following error: "0x80040154 (REGDB_E_CLASSNOTREG)". Use the following guidance as a workaround.

  • Create an instance of the .NET COM component prior to the impersonation context call. Later impersonated create instance calls should work as expected.
  • Run the .NET Application in the context of the impersonated user
  • Avoid using Impersonation when creating the .NET COM object

Computer Vision talks at ICML 2018


The 35th International Conference on Machine Learning (ICML) was held in Stockholm on July 10-15, 2018.  Links to the associated papers and video recordings (when they are posted) will be available on the website under each individual session. 


Overall, deep learning and reinforcement learning were still the hottest topics.  During each session slot, there was a reinforcement learning track and at least one deep learning (neural network architectures) track (and sometimes multiple deep learning tracks).  Here is the breakdown of submitted and accepted papers, which clearly shows the popularity of these topics. 

(Figure: breakdown of submitted and accepted papers by topic)

Talks on the fairness of machine learning algorithms continue to rise.  One of the "best paper" awards went to a paper on the delayed impact of fair machine learning.  This was an interesting paper comparing different methods of fairness (demographic parity, equality of opportunity, and unconstrained utility maximization) and introducing the "outcome curve", a tool for comparing the delayed impact of fairness criteria.  They showed that fairness criteria may harm the very groups they are intended to protect once you consider the long-term effects.


There was a decent amount of work on adversarial attacks.  The first keynote, on AI & Security, and the other "best paper" award were on this topic, as well as some tracks.  The "best paper" winner was "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples".  This paper examined the non-certified white-box-secure defenses against adversarial examples from ICLR 2018 and found that 7 of the 9 defenses relied on obfuscated gradients.  The authors developed 3 attack techniques which circumvented 6 defenses completely and 1 partially.  With this, they argued that future work should avoid relying on obfuscated gradients, and they also spoke about the importance of reevaluating others' published results.

Of particular interest to me this year were the Computer Vision talks.  Here is a quick summary of some of the innovation in Computer Vision.  There were two computer vision tracks, one on Wednesday, July 11 and one on Friday, July 13.

Deep Predictive Coding Network for Object Recognition (link to paper)
They described a bi-directional and recurrent neural net, namely deep predictive coding networks (PCN), that has feedforward, feedback, and recurrent connections. Feedback connections from a higher layer carry the prediction of its lower-layer representation; feedforward connections carry the prediction errors to its higher-layer. Given image input, PCN runs recursive cycles of bottom-up and top-down computation to update its internal representations and reduce the difference between bottom-up input and top-down prediction at every layer. After multiple cycles of recursive updating, the representation is used for image classification. With benchmark datasets (CIFAR-10/100, SVHN, and MNIST), PCN was found to always outperform its feedforward-only counterpart: a model without any mechanism for recurrent dynamics, and its performance tended to improve given more cycles of computation over time. In short, PCN reuses a single architecture to recursively run bottom-up and top-down processes to refine its representation towards more accurate and definitive object recognition. 

Gradually Updated Neural Networks for Large-Scale Image Recognition (link to paper)
Neural networks keep getting deeper, traditionally by cascading convolutional layers or building blocks.  They present a new way to increase the depth: adding computational orderings to the channels within convolutional layers or blocks.  This not only increases the depth and learning capacity at the same computational cost and memory footprint, but also eliminates overlap singularities, resulting in faster convergence and better performance.  They call the approach GUNN, after the paper title.

Neural Inverse Rendering for General Reflectance Photometric Stereo (link to paper)
Photometric stereo is the problem of recovering 3D object surface normals from multiple images observed under varying illuminations.  They propose a physics-based unsupervised learning approach to general BRDF photometric stereo where surface normals and BRDFs are predicted by the network and fed into the rendering equation to synthesize observed images.  This learning process doesn’t require ground truth normals; using physics can bypass the lack of training data. SOTA results outperformed a supervised DNN and other classical unsupervised methods. 

One-Shot Segmentation in Clutter (link to paper)
This was an interesting look at visual search.  They cited “Where’s Waldo?” as a fun example of solving a problem with only one example.  🙂  They tackled the problem of one-shot segmentation: finding and segmenting a previously unseen object in a cluttered scene based on a single instruction example.  The MNIST of one-shot learning is the omniglot dataset, and they proposed a novel dataset called “cluttered omniglot” which used all characters but dropped them on top of each other in different colors. Using an architecture combining a Siamese embedding for detection with a U-net for segmentation, they show that increasing levels of clutter make the task progressively harder.  In this kind of visual search task, detection and segmentation are two intertwined problems, the solution to each of which helps solving the other.  They tried a pre-segmenting characters approach.  After segmenting using color, performance got very good.  They introduced MaskNet, an improved model that attends to multiple candidate locations, generates segmentation proposals to mask out background clutter, and selects among the segmented objects (segment first, decide later).  Such image recognition models based on an iterative refinement of object detection and foreground segmentation may provide a way to deal with highly cluttered scenes.  http://github.com/michaelisc/cluttered-omniglot

Active Testing: An Efficient and Robust Framework for Estimating Accuracy (link to paper)
Supervised learning is hungry for annotated data, and there are many approaches for dealing with the lack of labelled data in training (unsupervised, semi-supervised, etc.).  The usual gold standard is to assemble a small, high-quality test dataset: given a fixed budget, annotate "all we can afford".  Their approach instead trades annotation accuracy for more examples.  They reformulate the problem as one of active testing, and examine strategies for efficiently querying a user so as to obtain an accurate performance estimate with minimal vetting.  They demonstrate the effectiveness of their proposed active testing framework on estimating two performance metrics, Precision@K and mean Average Precision, for two popular Computer Vision tasks, multilabel classification and instance segmentation, respectively.  They further show that their approach significantly reduces human annotation effort and is more robust than alternative evaluation protocols.

Noise2Noise: Learning Image Restoration without Clean Data (link to paper)
This is interesting for those working with grainy images: they train a denoiser.  They apply basic statistical reasoning to signal reconstruction by machine learning (learning to map corrupted observations to clean signals) with a simple and powerful conclusion: it is possible to learn to restore images by looking only at corrupted examples, at performance matching and sometimes exceeding training with clean data, without explicit image priors or likelihood models of the corruption.  In practice, they show that a single model learns photographic noise removal, denoising of synthetic Monte Carlo images, and reconstruction of undersampled MRI scans, all corrupted by different processes, based on noisy data only.

Solving Partial Assignment Problems using Random Clique Complexes (link to paper)
This is an interesting paper to read if you need to perform a matching task, finding the same image (like a building) with occlusion, rotation, etc.  They present an alternate formulation of the partial assignment problem as matching random clique complexes, that are higher-order analogues of random graphs, designed to provide a set of invariants that better detect higher-order structure. The proposed method creates random clique adjacency matrices for each k-skeleton of the random clique complexes and matches them, taking into account each point as the affine combination of its geometric neighborhood.  They justify their solution theoretically, by analyzing the runtime and storage complexity of their algorithm along with the asymptotic behavior of the quadratic assignment problem (QAP) that is associated with the underlying random clique adjacency matrices. 

Generalized Earley Parser: Bridging Symbolic Grammars and Sequence Data (link to paper)
Future predictions on sequence data (e.g., videos or audios) require the algorithms to capture non-Markovian and compositional properties of high-level semantics. Context-free grammars are natural choices to capture such properties, but traditional grammar parsers (e.g., Earley parser) only take symbolic sentences as inputs. This paper generalizes the Earley parser to parse sequence data which is neither segmented nor labeled. This generalized Earley parser integrates a grammar parser with a classifier to find the optimal segmentation and labels, and makes top-down future predictions. Experiments show that this method significantly outperforms other approaches for future human activity prediction.

Neural Program Synthesis from Diverse Demonstration Videos (link to paper)
Interpreting decision making logic in demonstration videos is key to collaborating with and mimicking humans.  For example, learning how to make fried rice from watching a bunch of YouTube videos; humans understand variations like brown or white rice, etc.  To empower machines with this ability, they propose a neural program synthesizer that is able to explicitly synthesize underlying programs from behaviorally diverse and visually complicated demonstration videos.  Their model uses 3 steps: extract unique behaviors (using CNNs feeding into an LSTM), summarize (compare demo pairs to infer branching conditions, using multi-layer perceptron, to improve the network’s ability to integrate multiple demonstrations varying in behavior), and decode.  They also employ a multi-task objective to encourage the model to learn meaningful intermediate representations for end-to-end training.  They show that their model is able to reliably synthesize underlying programs as well as capture diverse behaviors exhibited in demonstrations.  Performance got better with the number of input videos.  The code is available at https://shaohua0116.github.io/demo2program.

Video Prediction with Appearance and Motion Conditions (link to paper)
Video prediction aims to generate realistic future frames by learning dynamic visual patterns. One fundamental challenge is to deal with future uncertainty: How should a model behave when there are multiple correct, equally probable futures? They propose an Appearance-Motion Conditional GAN to address this challenge. They provide appearance and motion information as conditions that specify how the future may look, reducing the level of uncertainty. Their model consists of a generator, two discriminators taking charge of appearance and motion pathways, and a perceptual ranking module that encourages videos of similar conditions to look similar. To train their model, they developed a novel conditioning scheme that consists of different combinations of appearance and motion conditions. They evaluate their model using facial expression and human action datasets – transforming input faces into different emotions with motion/video (generative videos).  They showed one interesting bug: Trump’s eyebrows turned black because no training data had white/blond eyebrows.  You can see this at http://vision.snu.ac.kr/projects/amc-gan, and the code is coming soon to https://github.com/YunseokJANG/amc-gan
