
[Upcoming changes] Exchange Web Services API for Office 365


Exchange Web Services (EWS) was launched as a part of Microsoft Exchange 2007 as a SOAP-based API that allows access to Exchange and Exchange Online data. Starting July 3, 2018, Exchange Web Services (EWS) will no longer receive feature updates. While the service will continue to receive security updates and certain non-security updates, product design and features will remain unchanged. This change also applies to the EWS SDKs for Java and .NET. While we are no longer actively investing in it, EWS is still available and supported for use in production environments.

As we make progress on this journey, we have continued to evaluate the role of Exchange Web Services (EWS). We are also sharing our plans to move away from Basic Authentication access for EWS over the next two years, with support ending October 13, 2020. These plans apply only to the cloud-based Office 365/Exchange Online products; there are no changes to EWS capabilities of on-premises Exchange products.

Here are the related documents:

In this scenario, we strongly suggest migrating to Microsoft Graph to access Exchange Online data and gain access to the latest features and functionality.
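As a rough sketch of that direction (not part of the original announcement), the snippet below reads the signed-in user's messages through the Microsoft Graph REST endpoint; the /v1.0/me/messages route is the documented Graph endpoint, while the token acquisition and class name are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class GraphMailSample
{
    static async Task Main()
    {
        // Acquire an OAuth 2.0 access token for https://graph.microsoft.com first
        // (for example via MSAL); the placeholder below is illustrative only.
        string accessToken = "<access-token>";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Rough equivalent of an EWS FindItem call against the signed-in user's mailbox.
            var response = await client.GetAsync(
                "https://graph.microsoft.com/v1.0/me/messages?$top=10&$select=subject,from");
            response.EnsureSuccessStatusCode();

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}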

Hope this helps.


Experiencing issues while registering the Microsoft.Insights Resource Provider – 07/17 – Resolved

Final Update: Tuesday, 17 July 2018 06:42 UTC

We've confirmed that all systems are back to normal with no customer impact as of 07/17, 05:53 UTC. Our logs show the incident started on 07/13, 02:29 UTC, and it took 4 days 3 hours 23 minutes to resolve the issue. During this time, customers were not able to register the Microsoft.Insights Resource Provider properly, which prevented the creation of storage accounts and event hubs that internally use Insights; around 81 subscriptions were impacted.
  • Root Cause: The failure was due to an issue in one of our dependent platform services.
  • Incident Timeline:  4 days 3 hrs & 23  minutes - 07/13, 02:29 UTC through 07/17, 05:53 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Abhijeet


How to find apiVersion used for REST API or ARM templates


I wrote a few articles that discuss how and why you would need to know the api-version when calling Azure REST APIs or making a deployment with an Azure Resource Manager (ARM) template.

If you are getting errors, you might want to use a more current or a different API version to see if it then works, but how do you find them?  Look no further.  There are 2 approaches I recommend:

PowerShell

I wrote this article “How to set Azure PowerShell to a specific Azure Subscription” that explains how to log in and set your Azure subscription using Azure PowerShell, so I will not cover that again.

To get a list of available providers, execute the following cmdlet after logging in.

Get-AzureRmResourceProvider

You should see output similar to the following, Figure 1.


Figure 1, Azure PowerShell finding resource providers

Then, find the ProviderNamespace for which you want the apiVersion and execute the following cmdlet; assume we want the apiVersion for Microsoft.Web.

((Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web).ResourceTypes |
    Where-Object ResourceTypeName -eq sites).ApiVersions

And you should see the output similar to Figure 2.


Figure 2, Azure PowerShell finding resource providers and api versions

Then you can try the newest or different one.
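If you prefer to query the Resource Manager REST API directly instead of PowerShell, the providers endpoint returns the same resource types and apiVersions. The sketch below is illustrative only: the providers route is the documented ARM endpoint, while the subscription id, access token, and the api-version used for the lookup call itself are placeholders you would substitute.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ApiVersionLookup
{
    static async Task Main()
    {
        string subscriptionId = "<subscription-id>";   // placeholder
        string accessToken = "<arm-access-token>";     // placeholder, e.g. acquired via Azure AD
        string apiVersion = "2019-10-01";              // assumption: any recent api-version for the providers endpoint

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Returns the Microsoft.Web provider, including each resource type
            // and its apiVersions array.
            string url = "https://management.azure.com/subscriptions/" + subscriptionId +
                         "/providers/Microsoft.Web?api-version=" + apiVersion;

            Console.WriteLine(await client.GetStringAsync(url));
        }
    }
}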

Resource Explorer

This is a cool tool –> https://resources.azure.com

Select the AAD tenant, then the subscription, then providers.  Search for your desired provider and you will see the same values; see Figure 3.


Figure 3, use Resource Explorer or Azure PowerShell to find resource providers and api versions

Building "AI capabilities" with the Azure Machine Learning Packages (Overview)


Microsoft Cloud Solution Architect

阪本 真悟

The Azure Machine Learning Packages introduced today are handy tools, offered as part of "Azure AI Services", that you can use during development to implement the three scenarios with high enterprise demand: image recognition, text analytics, and forecasting. What makes them convenient? What functionality do they provide for each of the three scenarios, and what can they do? I will explain these points in order.

Azure AI Services overview

Before that, let me first give an overview of our "Azure AI Services".

First, Azure Cognitive Services

We position this as the Pre-Built AI solution. The "Custom Vision Service" introduced in a recent blog post is also one of the Azure Cognitive Services APIs. It provides pre-trained, highly accurate recognition and cognition capabilities as general-purpose APIs.

Second, Azure Bot Service

We position this as the Conversational AI solution. Because Azure Cognitive Services are APIs, they can be called from programming languages such as C# or Java, but for people to use them, the Bot Service needs to act as an intermediary and handle the human-facing interface. It makes it possible to offer services through natural interactions across a variety of channels.

Third, Azure Machine Learning

We position this as the Custom AI solution. Cognitive Services and Bot Service allow a certain degree of customization, so this naming may be a little confusing, but with Azure Machine Learning you can build a machine learning model that meets your needs from scratch. The model you build can be deployed to Azure Machine Learning Services, or deployed to an on-premises Docker container and called in an environment with no cloud connection. And the Azure Machine Learning Packages introduced today are handy tools that support building AI capabilities as part of this third solution, Azure Machine Learning.

Azure Machine Learning Packages overview

These are Python packages that you can use to implement the three scenarios with high enterprise demand for applying machine learning in business: image recognition, text analytics, and forecasting. Functions are provided for data enrichment, parameter tuning, model evaluation and selection, deployment, and more, which makes coding easier.

Why we recommend the Azure Machine Learning Packages

What makes them convenient compared with other tools? Commonly used functionality is provided for specific, high-demand scenarios, and sample code is published on GitHub. Users can first run the sample code to understand the overall flow. By customizing it as needed, you should be able to build high-quality models relatively easily and far more quickly. Compared with other frameworks, a simpler implementation is possible.

The three scenarios covered by the Azure Machine Learning Packages

Here is the functionality provided for the three scenarios: image recognition, text analytics, and forecasting.

1. AML Package for Vision (image recognition)

• Image Classification
• Object Detection
• Image Similarity

Features included in the API:

• Dataset Creation
• Data Augmentation
• Modeling and Training
• Model Evaluation
• Deployment

2. AML Package for Forecasting (forecasting)

• Financial Forecasting
• Demand Forecasting

Features included in the API:

• Time Series Data Preparation
• Exploratory Report
• Featurization
• Modeling and Time Series Cross-Validation
• Model Evaluation and Selection
• Model Deployment

3. AML Package for Text Analytics

• Text Classification
• Custom Entity Extraction
• Word Embedding (word feature vectorization)

Features included in the API:

• Feature Engineering
• Text Preprocessing

Next time, in the hands-on article, we will build an actual custom model using the Computer Vision package from among these three scenarios.

Bot Framework V4: What I learnt in 4 days in July 2018


I was recently involved in a short, 4-day customer hack based on Microsoft Bot Framework V4 (C# SDK), Azure Bot Service and Language Understanding Intelligence Service (LUIS, part of Cognitive Services).

It was a frustrating yet insightful 4 days, and I have some observations which might be helpful to anyone else who is thinking about Bot Framework V4 (BFv4): what state it is in right now and whether it is worth seriously looking at yet.

This article is intended as a point-in-time brain dump based on my limited exposure of the new framework and probably only has a shelf life of a few months, but if you are considering a BFv4 bot now, this could be useful.

BFv4 is currently in public preview which was announced at the Build conference back in May 2018. We do not have any dates for the final release, you can read the initial announcement here; the blog.botframework.com blog is a good one to watch for announcements on dates etc.

It is worth noting that I'm writing from the perspective of someone who has done a fair few bots using the V3 framework, almost exclusively with C#. I’m also reasonably well experienced with both Azure Bot Service and LUIS for use with V3 bots. That said, as you’ll see when you read on, V3 experience is of limited value for BFv4; in some ways I think it even made BFv4 harder to learn.

Headlines

I've certainly seen the headlines about BFv4 and seen a few conference videos, but had never got 'hands on' before with any preview version. Here are the main headline changes I was aware of before my 4-day project:

  • Re-write. BFv4 is a complete re-write of the framework with new concepts, terminology, documentation, architecture etc
  • More Languages. BFv4 SDK is available for JavaScript, C#, Python and Java.
  • Open Source. BFv4 is open source and always has been. You can see the GitHub repositories here
  • Overall service architecture remains the same. The concept of the bot as a single code base which is published to multiple channels (Skype, Cortana, Facebook etc) remains the same. The Azure Bot Service idea also remains the same.
  • .net Core 2. For the .netters amongst you, you’ll be glad to learn that the C# BFv4 SDK is built on .net core 2.0 which means that you get to use all those cool .net core features like middleware and DI. See the open source .net core SDK here

Key Learnings

These are just some of the key learnings about BFv4. They are things I had not already gleaned before starting my 4-day project and things I may have spent slightly longer than I'd have expected figuring out.

Terminology & Concepts

There are several new concepts in BFv4 which bring new terminology with them. The concepts are covered in some detail in the docs, but here are some of the key new terms you'll hear and need to understand to build BFv4 bots.

  • Adapter: The Adapter is like the orchestration engine for the bot and is responsible for directing incoming and outgoing communication, authentication, and so on. When your bot receives an activity, the adapter wraps up everything about that activity, creates a TurnContext object, passes it to your bot's application logic, and sends responses generated by your bot back to the user's channel. We don't typically work directly with the adapter. Read about activity processing and the adapter here
  • Middleware: Middleware is a pipeline which sits between the adapter and the bot code. The pipeline can contain multiple middleware components and many of the built in capabilities are represented as middleware such as state. Read more about middleware here
  • Turn: A Turn is the action of the bot receiving an activity (i.e. a message from the user), and subsequently processing it, normally involving the bot replying back to the user and awaiting further input. A Turn carries a TurnContext object which contains useful information such as Conversation, Activity, Intent, State and other information.
  • Dialogs and Conversation flow: The way conversation flows through the bot has changed significantly compared to BFv3. Key docs include Manage conversation flow with dialogs and Create modular bot logic with a dialog container. These are some of the key concepts:
    • Dialog: A Dialog is a little different to BFv3. In BFv4, a Dialog is used for very simple, single turn interactions. For example if you ask the bot "what is 2 + 2", it would reply "4" and that would be the end of the dialog. A Dialog can receive data either via arguments passed in from the OnTurn function or via state. Dialogs cannot contain child dialogs so cannot be used on their own for complex branched conversations, that is where the DialogContainer is used as it contains a collection of Dialogs.
    • Prompt: A Prompt is a type of built-in dialog intended to capture and verify specific pre-defined data from the user such as text, numbers, dates, confirmation or choices. Conceptually this is the same as BFv3.
    • DialogContainer: A DialogContainer is a collection of Dialogs or Prompts which are executed sequentially as WaterfallSteps. DialogContainers can and do contain child dialogs and are the logical equivalent of the Dialog in BFv3.
    • DialogSet: A DialogSet is a collection which can contain child Dialogs, Prompts or DialogContainers. DialogSets are generally used to manage the top-level menu for your bot and then branch out to different DialogContainers for different branches of the conversation. This is often known as a "root dialog", but should more accurately be described as a "root dialog set".
    • WaterfallStep: A WaterfallStep can be thought of as a granular step in the conversation, either prompting the user for an utterance or processing what the user said.
  • State: State is conceptually similar to BFv3 in that it stores data relating to either the conversation or the user. State is a middleware component. Read about Managing conversation state here.

Docs

The docs are actually in quite decent shape at the time of writing. The main, high level conceptual stuff is documented fairly well.

As is usual with a preview technology, the low-level code samples are fairly out of date compared to what you can find on GitHub (more on that next).

The V4 docs are available here: https://docs.microsoft.com/en-us/azure/bot-service/?view=azure-bot-service-4.0

If you try to search for them, you may end up on the V3 docs, but if you hit the 'Azure Bot Service' drop-down in the top left corner and select "SDK V4.x (preview)", it will filter on the BFv4 docs. Rather embarrassingly, this took me several hours to figure out!

Samples

As is usual with open source technologies, the samples are maintained on GitHub first and then updated into the docs at a later stage, so I'd always use GitHub as your 'go to' location for code samples and examples.

Being an open source SDK, you can track progress of the SDK itself here: https://github.com/Microsoft/botbuilder-dotnet

You'll find a selection of fairly good samples in the samples-final folder of this repository which cover how to do the most common tasks.

However, the best samples I found for .net were on a specific contosocafe-v4-dotnet branch of the general BotFramework-Samples repository. The samples are focused on a Contoso Cafe scenario and seem to be very fresh (last updated 29th June 2018): https://github.com/Microsoft/BotFramework-Samples/tree/contosocafe-v4-dotnet/docs-samples/V4/dotnet/ContosoCafe/ContosoCafe-5-DialogsWithLUISEntities/ContosoCafe

Azure Bot Service

A bot is essentially a web API and so in theory, it can be hosted on any web service. However, it seems very clear to me that the intention is that Bot Framework bots are hosted as part of the Azure Bot Service.

My experience with the Azure Bot Service in both BFv3 and BFv4 has been good and I struggle to think of a reason not to use the service for hosting as it has many advantages, including:

  • Easy channel publication
  • Azure build, deploy and continuous integration capabilities
  • Templates with popular Cognitive Service integrations
  • Speech priming
  • Analytics

That said, the option is yours but my advice would be that if you are going to 'stray from the beaten path' and host elsewhere, you may find that many of the docs and samples become harder to follow. Certainly as you are learning BFv4, you may find it easier to stick with the Azure Bot Service.

Luis Middleware & Strongly Typed Class

As with BFv3, almost every bot will need some level of natural language processing which is where Cognitive Services Language Understanding Service (LUIS) comes in. LUIS extracts intent and entities from a user's utterance and makes that data available to the bot to inform the application logic.

Intent is normally used to define the top-level branch of the conversation; once the top-level intent is known, this information is not required again.

Entities are not always required, but can be useful to capture data points from the user via natural language rather than using Dialogs and Prompts.

In BFv4, there is a middleware component called LuisRecognizerMiddleware. You can read about how to use it at Using LUIS for Language Understanding.

In my experience, I found a few conceptual issues with the LuisRecognizerMiddleware:

  1. Luis is generally used at the top of the conversation; once you have the intent and entities, Luis is no longer required. However, middleware is executed on every Turn, which will become expensive and wasteful in terms of bandwidth, cost and latency for most scenarios.
  2. The LuisRecognizerMiddleware does not provide a strongly typed object to work with. You can still get to the same data but it is presented as a very deep collection of Dictionary objects.

An alternative approach is to use a LuisRecognizer as and when it is required in your bot's root DialogSet to gather the top-level intent and entities, and then branch out to DialogContainers from there, thus reducing the use of Luis.

The other main advantage of the LuisRecognizer class is that you can generate a strongly typed class based on your actual Luis model using an NPM tool called LuisGen.

The Extract intents and entities using LUISGen docs give you a high level overview of this approach, however at the time of writing, the samples in the docs were incomplete and difficult to follow.

For a complete example of the LuisRecognizer in action, please see my Bot-V4-Banko example on GitHub. This is based on a fictitious bank which enables balance checks and money transfer via their bot. This is also a good example of multiple DialogContainers in use.

In Summary

If you are a BFv3 developer, be prepared to discard a lot of your knowledge, samples and experience for BFv4 as it really is a big change in terms of terminology, capabilities and overall architecture.

However, as with all platform re-writes of this nature, BFv4 is a very good developer platform and once the initial learning curve has been overcome, I think it is a much easier and overall better platform for writing bots compared to BFv3.

I'll attempt to keep my Bot-V4-Banko example up to date with the latest patterns and I'll also re-visit this article from time-to-time as my understanding develops and patterns and best practices emerge.

This article is just what I learnt in 4 days, your mileage may vary!

Performance implications of default struct equality in C#


If you're familiar with C#, then you have most likely heard that you should always override Equals and GetHashCode for custom structs for performance reasons. To better understand the importance and the rationale behind this advice, we're going to look at the default behavior to see why and where the performance hit comes from. Then we'll look at a performance bug that occurred in my project, and at the end we'll discuss some tools that can help to avoid the issue altogether.

How important is the issue?

Not every potential performance issue affects the end-to-end time of your application. Enum.HasFlag is not very efficient (*), but unless it is used on a very hot path, it would not cause a severe issue for your product. The same is true for defensive copies caused by non-readonly structs in readonly contexts. The issues are real, but they're unlikely to be visible in regular applications.

(*) The implementation was fixed in .NET Core 2.1 and as I've mentioned in the previous post, now you can mitigate the issue with a custom HasFlag implementation for the older runtimes.
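For reference, a minimal sketch of such a custom check might look like the following (the enum here is purely illustrative, not from the original post):

using System;

[Flags]
public enum MyOptions   // illustrative enum, not from the original post
{
    None = 0,
    Compressed = 1,
    Encrypted = 2
}

public static class MyOptionsExtensions
{
    // Non-generic check equivalent to Enum.HasFlag, avoiding the boxing that
    // the reflection-based Enum.HasFlag incurs on older runtimes.
    public static bool HasFlagFast(this MyOptions value, MyOptions flag) => (value & flag) == flag;
}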

But the issue we're talking about today is different. If a struct does not provide Equals and GetHashCode, then the default versions of these methods from System.ValueType are used, and those versions could easily affect an application's end-to-end performance in a very significant way.

Why are the default implementations so slow?

The CLR authors tried their best to make the default implementations of Equals and GetHashCode for value types as efficient as possible. But there are a couple of reasons why they won't be as efficient as a custom version written by hand (or generated by a compiler) for a specific type.

  1. Boxing allocation. The way the CLR is designed, every call to a member defined in the System.ValueType or System.Enum types causes a boxing allocation (**).

(**) Unless the method is a JIT intrinsic. For instance, in Core CLR 2.1 the JIT compiler knows about Enum.HasFlag and emits a very optimal code that causes no boxing allocations.

  2. Potential collisions of the default GetHashCode implementation. An implementer of a hash function faces a dilemma: make the hash function well distributed or make it fast. In some cases, it's possible to achieve both, but it is hard to do this generically in ValueType.GetHashCode.

The canonical hash function of a struct "combines" the hash codes of all the fields. But the only way to get the hash code of a field in a ValueType method is to use reflection. So, the CLR authors decided to trade distribution for speed, and the default GetHashCode version just returns the hash code of the first non-null field and "munges" it with a type id (***) (for more details see RegularGetValueTypeHashCode in the coreclr repo on GitHub).

(***) Based on the comment in the CoreCLR repo, this behavior may change in the future.

public readonly struct Location
{
    public string Path { get; }
    public int Position { get; }

    public Location(string path, int position) => (Path, Position) = (path, position);
}


var hash1 = new Location(path: "", position: 42).GetHashCode();
var hash2 = new Location(path: "", position: 1).GetHashCode();
var hash3 = new Location(path: "1", position: 42).GetHashCode();
// hash1 and hash2 are the same and hash1 is different from hash3

This is a reasonable behavior unless it's not. For instance, if you're unlucky enough and the first field of your struct has the same value for most instances, then a hash function will provide the same result all the time. And, as you may imagine, this will cause a drastic performance impact if these instances are stored in a hash set or a hash table.

  3. Reflection-based implementation is slow. Very slow. Reflection is a powerful tool when used correctly. But it is horrible if it's used on an application's hot path.

Let's see how a poor hash function that you can get because of (2) and the reflection-based implementation affects the performance:

public readonly struct Location1
{
    public string Path { get; }
    public int Position { get; }

    public Location1(string path, int position) => (Path, Position) = (path, position);
}

public readonly struct Location2
{
    // The order matters!
    // The default GetHashCode version will get a hashcode of the first field
    public int Position { get; }
    public string Path { get; }

    public Location2(string path, int position) => (Path, Position) = (path, position);
}

public readonly struct Location3 : IEquatable<Location3>
{
    public string Path { get; }
    public int Position { get; }

    public Location3(string path, int position) => (Path, Position) = (path, position);

    public override int GetHashCode() => (Path, Position).GetHashCode();
    public override bool Equals(object other) => other is Location3 l && Equals(l);
    public bool Equals(Location3 other) => Path == other.Path && Position == other.Position;
}

private HashSet<Location1> _locations1;
private HashSet<Location2> _locations2;
private HashSet<Location3> _locations3;

[Params(1, 10, 1000)]
public int NumberOfElements { get; set; }

[GlobalSetup]
public void Init()
{
    _locations1 = new HashSet<Location1>(Enumerable.Range(1, NumberOfElements).Select(n => new Location1("", n)));
    _locations2 = new HashSet<Location2>(Enumerable.Range(1, NumberOfElements).Select(n => new Location2("", n)));
    _locations3 = new HashSet<Location3>(Enumerable.Range(1, NumberOfElements).Select(n => new Location3("", n)));
}

[Benchmark]
public bool Path_Position_DefaultEquality()
{
    var first = new Location1("", 0);
    return _locations1.Contains(first);
}

[Benchmark]
public bool Position_Path_DefaultEquality()
{
    var first = new Location2("", 0);
    return _locations2.Contains(first);
}

[Benchmark]
public bool Path_Position_OverridenEquality()
{
    var first = new Location3("", 0);
    return _locations3.Contains(first);
}

 

| Method                          | NumOfElements | Mean          | Gen 0   | Allocated |
|-------------------------------- |-------------- |--------------:|--------:|----------:|
| Path_Position_DefaultEquality   | 1             | 885.63 ns     | 0.0286  | 92 B      |
| Position_Path_DefaultEquality   | 1             | 127.80 ns     | 0.0050  | 16 B      |
| Path_Position_OverridenEquality | 1             | 47.99 ns      | -       | 0 B       |
| Path_Position_DefaultEquality   | 10            | 6,214.02 ns   | 0.2441  | 776 B     |
| Position_Path_DefaultEquality   | 10            | 130.04 ns     | 0.0050  | 16 B      |
| Path_Position_OverridenEquality | 10            | 47.67 ns      | -       | 0 B       |
| Path_Position_DefaultEquality   | 1000          | 589,014.52 ns | 23.4375 | 76025 B   |
| Position_Path_DefaultEquality   | 1000          | 133.74 ns     | 0.0050  | 16 B      |
| Path_Position_OverridenEquality | 1000          | 48.51 ns      | -       | 0 B       |

If the first field is always the same, the default hash function returns the same value for all the elements. This effectively transforms a hash set into a linked list with O(N) insertion and lookup operations. And the operation that populates the collection becomes O(N^2) (N insertions with O(N) complexity per insertion). It means that inserting 1000 elements into a set will cause almost 500_000 calls to ValueType.Equals, a method that uses reflection under the hood!
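As a quick sanity check on that figure: with every element hashing to the same bucket, insertion number i has to be compared against the i elements already stored, so inserting N = 1000 elements costs roughly 1 + 2 + ... + 999 = N(N-1)/2 = 499,500 equality calls, which matches the "almost 500_000" quoted above.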

And as you can see from the benchmark, the performance may be tolerable if you're lucky and the first field of the struct is unique (the Position_Path_DefaultEquality case). But the performance may be horrible if this is not the case.

Real-world issue

Now you can guess what kind of issue I've faced recently. A couple of weeks ago, I received a bug report that the end-to-end time for an app I'm working on increased from 10 seconds to 60. Luckily the report was very detailed and contained an ETW trace that immediately showed the bottleneck: ValueType.Equals was taking 50 seconds.

After a very quick look at the code, it was clear what the problem was:

private readonly HashSet<(ErrorLocation, int)> _locationsWithHitCount;

readonly struct ErrorLocation
{
    // Empty almost all the time
    public string OptionalDescription { get; }
    public string Path { get; }
    public int Position { get; }
}

We used a tuple that contained a custom struct with the default equality implementation. And unfortunately, the struct had an optional first field that was almost always equal to string.Empty. The performance was OK until the number of elements in the set increased significantly, causing a real performance issue: it took minutes to initialize a collection with tens of thousands of items.
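A minimal sketch of a fix along the same lines as Location3 above is shown below; the property names come from the snippet above, while the constructor and hand-written equality members are illustrative:

readonly struct ErrorLocation : IEquatable<ErrorLocation>
{
    // Empty almost all the time, so it must not be the only input to GetHashCode
    public string OptionalDescription { get; }
    public string Path { get; }
    public int Position { get; }

    public ErrorLocation(string optionalDescription, string path, int position) =>
        (OptionalDescription, Path, Position) = (optionalDescription, path, position);

    public bool Equals(ErrorLocation other) =>
        OptionalDescription == other.OptionalDescription &&
        Path == other.Path &&
        Position == other.Position;

    public override bool Equals(object obj) => obj is ErrorLocation other && Equals(other);

    // Combine all the fields so that instances differing only in Path or Position no longer collide.
    public override int GetHashCode() => (OptionalDescription, Path, Position).GetHashCode();
}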

Is the default implementation of ValueType.Equals/GetHashCode always slow?

Both ValueType.Equals and ValueType.GetHashCode have a special optimization. If a type does not have "pointers" and is properly packed (I'll show an example in a minute), more optimal versions are used: GetHashCode iterates over the instance and XORs blocks of 4 bytes, and the Equals method compares two instances using memcmp.

// Optimized ValueType.GetHashCode implementation
static INT32 FastGetValueTypeHashCodeHelper(MethodTable *mt, void *pObjRef)
{
    INT32 hashCode = 0;
    INT32 *pObj = (INT32*)pObjRef;

    // this is a struct with no refs and no "strange" offsets, just go through the obj and xor the bits
    INT32 size = mt->GetNumInstanceFieldBytes();
    for (INT32 i = 0; i < (INT32)(size / sizeof(INT32)); i++)
        hashCode ^= *pObj++;

    return hashCode;
}

// Optimized ValueType.Equals implementation
FCIMPL2(FC_BOOL_RET, ValueTypeHelper::FastEqualsCheck, Object* obj1, Object* obj2)
{
    TypeHandle pTh = obj1->GetTypeHandle();

    FC_RETURN_BOOL(memcmp(obj1->GetData(), obj2->GetData(), pTh.GetSize()) == 0);
}

The check itself is implemented in ValueTypeHelper::CanCompareBits, and it is called from both the ValueType.Equals and ValueType.GetHashCode implementations.

But the optimization is very tricky.

First, it is hard to know when the optimization is enabled and even minor changes in the code can turn it "on" and "off":

public struct Case1
{
    // Optimization is "on", because the struct is properly "packed"
    public int X { get; }
    public byte Y { get; }
}

public struct Case2
{
    // Optimization is "off", because the struct has padding between byte and int
    public byte Y { get; }
    public int X { get; }
}

For more information about memory layout see my blogpost "Managed object internals, Part 4. Fields layout".

Second, a memory comparison will not necessarily give you the right results. Here is a simple example:

public struct MyDouble
{
    public double Value { get; }

    public MyDouble(double value) => Value = value;
}

double d1 = -0.0;
double d2 = +0.0;

// True
bool b1 = d1.Equals(d2);

// False!
bool b2 = new MyDouble(d1).Equals(new MyDouble(d2));

-0.0 and +0.0 are equal but have different binary representations. This means that Double.Equals returns true, but MyDouble.Equals returns false. For most cases the difference is non-substantial, but just imagine how many hours you may potentially spend trying to fix an issue caused by this difference.

And how can we avoid the issue in the future?

You may wonder how the issue mentioned above can appear in the real world. One obvious way to enforce Equals and GetHashCode for structs is to use the FxCop rule CA1815. But there is an issue with this approach: it is a bit too strict.

A performance-critical application may have hundreds of structs that are not necessarily designed to be used in hash sets or dictionaries. This may cause app developers to suppress the rule, which can backfire once a struct's use case changes.

A more appropriate approach is to warn a developer when an "inappropriate" struct with default equality members (defined in the app or in a third-party library) is stored in a hash set. Of course, I'm talking about ErrorProne.NET and the rule that I added there once I faced this issue.


The ErrorProne.NET version is not perfect and will "blame" valid code if a custom equality comparer is provided in a constructor.


But I think it is still valid to warn when a struct with default equality members is used rather than when it is declared. For instance, when I tested my rule I realized that System.Collections.Generic.KeyValuePair<TKey, TValue>, defined in mscorlib, does not override Equals and GetHashCode. Today it is unlikely that someone will define a variable of type HashSet<KeyValuePair<string, int>>, but my point is that even the BCL can violate the rule, and it is useful to catch this before it is too late.

Conclusion

  • The default equality implementation for structs may easily cause a severe performance impact for your application. The issue is real, not a theoretical one.
  • The default equality members for value types are reflection-based.
  • The default GetHashCode implementation may provide a very poor distribution if a first field of many instances is the same.
  • There is an optimized default version for Equals and GetHashCode but you should never rely on it because you may stop hitting it with an innocent code change.
  • You may rely on FxCop rule to make sure that every struct overrides equality members, but a better approach is to catch the issue when the "wrong" struct is stored in a hash set or in a hash table using an analyzer.

Additional resources

IT equipment in schools: which solution is the right one?


When a school's technical equipment needs to be updated or replaced, the institutions, and above all the IT staff responsible, face two major challenges: on the one hand, suitable and affordable devices have to be found; on the other, these devices have to be provisioned in a short time for a large number of learners, teachers and staff. With these difficulties in mind, Emphatic Thinking Research carried out a comprehensive study on behalf of Microsoft Corp. into the possibilities and opportunities of modern IT solutions for educational institutions.

Modern IT in schools: the biggest challenges

"Education is supposed to accompany us into this future that we cannot even grasp. Nobody has the faintest idea [...] what the world will look like in five years, yet we have to educate for it today."

This quote from Sir Kenneth Robinson dates from 2006 and is as relevant today as it was 12 years ago. And that is not all: technological change is advancing ever faster, so it is hardly possible to foresee what the world will look like in two or three years. That does not make choosing practical and affordable IT equipment for schools any easier. Completely renewing a school's technical equipment on a regular basis is hardly feasible. Future-proof solutions therefore have to offer a maximum of flexibility.

The Emphatic Thinking study addressed these current problems and surveyed hundreds of IT professionals at educational institutions. In addition, the two currently most widely used operating systems, Chrome and Windows, and their respective cloud-based management solutions were examined closely. The results were summarized in the whitepaper "The State of Modern Device Deployment in Education: the impact on the role of IT in improving learning outcomes".

Download the free whitepaper

Learn more about modern device deployment and identity management solutions designed specifically for educational institutions. You can download the full whitepaper on the study free of charge from Microsoft!

Configuration Manager Third Party Software Updates Video Tutorial


Posted another one focused on the third-party update features currently in ConfigMgr TP 1806.  Will update the video when the feature releases into production builds.  Summary and link below.

Third-party software update integration is one of the most requested features on the Configuration Manager UserVoice feedback site.  If you have been watching the Configuration Manager technical previews since TP1803, you may have noticed this new feature being previewed.  To help you learn more about third-party software updates, a detailed video tutorial has been prepared and is available here.  In it, he covers some history, requirements and configuration, and does a demo.


SQL Server on Linux: Why Do I Have Two SQL Server Processes


When starting SQL Server on Linux, why are there two (2) sqlservr processes?

systemctl status mssql-server
mssql-server.service - Microsoft SQL Server Database Engine
   CGroup: /system.slice/mssql-server.service
           ├─85829 /opt/mssql/bin/sqlservr       <--------- WATCHDOG | MONITOR
           └─85844 /opt/mssql/bin/sqlservr       <--------- SQLSERVER.EXE

The simple answer is that the first entry (85829) is not what you are used to as sqlservr.exe on a Windows system; it does not listen for TDS traffic or open database files.  The parent process handles basic configuration activities and then forks the child process.  The parent process (WATCHDOG) becomes a lightweight monitor and the child process takes on the role of sqlservr.exe.

Hint: Process ids can be reused so do not write scripts looking for the largest process id as the first entry may have a process id larger than the second entry.

In general, it is an unsafe practice to capture a dump from within the process encountering the exception or fatal condition.  Instead, the WATCHDOG is able to work safely as an external process.  On Windows, if a process terminates unexpectedly, the Watson infrastructure is invoked to capture process dumps and add entries to the event log.  On Linux, the sqlservr WATCHDOG can perform a similar role.  If an unexpected SIGNAL or Fatal Error condition is encountered, the WATCHDOG is signaled and can capture a dump.  The Linux process signal handlers, as well as the death signal, are established to handle these issues.

The default behavior for some Linux signals is to TERMINATE or generate a full process CORE DUMP.  For example, on Linux a process is not allowed to catch the SIGKILL signal.  If SIGKILL is sent to the child process, Linux sends the WATCHDOG the registered death signal, informing the WATCHDOG of the child's exit.   For those signals that default to capturing a full process CORE DUMP, SQL Server on Linux installs alternate signal handlers.  The alternate handlers use PalDumper to capture the process information, producing dumps that are much smaller than a full process CORE DUMP.

Bob Dorr – Principal SQL Server Software Engineer

“Technology gives the quietest student a voice”- Jerry Blumengarten


 

 

Today's special Flipgrid blog post comes from Paul Watkins, currently Leader of Digital Learning at Microsoft Showcase School Ysgol Bae Baglan, Port Talbot, Wales. Paul is, at present, the only UK-based Flipgrid Ambassador. He is also a Skype Master Teacher Mentor, MIE and Surface Master Trainer, and MIEExpert. In addition, Paul currently sits on the Welsh Government’s National Digital Learning Council.

 

 

 


 

Monday 18th June 2018 was a day that educators listened with excitement to the announcement from Microsoft CEO Satya Nadella that Flipgrid had joined the Microsoft family and in doing so was now going to be made available to all teachers for free.  As a teacher who had been using Flipgrid prior to this announcement, I was filled with excitement, purely from thinking about how many colleagues would be able to use it, and in doing so how hugely their pupils would benefit.  In the motion picture Dead Poets Society, inspirational teacher John Keating, played by the late Robin Williams, challenged his students to “Strive to find your own voice, because the longer you wait to begin, the less likely you are to find it at all”.  Now all teachers have the means by which to provide a platform for all students to discover their voices, no matter how loud or quiet they are, and in doing so show the pupils that what they have to say matters - and that journey of discovery can start now!  So, how are pupils benefitting from Flipgrid being used in the classroom, and can teachers benefit from it too? 

 

 

You don’t have to look far on social media (simply search for #flipgridfever) to see teachers from across the world making unbelievable use of Flipgrid in their classrooms.  Through all year groups and across all subject areas, teachers are showing creativity in its uses, but more than that, we are seeing confident learners developing, pupils speaking up for the first time, no longer worried about what others may think about what they have to say.  We are seeing pupils using it to develop their reading skills while others are using it as the perfect platform to develop oracy in their foreign language classes.  Many pupils struggle with exam questions where extended writing is required (the type of question that would normally begin with ‘Discuss’ or ‘Explain’ and would be worth 4 or more marks).  Whilst many pupils will often struggle with formulating their answer on paper, being able to talk about it first helps them to grow in confidence. Now, when we ask a question to the class, it provides an opportunity for every child to answer it, for their thoughts, views and opinions to be heard!  It also allows peers to respond, along with the teacher, suggesting how their answer could be improved further or prompting further discussion.  All they need is access to the internet, a camera and a mic. In a recent department review, a year 8 pupil, when being interviewed by a member of SLT, stated:

“Using Flipgrid has helped me to become more confident in speaking out loud.  I am now not afraid to put my hand up when a question is asked.” 

Transition from year 6 to year 7 can be a hard time for many; a nervous and scary one.  At Ysgol Bae Baglan we are using Flipgrid in a project managed by year 8 peer support pupils to answer questions about moving into year 7 from pupils in our feeder cluster schools.  This was presented at the National Digital Learning Event in June and it’s been encouraging to see the number of schools who are now looking to adopt this project themselves – pupils supporting pupils. 

But teachers can support teachers too.  We have established a Flipgrid for teachers from across Wales to engage with each other, sharing ideas and practices and supporting one another by offering help and advice. Though it is in its early days, it has been so encouraging to see teachers sharing how they are using Flipgrid in their classrooms. Flipgrid becoming free couldn’t have come at a more exciting time for teachers in Wales, with the strong current focus on digital learning and competency. 

US Politician Madeleine Albright said,

“It took me quite a long time to develop a voice, and now that I have it, I am not going to be silent”. 

By introducing Flipgrid into your classrooms and schools you are providing a digital tool that will have a lifelong impact by developing pupils' analogue voices – that’s life changing! 

 

Shared PCH usage sample in Visual Studio


[Original post] Shared PCH usage sample in Visual Studio

[Original author] EricMittelette

[Original publication date] 2017/7/5

This article was written by Olga Arkhipova and Xiang Fan.

Often, multiple projects in a Visual Studio solution use the same (or a very similar) precompiled header. Since pch files are usually large and building them takes a significant amount of time, this leads to a common question: can several projects use the same pch file, built only once?

The answer is yes, but it requires a few tricks to satisfy cl's check that the command line used to build the pch is the same as the command line used to build the source files that consume it.

Here is a sample solution with 3 projects: one (SharedPCH) builds the pch and a static library, and the other two (ConsoleApplication 1 and 2) consume it. You can find the sample source code in our VCSamples GitHub repository.

When the ConsoleApplication projects reference the SharedPCH project, the build automatically links SharedPCH's static library, but several project properties also need to be changed.

  1. The C/C++ Additional Include Directories (/I) should contain the shared stdafx.h directory
  2. The C/C++ Precompiled Header Output File (/Fp) should be set to the shared pch file (produced by the SharedPCH project)
  3. If your projects are compiled with /Zi or /ZI (see the end of this post for more about these switches), projects using the shared pch need to copy the .pdb and .idb files produced by the SharedPCH project to their specific locations so that the final pdb files contain the pch symbols.

Since these properties need to be changed similarly for all projects, I created the SharedPCH.props and CustomBuildStep.props files and imported them into my projects using the Property Manager tool window.

SharedPch.props helps with #1 and #2 and is imported into all projects. CustomBuildStep.props helps with #3 and is imported into the projects that consume the pch, but not into the project that produces it. If your projects use /Z7, CustomBuildStep.props is not needed.

In SharedPch.props, I defined properties for the shared pch, pdb, and idb file locations:

 

We wanted all build outputs under one root folder, separate from the sources, so I redefined the output and intermediate directories. This is not required for using a shared pch; it just makes experimenting easier, since you can delete one folder if something goes wrong.

The adjusted C/C++ 'Additional Include Directories' and 'Precompiled Header Output File' properties:

In CustomBuildStep.props, I defined a custom build step to run before the ClCompile target and copy the shared pch .pdb and .idb files (if they are newer than the project's .pdb and .idb files). Note that we are talking about the compiler intermediate pdb files here, not the final ones produced by the linker.

If all files in the project use one pch, that is all we need to do, because when the pch changes, all other files need to be recompiled as well, so at the end of the build we will have complete pdb and idb files.

If your project uses more than one pch or contains files that don't use a pch at all, you need to change the pdb file location (/Fd) for those files so that the shared pch pdb does not overwrite it.

I used the command line property editor to define the commands. Each 'xcopy' command should be on its own line:

Alternatively, you can put all the commands in a script file and specify it as the command line.

Background information

/Z7, /ZI and /Zi compiler flags

When /Z7 is used, debug information (mainly type information) is stored in each OBJ file. This includes types from header files, which means there is a lot of duplication for shared headers and OBJ sizes can be large.

When /Zi or /ZI is used, debug information is stored in a compiler pdb file. Within one project, source files usually use the same pdb file (this is controlled by the /Fd compiler flag, whose default value is $(IntDir)vc$(PlatformToolsetVersion).pdb), so debug information is shared between them.

/ZI also generates an IDB file to store information related to incremental compilation, to support Edit and Continue.

Compiler PDB vs. linker PDB

As mentioned above, the compiler PDB is generated by /Zi or /ZI to store debug information. Later, the linker generates the linker PDB by combining information from the compiler PDBs with other debug information during linking. The linker can also remove unreferenced debug information. The name of the linker PDB is controlled by the /PDB linker flag; the default is $(OutDir)$(TargetName).pdb.

Give us your feedback!

Your feedback is a key part of ensuring we deliver useful information and features. For any questions, reach us on Twitter at @visualc or by email at visualcpp@microsoft.com. For any issues or suggestions, please let us know via Help > Send Feedback > Report a Problem in the IDE.

Announcing C++ Just My Code Stepping in Visual Studio


[Original post] Announcing C++ Just My Code Stepping in Visual Studio

[Original publication date] June 26, 2018

[Author] Marian Luparu [MSFT]

In Visual Studio 2017 15.8 Preview 3, we are announcing support for Just My Code stepping in C++. In addition to the previously supported callstack filtering, the Visual Studio debugger can now also step over non-user code. When you "Step Into", for example, an algorithm from the standard library with a custom predicate, or a Win32 API that takes a user callback, the debugger conveniently steps into the predicate or callback you provided, rather than into the library code that will eventually call your code.

Following the very warm reception of the debugging improvements for std::function calls announced at CppCon 2017 last year, the team has been working on a general solution for this debugging challenge that does not require any annotations in your library code. This support is available today in 15.8 Preview 3, and we look forward to your feedback.

How to enable Just My Code stepping (JMC)

  • Your program is compiled with the new MSVC compiler switch /JMC. JMC is on by default in all debug configurations of MSBuild projects, so make sure you recompile your project with the latest MSVC compiler in 15.8 Preview 3 or later.
  • The debugger loads PDBs for the binaries containing your user code.
  • JMC is enabled under Tools > Options > Debugging > General > Enable Just My Code (the default setting).

The new "Step Into" behavior

With JMC enabled, the debugger keeps track of which code is user code and which is system or library code. When stepping into a function that has PDB information, execution continues until another function marked as user code is reached or the current function finishes executing. In practice this means that, to get to your code, you don't have to spend time stepping through countless lines of library code you are not interested in, and, more commonly, you don't have to scatter breakpoints all over your codebase.

For example, in the snippet below, without JMC, if you were ambitious enough to keep choosing "Step Into" until you reached the predicate passed as an argument to the standard library algorithm, you would have to press F11 (Step Into) 140 times! With JMC, it is a single "Step Into" command.

STL algorithm

Another example is stepping into a Win32 API callback. Without JMC, the debugger cannot tell that some user code will eventually execute, so it steps over the Win32 API call entirely without stepping into the user-defined callback. JMC correctly identifies the callback as user code and stops the debugger there.

Win32 API callback

Step Into Specific

To explicitly step into a call that may be non-user code, you can use the "Step Into Specific" command available in the editor context menu. It lets you choose the specific function (user code or not) you want to step into:

Configuring Just My Code for additional third-party libraries

The default set of modules and source files that the C++ debugger considers non-user code is encoded in the default.natjmc file under %VSInstallDir%\Common7\Packages\Debugger\Visualizers, which specifies the WinSDK, CRT, STL, ATL/MFC and more.

You can customize this set of modules and source files in either of the following ways:

  • Modify the central list in %VSInstallDir%\Common7\Packages\Debugger\Visualizers\default.natjmc, or
  • Create any number of user-specific .natjmc files under the %USERPROFILE%\Documents\Visual Studio 2017\Visualizers folder

For example, to treat all Boost libraries as non-user code, you can create a boost.natjmc file in the folder above with the following content:

You do not need to rebuild your user code for these changes to take effect. In the next debugging session, stepping into code that uses Boost will step over the Boost library code and only stop execution when some of your user code is found on the call stack.

For more details on the .natjmc file format, see the Just My Code documentation page. Note that the .natjmc format also supports marking code as non-user code based on function names, but for stepping-performance reasons we do not recommend using this capability for frequently called functions or for large groups of functions ('function' rules are much slower than 'module' or 'file' rules).

Third-party libraries


As mentioned above, the JMC feature works only for user code compiled with the new MSVC compiler switch /JMC. This new switch is already on by default in the debug configurations of MSBuild projects. If you are using a different build system, you will need to make sure you manually add the /JMC switch to the debug builds of your project.

/JMC is supported only for binaries that link against the CRT.

To explicitly turn JMC off, use the /JMC- switch.

Give us your feedback!

This release is the first Visual Studio 2017 preview that supports Just My Code stepping. Your feedback is a key part of ensuring we deliver a delightful debugging experience. For any questions, reach us on Twitter at @visualc or by email at visualcpp@microsoft.com. For any issues or suggestions, please let us know via Help > Send Feedback > Report a Problem in the IDE.

New experimental code analysis features in Visual Studio 2017 15.8 Preview 3


[Original post] https://blogs.msdn.microsoft.com/vcblog/2018/06/26/new-experimental-code-analysis-features-in-visual-studio-2017-15-8-preview-3/

[Original publication date] 2018/06/26

The Visual C++ team has been working to improve the code analysis experience in Visual Studio. Our goal is to make these tools more useful and natural, and we hope they will benefit you no matter your workflow, style or project type.

 

Trying out the new features

In Visual Studio 2017 15.8 Preview 3, available in the Preview channel, we are introducing some new, in-progress code analysis features. These features are off by default, but you can enable them under Tools > Options > Text Editor > C++ > Experimental > Code Analysis. We encourage you to try them out and share any feedback or comments you have.

 

Background analysis

With the code analysis features enabled, code analysis now runs in the background whenever you open or save a C++ file! Our goal is to bring code analysis warnings into the editing experience so that errors can be fixed earlier, rather than defects only being discovered later at run time. Once a file has been analyzed, warnings show up in the Error List and as squiggles in the editor.

 

In-editor warnings

Together with background analysis, code analysis warnings now show up in the editor as green squiggles under the corresponding source code. As shown in the figure below, if a warning is fixed by changing the file, the squiggle does not refresh automatically. The squiggles and the Error List are updated when the file is saved or when analysis is re-run on the current file (Ctrl+Shift+Alt+F7). By letting you write and edit code in one place, we hope these visual cues will be helpful.

 

Error List

Code analysis warnings have always shown up in the Error List, and we are trying to improve that experience too. Filtering in the Error List should be faster. We encourage you to use the "Current Document" filter to see errors for the file being edited, which pairs well with the background analysis feature. Warning details are also shown inline in the Error List instead of in a separate pop-up window. We believe having the details next to the error makes it easier to dig into a warning. The new Error List experience is still a work in progress, so let us know about any "must-have" features we should consider.

 

Looking ahead

We are excited to show where this is going, but for now you may run into some known issues. First, only the "Microsoft Native Recommended Rules" rule set is used when background analysis runs. Second, not all project types support background analysis. You can try forcing the information to refresh by running code analysis from the menu. Finally, the best way to clear the information is to do a "Clean" build or to turn off the experimental features.

As background analysis improves, highlighting multi-line warnings, changing the squiggles to show when a warning is stale, automatic fixes, and more will be considered. These IntelliSense-like options will let you quickly correct or change code directly in the editor and see exactly what is about to change.

 

Give us feedback

Thank you to everyone who helps make Visual Studio a better experience for all. Your feedback is critical to ensuring we deliver the best code analysis experience, so please let us know in the comments below how Visual Studio 2017 15.8 Preview 3 is working for you. You can also send feedback through Report a Problem in Visual Studio, make suggestions on UserVoice, or find us on Twitter (@VisualC).

Dashboard Updates in Public Preview


We’re excited to announce updates to the new dashboard experience. This new experience lets you:

  • Easily switch between your team’s dashboards
  • Fine tune team permissions on individual dashboards
  • Find and favorite the dashboards you need

It is now available in public preview for VSTS customers and coming to TFS in the next major version.

Get to your dashboards fast

We’ve updated the dashboard picker based on customers’ biggest piece of feedback: make it easy to switch between a team’s dashboards. The updated picker now contains two pivots: Mine and All.

The Mine pivot makes it easy to find your team dashboards and your favorited dashboards.

The All pivot continues to show you all of the dashboards within the project. You can additionally favorite any of the dashboards and they’ll appear under Favorites in the Mine pivot.

Individual dashboard permissions

Team admins can now assign team permissions to individual dashboards. To set individual dashboard permissions, click on the Settings gear on the upper right corner.

Then, click on the Manage permissions for this dashboard link. From here, a team admin can set a team’s permissions for that specific dashboard.

Team admins can also set global team dashboard permissions. By going to the Dashboard settings, under Project Settings for a team, a team admin can set permissions for their team dashboards. Whichever permissions are set here will be inherited by the team's dashboards.

Dashboard Directory pages

You can now easily search and favorite any dashboard in the project by using the new dashboard directory pages. The directory page contains a Mine and an All page.

These pages highlight dashboard metadata:

  • Name of the dashboard
  • Which team owns the dashboard
  • Description of the dashboard

The Mine directory page displays the dashboards that you have favorited, and the teams that you belong to and their corresponding dashboards. The All directory page displays all of the dashboards within the project.


Additionally, these directory pages contain the following capabilities:

  • Filtering by team, or by entering a keyword in the filter box
  • Favoriting of a dashboard
  • Permission management and dashboard deletion, via the context menu

Other improvements

Besides the big changes listed above, here are some other improvements:

  • Descriptions for each dashboard for easier searching
  • Full screen mode support for dashboards
  • Streamlined dashboard editing experience

To learn more about the new dashboard experience visit the updated documentation on dashboards.

Download your copy of “Cloud Application Architecture Guide” ebook


Windows and SQL Server 2008 and 2008 R2 End of Support – free security updates in Azure


OK folks, let's talk about that stage in a Database Server or Windows Server's lifecycle in EDU that I'll call Purgatory for the purpose of this blog post.  Mainstream support has ended, the DB/OS vendor is no longer rolling out security updates, but you haven't been able to either kill off, replace or upgrade this particular server because of a host of reasons.

This will happen sooner than you might think for some servers in your environment: SQL Server 2008/2008 R2 support ends July 9, 2019, and Windows Server 2008/2008 R2 support ends on January 14, 2020.  That's not too far away when you think about it.  But how did we get here?  Some reasons that I recall from my time in EDU include:

  • The Application vendor doesn't support a newer OS so I can't upgrade it.
  • The Application vendor no longer exists, I can't even find the media to re-install it!
  • I don't want to touch that thing, it's been nothing but headaches since day one!
  • I'm going directly to Containers/Service Fabric and don't want to touch these VM's for a couple of years while I build the shiny new stuff.
  • Why can't I just keep the lights on for another year?

You may have other legitimate reasons why you could find yourself running important services/applications on Windows Server without security updates - It's OK, nobody's judging here!  Well maybe your CISO or Compliance folks might . . . but let's not go there.  While we still have time to plan to avoid Server Purgatory, let's see if there's an option that won't break the bank - we are in EDU after all.  If only there were a way to buy some additional time, maybe using Azure . . .

Migrate that Physical Server or VM into Azure with free security updates for three additional years. Here's the place to go for more details, including a snackable video outlining the program and a PDF to share with the compliance folks!  The key benefits are:

  • Rehost Windows Server/SQL Server 2008 and 2008 R2 workloads to Azure
  • Get three years of Extended Security Updates at no additional charge, and upgrade to a current version when ready
  • Use existing licenses and save up to 80 percent on Azure Virtual Machines with Azure Hybrid Benefit and Reserved Instances

That last one is HUGE because you can bring your own Windows Server and SQL licensing (that you purchase at EDU rates) into Azure and combine that savings with Reserved Instances to bring that OpEx cost way down.

OK, sounds good, but how do I get there (Azure) from here (on-premises VMware/Hyper-V/physical)?  Microsoft offers a free migration tool via Azure Site Recovery and assessment tooling via Azure Migrate.  Partner migration solutions are available as well, and Microsoft Services or a regional consulting partner can help if needed.

So take a deep breath - you have time to plan for this next phase in your 2008/2008 R2 server's lifecycle.  Just remember that line from Freewill by Rush: "If you choose not to decide, you still have made a choice."

If you're still reading this post hoping to see Windows Server 2003 extended support options, sorry - nothing to report here, that ship has sailed.  If you absolutely still have Server 2003 needs, you can run it in Azure if you bring your own VHD as described and caveated here.

 

 

[Guest Post] Managing person groups with the Azure Face API


This article by Alberto Guerra Estévez, a consultant at Ilitia Technologies, explains how to manage the limits that the Face API imposes when adding people to the service.

Introduction

Face API is a set of tools belonging to the Microsoft Azure Cognitive Services, ideal for identifying the different people within a group. As of today, we can recognize up to a maximum of 1,000 people per group on the free subscription and up to 10,000 on the standard subscription[i].

With the current limits we can easily add a large enough number of people to cover most of the cases we will face, but... what if we need to go beyond these limits?

Face API provides the tools to go beyond the current limits of the service; we just need a bit more logic in our application to distribute the different people across new groups. If the limit used to be 1,000 people per group on the free subscription (10,000 on the standard subscription), we can now raise that ceiling to as much as 1,000,000: 1,000 users in each of the 1,000 groups available per service, as long as we manage both resources properly. The gain is undeniable and achieving it is a relatively simple task.

1.    Requirements before starting

To learn how to manage Face API person groups, we assume that you know and have the following ready:

  • You have a Face API service correctly deployed in your Azure portal, with a valid subscription (free or standard) and its corresponding key.
  • You know the basic concepts and uses of the Face API. Related concepts such as Face, Person or Group should not sound unfamiliar.

2.    Connecting our application to Face API

To build the example in this article we will choose a UWP application, since there are libraries that wrap all the functionality of the API and save us from having to implement the calls directly with an HTTP client (as with any other REST API).

First, we create a new Universal Windows (Blank App) project from Visual Studio:

We add the Microsoft.ProjectOxford.Face NuGet package to the project, which gives us the client libraries that connect to Face API:

We now have a starting point on which to start implementing our example.

3.    Initialization

We will need the following SDK namespaces for the functions we are going to create:


using Microsoft.ProjectOxford.Face;
using Microsoft.ProjectOxford.Face.Contract;

To make all the API calls, we must keep in mind the limits of our subscription. For that, we will use the following algorithm, which manages the request queue with respect to our subscription limits and waits, if necessary, until the service accepts new calls again, because if we exceed the requests-per-second limit an exception is thrown. This algorithm is available in the Microsoft documentation:

https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-add-faces

const int PersonCount = 10000;
const int CallLimitPerSecond = 10;
static Queue<DateTime> _timeStampQueue = new Queue<DateTime>(CallLimitPerSecond);

private static async Task WaitCallLimitPerSecondAsync()
{
    Monitor.Enter(_timeStampQueue);
    try
    {
        if (_timeStampQueue.Count >= CallLimitPerSecond)
        {
            TimeSpan timeInterval = DateTime.UtcNow - _timeStampQueue.Peek();
            if (timeInterval < TimeSpan.FromSeconds(1))
            {
                await Task.Delay(TimeSpan.FromSeconds(1) - timeInterval);
            }
            _timeStampQueue.Dequeue();
        }
        _timeStampQueue.Enqueue(DateTime.UtcNow);
    }
    finally
    {
        Monitor.Exit(_timeStampQueue);
    }
}

Since we will make several calls to the service, we automate the management of these calls a bit more with the following method, to which we pass our code containing the Face API call and which automatically invokes the queue manager:

private async Task ExecuteFaceApiFunction(Func<Task> action)
{
    try
    {
        await WaitCallLimitPerSecondAsync();
        await action();
    }
    catch (Exception e)
    {
        Debug.WriteLine(e.Message);
    }
}

Finally, in our class we define and instantiate the Face API service client and a variable with the persons-per-group limit allowed by our subscription:

// Replace "my_api_key" with your own Face API key and use the endpoint of the region where the service was deployed.
private FaceServiceClient faceService = new FaceServiceClient("my_api_key", @"https://westeurope.api.cognitive.microsoft.com/face/v1.0");
private int maxPersonsPerGroup = 1000; // 10,000 if you are on the standard tier
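
With these pieces in place, every SDK call in the rest of the article is wrapped in the helper. As a purely illustrative example (listing the existing groups, just as the loading code will do later):

// Illustrative usage: the lambda contains the actual Face API call and
// ExecuteFaceApiFunction waits, if needed, so the per-second limit is respected.
PersonGroup[] groups = null;
await ExecuteFaceApiFunction(async () =>
{
    groups = await faceService.ListPersonGroupsAsync();
});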

 

4.    Adding persons to their corresponding groups

Have a folder ready with images of individual faces, each one different; it will be the starting point of our example.

To decide which group each processed face is added to, we will use one of the face's characteristics to pick a target group. A good strategy can be to assign the group by gender.

For that we create the following function which, given a new face and the list of groups that already exist in our service, returns the id of a group (not yet full) where the face should be inserted; if all of them are full, it creates a new group by incrementing the index:

private async Task<string> GetOrCreateProperGroupId(Face face, List<PersonGroup> grupos)
{
    int subGroupIndex = 0;
    string resultGroupId;
    string subgroupName = "subgroup" + face.FaceAttributes.Gender;
    var compatibleGroups = grupos.Where(g => g.PersonGroupId.Contains(subgroupName));

    PersonGroup existentGroup = null;
    if (compatibleGroups != null && compatibleGroups.Any())
    {
        // If there are groups compatible with the given face (by gender), we look for the first one
        // that is not full, incrementing the index for every full group we find along the way.
        foreach (var group in compatibleGroups)
        {
            Person[] personsInGroup = null;

            await ExecuteFaceApiFunction(async () =>
            {
                // List the persons already registered in this group to know whether it is full.
                personsInGroup = await faceService.GetPersonsAsync(group.PersonGroupId);
            });
            if (personsInGroup?.Length >= maxPersonsPerGroup)
            {
                subGroupIndex++;
            }
            else
            {
                existentGroup = group;
                break;
            }
        }
    }
    // The group where the new face is inserted is the gender-based name plus the computed index.
    resultGroupId = existentGroup?.PersonGroupId ?? (subgroupName + subGroupIndex);

    // If no existing group can take this face, we create a new one at the index where we stopped.
    if (existentGroup == null)
    {
        Debug.WriteLine("No free group for this subgroup; creating new group " + resultGroupId);

        await ExecuteFaceApiFunction(async () =>
        {
            await faceService.CreatePersonGroupAsync(resultGroupId, resultGroupId);
        });
        grupos.Add(new PersonGroup() { PersonGroupId = resultGroupId });
    }

    return resultGroupId;
}

With this function we can now write the algorithm that inserts a set of images from a directory into our service, respecting the per-group capacity limits:

private async void LoadFotosFromDirectory()
{
    var filePicker = new Windows.Storage.Pickers.FileOpenPicker()
    {
        SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.PicturesLibrary,
        ViewMode = Windows.Storage.Pickers.PickerViewMode.Thumbnail
    };
    // The picker requires at least one file type filter before it can be shown.
    filePicker.FileTypeFilter.Add(".jpg");
    filePicker.FileTypeFilter.Add(".jpeg");
    filePicker.FileTypeFilter.Add(".png");

    var files = await filePicker.PickMultipleFilesAsync();

    Debug.WriteLine("Loading photos...");
    // Keep the group list in a List<PersonGroup> so that groups created while processing
    // the photos are visible to the following iterations.
    List<PersonGroup> existentGroups = null;
    await ExecuteFaceApiFunction(async () =>
    {
        existentGroups = (await faceService.ListPersonGroupsAsync()).ToList();
    });
    foreach (var file in files)
    {
        // In UWP the picked file must be read through the StorageFile, not through its path.
        using (Stream imageFileStream = await file.OpenStreamForReadAsync())
        {
            Face[] faces = null;
            await ExecuteFaceApiFunction(async () =>
            {
                faces = await faceService.DetectAsync(imageFileStream, true, true, new FaceAttributeType[] { FaceAttributeType.Gender });
            });

            foreach (var face in faces)
            {
                string properGroupid = await GetOrCreateProperGroupId(face, existentGroups);
                CreatePersonResult newPerson = null;
                await ExecuteFaceApiFunction(async () =>
                {
                    newPerson = await faceService.CreatePersonAsync(properGroupid, face.FaceId.ToString());
                });
                await ExecuteFaceApiFunction(async () =>
                {
                    await faceService.AddPersonFaceAsync(properGroupid, newPerson.PersonId, await file.OpenStreamForReadAsync());
                });
                Debug.WriteLine("Person " + newPerson.PersonId + " added to group " + properGroupid);
            }
        }
    }
    Debug.WriteLine("ALL PHOTOS LOADED");

    await TrainAllGroups();
}

private async Task TrainAllGroups()
{
    PersonGroup[] grupos = null;
    await ExecuteFaceApiFunction(async () =>
    {
        grupos = await faceService.ListPersonGroupsAsync();
    });

    foreach (var grupo in grupos)
    {
        await ExecuteFaceApiFunction(async () =>
        {
            await faceService.TrainPersonGroupAsync(grupo.PersonGroupId);
        });
    }
}

When the training process has finished, all the photos will be loaded into our service, well beyond the number of faces the service allows in a single group.
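
From there, identifying a face against the whole set is simply a matter of iterating over the groups. The following sketch is a minimal, illustrative example (the helper name IdentifyAcrossAllGroups is ours, not part of the SDK): it asks each trained group for candidates with IdentifyAsync and keeps the one with the highest confidence.

private async Task<Guid?> IdentifyAcrossAllGroups(Guid faceId)
{
    Guid? bestPersonId = null;
    double bestConfidence = 0;

    PersonGroup[] grupos = null;
    await ExecuteFaceApiFunction(async () =>
    {
        grupos = await faceService.ListPersonGroupsAsync();
    });

    foreach (var grupo in grupos)
    {
        IdentifyResult[] results = null;
        await ExecuteFaceApiFunction(async () =>
        {
            // IdentifyAsync only searches within one group, so the call is repeated per group.
            results = await faceService.IdentifyAsync(grupo.PersonGroupId, new[] { faceId });
        });

        var candidate = results?.FirstOrDefault()?.Candidates?.FirstOrDefault();
        if (candidate != null && candidate.Confidence > bestConfidence)
        {
            bestConfidence = candidate.Confidence;
            bestPersonId = candidate.PersonId;
        }
    }
    return bestPersonId;
}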

Conclusions

With the technique shown in this article you can go beyond the limits that a single Face API group allows. All that remains is to apply these ideas and combine them with the other capabilities the cognitive service provides. You can also improve and adapt this solution to your specific needs (accepting images that contain several faces, identifying people so the same person is not added twice, parallelizing several of these processes to improve performance, and so on).

References

Official Face API documentation:

https://docs.microsoft.com/es-es/azure/cognitive-services/face/overview

Notice about the upcoming release for Lifecycle Services

$
0
0

We are not releasing a new version of Lifecycle Services (LCS) on Monday, July 23. The next release will be Monday, August 6.

University of Washington Data Visualization Course special lecture/industry showcase

$
0
0

I must admit I was pretty honored to be asked to host the last semester of the Data Visualization Course for the University of Washington. In addition to meeting some really cool people, I will be the first to admit I probably learned more from preparing the lectures than the students did! (For instance, I now know a lot about multivariate charting, how memory encoding impacts data visualization, when the first "choose your chart" flow chart was created, and so on.)

Taking advantage of the Business Applications Summit being in town next week, I have a very cool lecture lined up for the students: an industry showcase of how some of the leading data visualization people are using these tools at the companies where they work:

  1. Phil Seamark explains how 70% of all employees at Trademe, the largest etailer in New Zealand, are using BI Tools (the highest percentage I have seen)
  2. Treb Gatte will walk through how Pandora makes their data actionable with BI Tools
  3. Luc Labelle will show how the National Bank of Canada rolled out BI Tools to their branches
  4. Vivek Patel will show how the AECON Group is using supply chain data analytics to drive performance with Power BI
  5. Kelly Kaye will walk through how Microsoft Finance uses BI Tools to keep track of our assets and make sure business practices are being followed


Kelly Kaye, a senior finance manager, is passionate about Power BI. Kelly is the founder and a monthly co-host of the Power BI Geeks user group. She actively participates in the Power BI design process and is one of the most active contributors in the Microsoft Power BI Yammer group.

Philip Seamark

Phil is an author, Microsoft Data Platform MVP, and an experienced database and business intelligence (BI) professional with deep knowledge of the Microsoft BI stack along with extensive knowledge of data warehouse (DW) methodologies and enterprise data modelling. He has 25+ years of experience in this field and is an active member of the Power BI community: a Super User with 1,945 answers and 590 kudos on the community website http://community.powerbi.com

http://radacad.com/philip-seamark 

Treb Gatte


Treb Gatte is the Managing Partner at TumbleRoad.com, where he has used his techniques to help well-known companies like Wells Fargo and Starbucks achieve project management and business intelligence success.

He is an internationally recognized project management expert, author, speaker and a Microsoft MVP. He's used his Project and SharePoint expertise to author two books on the software. He has a B.S. from Louisiana State University and an M.B.A. from Wake Forest University.

Treb's personal interests include hiking, softball and a passion for coffee. He is a former Starbucks Certified Coffee Master. Treb also loves to sing, sometimes well, and is always open to trying new Karaoke venues.


 

Luc Labelle

Luc Labelle has been working in IT for the past 15 years. With close to a decade of experience as a consultant in different roles, he has been involved in countless SharePoint and Office 365 projects. The contexts he has worked in have taught him how to avoid pitfalls when deploying SharePoint, from both business and technical perspectives. Passionate about his work, Luc founded his own consulting firm, Kabesa, in 2014. He stays as involved as possible in the SharePoint and Office 365 community by attending and speaking at various events. He also helps promote the Montreal SharePoint user groups by assisting with the organization of local events.

Vivek Patel

Areas of expertise: SAP implementation, business application integration, big data analytics, business analytics, business intelligence, computer programming, Cortana, data analytics, data integration, data mining, data modeling, data quality, data science, data visualization, machine learning, advanced application development using Microsoft Excel, Microsoft Office 365, Microsoft Power BI, Microsoft SharePoint, natural language query, predictive analytics, project management, public speaking, SAP Business Objects, SAP ERP, supply chain management, and technical presentations

7/19/18 Webinar: Next Generation Location and Data Analysis using Mapbox and Power BI

$
0
0

Join me as I connect with Sam Gehret and walk through how Mapbox and Power BI can use location data to tell your story using next generation maps.

 

Where: http://community.powerbi.com/t5/Webinars-and-Video-Gallery/Next-Generation-Location-and-Data-Analysi...

 

When: 7/19/18 10am PST

 

If you are not familiar with Mapbox, it is the location data platform for mobile and web applications. They provide building blocks to add location features like maps, search, and navigation into any experience you create.

 

Sam Gehret

Sam Gehret is a Solutions Engineer for BI and Data Viz at Mapbox. He currently manages the development and roadmap for the Mapbox custom visual for Power BI. Sam has over 7 years of business intelligence experience working in both product and sales at another large BI company. He holds a BA from Dartmouth College and is a graduate of the General Assembly JavaScript bootcamp.
