
API Management: Quota versus Rate Limits


Azure API Management provides strong capabilities for usage throttling. These help in scenarios such as defending against a denial-of-service attack, but they are also important for protecting your back-end services against a huge influx of requests through the API management layer, or even for offering tier-based access restrictions to your customers as a product feature.

To implement such throttling, API Management lets you either limit the call rate or cap the overall usage quota for a given subscription. The two policies available for implementing these limits are shown below.

At first glance, both call rate limits and usage quotas appear to control the number of calls over a given period of time. For instance, looking at the examples of setting a call rate and a usage quota by subscription, both require a specific timeframe.

Call Rate:

<rate-limit calls="number" renewal-period="seconds">
<api name="name" calls="number" renewal-period="seconds">
<operation name="name" calls="number" renewal-period="seconds" />
</api>
</rate-limit>

 

Usage Quota:

<quota calls="number" bandwidth="kilobytes" renewal-period="seconds">
<api name="name" calls="number" bandwidth="kilobytes">
<operation name="name" calls="number" bandwidth="kilobytes" />
</api>
</quota>

So when do we use quota versus the call rates?

 

As a guideline, call rate limits are typically used to protect against short, intense volume bursts. For instance, if you know your backend service chokes on its database under high call volume, you can configure API Management to reject excessive traffic, for example by allowing no more than 100 calls every minute.
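A minimal sketch of such a policy at subscription scope (the numbers are illustrative):

<rate-limit calls="100" renewal-period="60" />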

Usage quotas, on the other hand, control the total call volume over a longer period of time; for instance, a quota can cap the total number of calls in a given month. If you are monetizing your API, quotas can also back tier-based subscriptions, where a Basic tier, for instance, can make no more than 10,000 calls a month while a Premium tier can go up to 100,000,000 calls per month.
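A sketch of a monthly quota for such a Basic tier (the renewal period is expressed in seconds, so roughly 30 days here):

<quota calls="10000" renewal-period="2592000" />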

From the API Management implementation standpoint, rate limit information is assumed to cover a short duration (less than 5 minutes), so changes to rate limit counters are propagated quickly across the nodes to protect against spikes. Usage quota information, on the other hand, is expected to be used over the longer term, so its implementation is different.

In summary, call rate limits protect against short, intense volume bursts, while usage quotas serve longer-duration access restrictions and tier-based monetization scenarios.

 

Hope this helps.

 


Azure Serverless ワークショップ Deep Dive : モジュール 6


Continuing from the previous post, this time we take a deep dive into Module 6.

GitHub: https://github.com/Azure-Samples/azure-serverless-workshop-team-assistant/tree/lang/jp/6-scheduler-bot

Overview

In this module, we use Azure Functions and Logic Apps to implement a feature that retrieves common free time slots for multiple members from Google Calendar.

Azure Functions part

Implements the logic that searches the retrieved schedules for common free time.

Logic Apps part

Implements the logic that retrieves events from Google Calendar.

Looking inside

Data passed to the Azure Function

The Azure Function ultimately parses the schedules expecting the body to contain the calendars as an array, with each calendar in turn containing its schedule events as a further array. This format is the key point: in Logic Apps we build this data and then call the Azure Function.


Building the data in Logic Apps

Logic Apps builds the expected data in the following steps.

1. Create a schedules array as a variable. This is what is actually passed to the function.

2. After retrieving the Google Calendar events, add each event as an array item in schedules. This completes the data the function expects.

Points about the Foreach loop

There are a few points to keep in mind when using a Foreach loop in Logic Apps (see the sketch after this list).

- Create the items the loop iterates over; here split is used.
- item() returns the object for the current iteration; here it is an email address string.
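As a sketch of the expressions involved (the property name and delimiter are illustrative, not taken from the sample):

split(triggerBody()?['text'], ' ') : produces the array of email addresses that the Foreach loop iterates over
item() : returns the current email address inside the Foreach loop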

It is important to be able to picture correctly what data and types are being passed around.

Caveats

Finally, a few caveats.

- To see multiple users' schedules through Google Calendar, the target calendars must be added in advance to the calendar of the user who authenticated in Logic Apps.
- Do not include spaces when passing the list of users to the bot, because the split does not account for trimming spaces.
- Time handling: since we want to calculate in Japan time, this needs extra handling beyond the original code.

Summary

This module finally combines Azure Functions and Logic Apps, the key serverless technologies, and I think this kind of composition is characteristic of serverless development. The important question is how to pass data between the pieces; if you use technologies that allow loose coupling, such as Storage or Event Hubs, you can compose things even more flexibly. Try implementing features in small pieces and wiring many modules together.

中村 憲一郎

Power BI Custom Authentication in ISV applications (Custom Data Connector)


Sorry for the long gap between posts due to personal matters; this blog is still alive. (My wife is having trouble with her leg, and I'm currently adjusting my working hours...)

In this post I describe how ISVs can implement their own Power BI Data Connector with a custom authentication experience.
When users select "Get Data" in Power BI, your custom connector will appear. Your connector can provide a custom authentication UI and custom data.
Even when an ISV creates (and submits) a custom template content pack, this connector flow is needed when using custom OAuth authentication.

In this post I walk through this flow from a developer's point of view.

Before starting...

Before starting, enable the custom connector feature in Power BI Desktop for development as follows.

  • Launch Power BI Desktop
  • Select "File" - "Options and settings" - "Options" menu
  • In the dialog window, select "Preview features" and enable "Custom data connectors"

Note : Currently, Data Connectors are only supported in Power BI Desktop. (You also cannot publish them to the Power BI service.)

Next, install the Power Query SDK in Visual Studio. After installation, you will see the Data Connector project template in Visual Studio.

Overall Flow

To develop custom authentication, you must understand how OAuth works with API applications.
You can use your favorite OAuth provider (a Google account, your own custom IdP, etc.) to build an OAuth-enabled Data Connector, but here we use Azure Active Directory (Azure AD) for the implementation. (We use the Azure AD v1 endpoint; see the note below.)

Step1 ) Beforehand, you must register your api application (A) and your client application (= Power BI Data Connector) (B) in Azure AD. This registration is done once, by the ISV. (Users don't need to register anything.)

Step2 ) When the user uses your Power BI Data Connector, your data connector (B) opens the following url in a web browser (or browser component) for login.

https://login.microsoftonline.com/common/oauth2/authorize
  ?response_type=code
  &client_id={app id of B}
  &resource={app uri of A}
  &redirect_uri={redirected url for B}

Step3 ) With a Power BI Data Connector, the "redirected url for B" above is the fixed url "https://preview.powerbi.com/views/oauthredirect.html".
After the user has logged in, the code is returned in the query string as follows, and your data connector (B) can retrieve the returned code value.

https://preview.powerbi.com/views/oauthredirect.html?code={returned code}

Step4 ) Next, your data connector (B) posts the following HTTP request to Azure AD (without UI), and the access token is returned in the HTTP response as follows.

POST https://login.microsoftonline.com/common/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code
&code={returned code}
&client_id={app id of B}
&redirect_uri=https%3A%2F%2Fpreview.powerbi.com%2Fviews%2Foauthredirect.html

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "token_type": "Bearer",
  "scope": "user_impersonation",
  "expires_in": "3599",
  "ext_expires_in": "0",
  "not_before": "{start time of this token}",
  "expires_on": "{expired time of this token}",
  "resource": "{app uri of A}",
  "access_token": "{returned access token}",
  "refresh_token": "{returned refresh token}",
  "id_token": "{returned id token}"
}

Step5 ) Your data connector (B) calls your api application (A) with the access token obtained above, as follows.

GET https://contoso.com/testapi
Authorization: Bearer {access token}

Step6 ) Your api application (A) verifies the passed token and returns the result dataset if the token is valid. (See "Develop your API with Azure AD" for the verification.)
The result dataset is then shown in your Power BI client.

Note : Currently, a native application registered at the Azure AD v2 endpoint cannot have redirect urls with "http" or "https" protocols, so you cannot use "https://preview.powerbi.com/views/oauthredirect.html" as the redirect url.
Therefore we use the Azure AD v1 endpoint in this post.

Below I explain how to set up and implement each of the corresponding steps.

Register your app in Azure AD (Step 1)

First, register your api application (A) and your client application (= Power BI Data Connector) (B) in the Azure Portal.

When you register your api application (A), define the scope (permission or role) in the application manifest. (See below.)
You can use appropriate scope values like "read" and "write" according to your API requirements. In this post we use the default settings (the scope value is "user_impersonation"), and your data connector (B) must use the scope "{app uri}/user_impersonation" (ex: https://test.onmicrosoft.com/235426d2-485d-46e0-ad0d-059501ab58a4/user_impersonation) to access your api application.
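For reference, the default entry in the manifest's oauth2Permissions collection looks roughly like the following sketch (the descriptions and id are illustrative):

"oauth2Permissions": [
  {
    "adminConsentDescription": "Allow the application to access the api on behalf of the signed-in user.",
    "adminConsentDisplayName": "Access the api",
    "id": "<guid>",
    "isEnabled": true,
    "type": "User",
    "userConsentDescription": "Allow the application to access the api on your behalf.",
    "userConsentDisplayName": "Access the api",
    "value": "user_impersonation"
  }
]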

When you register your data connector (B), the app type must be "Native application", and you must include "https://preview.powerbi.com/views/oauthredirect.html" (common to all data connectors) in the redirect urls.

After that, open the required permissions setting of the data connector application (B) and add the scope (here, the "user_impersonation" scope) for accessing the api application (A).

Note : For a production application, you must configure these applications (A and B) as multi-tenant applications, and the user must consent to your applications in the user's tenant before using your data connector. In this post we skip these settings.

Implement your API - Create OData Feed service compliant with Power BI (Step 6)

Before implementing your data connector, let's first implement the api application (A).

You can build your api either as a simple REST api (consumed with Web.Contents() in your data connector) or as an OData-compliant REST api (consumed with OData.Feed() in your data connector), in your favorite programming language.
Here we implement an OData v4 feed service with ASP.NET (.NET Framework) as follows. (See "OData.org : How to Use Web API OData to Build an OData V4 Service without Entity Framework" for details.)

First, create your "ASP.NET Web Application" project and please select [Empty] and [Web API] in the creation wizard as follows.

Add Microsoft.AspNet.OData package with NuGet. (Not Microsoft.AspNet.WebApi.OData, because it's not v4.)

Note : You cannot use .NET Framework 4.6.2 with the latest Microsoft.AspNet.OData 6.1.0 because of a DLL conflict.

Add and implement your MVC controller. The controller must inherit ODataController as follows.
Here I created the following simple controller, which returns 3 records of the SalesRevenue class.

...
using System.Web.OData;
...

public class TestController : ODataController
{
  [EnableQuery]
  public IQueryable<SalesRevenue> Get()
  {
    return new List<SalesRevenue>()
    {
      new SalesRevenue { Id = 1, Country = "US", Value = 200 },
      new SalesRevenue { Id = 2, Country = "Japan", Value = 80 },
      new SalesRevenue { Id = 3, Country = "Germany", Value = 120 }
    }.AsQueryable();
  }
}

public class SalesRevenue
{
  public int Id { get; set; }
  public string Country { get; set; }
  public int Value { get; set; }
}
...

Edit your WebApiConfig.cs as follows.

...
using System.Web.OData.Builder;
using System.Web.OData.Extensions;
using Microsoft.OData.Edm;
...

public static class WebApiConfig
{
  public static void Register(HttpConfiguration config)
  {
    // Web API configuration and services

    // Web API routes
    config.MapHttpAttributeRoutes();

    config.Routes.MapHttpRoute(
      name: "DefaultApi",
      routeTemplate: "api/{controller}/{id}",
      defaults: new { id = RouteParameter.Optional }
    );

    // Add here
    ODataModelBuilder builder = new ODataConventionModelBuilder();
    builder.Namespace = "Demos";
    builder.ContainerName = "DefaultContainer";
    builder.EntitySet<Controllers.SalesRevenue>("Test");
    IEdmModel model = builder.GetEdmModel();
    config.MapODataServiceRoute("ODataRoute", "osalesdat", model);
  }
}
...

Now your api can be consumed as https://{your api hosted url}/osalesdat with Power BI Desktop. (Try it with the OData Feed connector !)

Finally, add authentication to your api application.
As I mentioned earlier, your api will receive the following HTTP request from the Power BI Data Connector. Your api application must verify the access token (the Authorization header value below), retrieve the claims it needs, and return data as you like.
For example, your api application can retrieve the tenant id from the token and return each tenant's own data in your code. (That is, you can easily implement multi-tenancy.)

GET https://contoso.com/api/test
Accept: */*
Accept-Encoding: gzip, deflate
Authorization: Bearer eyJ0eXAiOi...
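Once the token is validated, a hedged sketch (assuming the ASP.NET project is already configured for Azure AD bearer authentication; the data access call is hypothetical) of reading the tenant id claim inside the controller could look like this:

using System.Security.Claims;
...

[Authorize]
[EnableQuery]
public IQueryable<SalesRevenue> Get()
{
  // Read the tenant id claim issued by Azure AD from the validated token.
  var principal = User as ClaimsPrincipal;
  var tenantId = principal?
    .FindFirst("http://schemas.microsoft.com/identity/claims/tenantid")?.Value;

  // Return only this tenant's data (hypothetical data access helper).
  return GetSalesRevenueForTenant(tenantId).AsQueryable();
}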

For this implementation, see my earlier post "Build your own Web API protected by Azure AD v2.0 endpoint with custom scopes"; with ASP.NET you can simply right-click your project and select "Configure Azure AD Authentication", or configure it in the project creation wizard.

Note : If you select "new app" instead of "existing app" in the configuration dialog, your api application is automatically registered in Azure AD. In this case, you don't need to register your api app manually in step 1.

To protect your api, add the Authorize attribute as follows.

public class TestController : ODataController
{
  [Authorize]
  [EnableQuery]
  public IQueryable<SalesRevenue> Get()
  {
    return new List<SalesRevenue>()
    {
      new SalesRevenue { Id = 1, Country = "US", Value = 200 },
      new SalesRevenue { Id = 2, Country = "Japan", Value = 80 },
      new SalesRevenue { Id = 3, Country = "Germany", Value = 120 }
    }.AsQueryable();
  }
}

Implement your Data Connector (All)

Now let's create your data connector (B).
First, create your project from the "Data Connector Project" template in Visual Studio.

A Power BI Data Connector is written in the Power Query M formula language. (See the M function reference for details.) All logic goes in the "PQExtension1.pq" file. (We assume the project is named "PQExtension1".)
Here is the overall sample code (PQExtension1.pq); I'm omitting the code for exception handling, etc.
Below, we'll walk through what this code does step by step.

section PQExtension1;

[DataSource.Kind="PQExtension1", Publish="PQExtension1.Publish"]
shared PQExtension1.Contents = (optional message as text) =>
  let
    source = OData.Feed(
      "https://contoso.com/osalesdat",
      null,
      [ ODataVersion = 4, MoreColumns = true ])
  in
    source;

PQExtension1 = [
  Authentication = [
    OAuth = [
      StartLogin = StartLogin,
      FinishLogin = FinishLogin
    ]
  ],
  Label = "Test API Connector"
];

StartLogin = (resourceUrl, state, display) =>
  let
    authorizeUrl = "https://login.microsoftonline.com/common/oauth2/authorize"
      & "?response_type=code"
      & "&client_id=97f213a1-6c29-4235-a37b-a82dda14365c"
      & "&resource=https%3A%2F%2Ftest.onmicrosoft.com%2F235426d2-485d-46e0-ad0d-059501ab58a4"
      & "&redirect_uri=https%3A%2F%2Fpreview.powerbi.com%2Fviews%2Foauthredirect.html"
  in
    [
      LoginUri = authorizeUrl,
      CallbackUri = "https://preview.powerbi.com/views/oauthredirect.html",
      WindowHeight = 720,
      WindowWidth = 1024,
      Context = null
    ];

FinishLogin = (context, callbackUri, state) =>
  let
    query = Uri.Parts(callbackUri)[Query],
    tokenResponse = Web.Contents("https://login.microsoftonline.com/common/oauth2/token", [
      Content = Text.ToBinary("grant_type=authorization_code"
        & "&code=" & query[code]
        & "&client_id=97f213a1-6c29-4235-a37b-a82dda14365c"
        & "&redirect_uri=https%3A%2F%2Fpreview.powerbi.com%2Fviews%2Foauthredirect.html"),
      Headers = [
        #"Content-type" = "application/x-www-form-urlencoded",
        #"Accept" = "application/json"
      ]
    ]),
    result = Json.Document(tokenResponse)
  in
    result;

...

Note : Use Fiddler for debugging HTTP flow.

Implement your Data Connector - Signing-in (Step 2)

To hook your Power BI Data Connector into the OAuth authentication flow, first define your connector as follows. The "Label" text below is displayed in the wizard window in Power BI. (See the screenshot in the "Run !" section.)

PQExtension1 = [
  Authentication = [
    OAuth = [
      ...
    ]
  ],
  Label = "Test API Connector"
];

To navigate to the OAuth login UI, add the following StartLogin.
Here 97f213a1-6c29-4235-a37b-a82dda14365c is the application id (client id) of your data connector (B), and https%3A%2F%2Ftest.onmicrosoft.com%2F235426d2-485d-46e0-ad0d-059501ab58a4 is the url-encoded app uri of your api application (A). Change these to your own values.

PQExtension1 = [
  Authentication = [
    OAuth = [
      StartLogin = StartLogin,
      ...
    ]
  ],
  Label = "Test API Connector"
];

StartLogin = (resourceUrl, state, display) =>
  let
    authorizeUrl = "https://login.microsoftonline.com/common/oauth2/authorize"
      & "?response_type=code"
      & "&client_id=97f213a1-6c29-4235-a37b-a82dda14365c"
      & "&resource=https%3A%2F%2Ftest.onmicrosoft.com%2F235426d2-485d-46e0-ad0d-059501ab58a4"
      & "&redirect_uri=https%3A%2F%2Fpreview.powerbi.com%2Fviews%2Foauthredirect.html"
  in
    [
      LoginUri = authorizeUrl,
      CallbackUri = "https://preview.powerbi.com/views/oauthredirect.html",
      WindowHeight = 720,
      WindowWidth = 1024,
      Context = null
    ];

Implement your Data Connector - Get auth code (Step 3)

After the user has successfully logged in through the UI, the code is returned to the callback uri (https://preview.powerbi.com/views/oauthredirect.html) as part of the query string. To receive this code value in your data connector, write the following.
The FinishLogin below is invoked by the connector framework after login succeeds.

PQExtension1 = [
  Authentication = [
    OAuth = [
      StartLogin = StartLogin,
      FinishLogin = FinishLogin
    ]
  ],
  Label = "Test API Connector"
];
...

FinishLogin = (context, callbackUri, state) =>
  let
    query = Uri.Parts(callbackUri)[Query],
    code = query[code],
    ...
  in
    result;

Implement your Data Connector - Get access token (Step 4)

With the code value, your data connector can retrieve the access token as follows.
Here the M function Web.Contents() posts the http request without UI (silently).

PQExtension1 = [
  Authentication = [
    OAuth = [
      StartLogin = StartLogin,
      FinishLogin = FinishLogin
    ]
  ],
  Label = "Test API Connector"
];
...

FinishLogin = (context, callbackUri, state) =>
  let
    query = Uri.Parts(callbackUri)[Query],
    tokenResponse = Web.Contents("https://login.microsoftonline.com/common/oauth2/token", [
      Content = Text.ToBinary("grant_type=authorization_code"
        & "&code=" & query[code]
        & "&client_id=97f213a1-6c29-4235-a37b-a82dda14365c"
        & "&redirect_uri=https%3A%2F%2Fpreview.powerbi.com%2Fviews%2Foauthredirect.html"),
      Headers = [
        #"Content-type" = "application/x-www-form-urlencoded",
        #"Accept" = "application/json"
      ]
    ]),
    result = Json.Document(tokenResponse)
  in
    result;

As you can see, FinishLogin returns the parsed OAuth token response to Power BI. (The access token and refresh token are included in this response.)

Implement your Data Connector - Call your api application (Step 5)

Finally, call your api application (the OData feed service created earlier) as follows.
Note that you don't need to set the Authorization header in your M code; the framework automatically attaches the access token to the request.

[DataSource.Kind="PQExtension1", Publish="PQExtension1.Publish"]
shared PQExtension1.Contents = (optional message as text) =>
  let
    source = OData.Feed(
      "https://contoso.com/osalesdat",
      null,
      [ ODataVersion = 4, MoreColumns = true ])
  in
    source;

Run !

Now let's start to build, deploy, and run !

Build your project in Visual Studio and copy the generated .mez file (which is in zip file format) into the %UserProfile%\Documents\Microsoft Power BI Desktop\Custom Connectors folder.
Launch Power BI Desktop and select "Get Data". As you can see below, your data connector now appears in the list.

When you select your custom data connector, the following dialog with "Sign In" button is displayed.

When you push "Sign In", the login-UI (in this case, Azure AD sign-in) is displayed.

When you have successfully logged in and pushed the "Connect" button, the following navigator is displayed along with your OData feed metadata.
In this case we defined only the "Test" entity in the previous code, but you can also expose functions (not only entities) if needed.

When you select entities and proceed, you can use the dataset in Power BI as follows.
As you can see, the end user doesn't need to care about your implementation (OAuth, OData, etc.); they just use the connector and can securely get all the required data with your custom authentication.

Note : Once you connect to the data source, the connection is cached in Power BI Desktop. You can clear the connection cache using the "File" - "Options and settings" - "Data source settings" menu.

The following sample (which accesses Microsoft Graph) also handles exceptions and other details; please refer to it to deepen your understanding.

[Github] MyGraph Connector Sample
https://github.com/Microsoft/DataConnectors/tree/master/samples/MyGraph

 

Reference : [Github] Getting Started with Data Connectors
https://github.com/Microsoft/DataConnectors

 

Azure Serverless ワークショップ Deep Dive : モジュール 7,8


Continuing from the previous posts, this time we take a deep dive into Modules 7 and 8.

GitHub Module 7: https://github.com/Azure-Samples/azure-serverless-workshop-team-assistant/tree/lang/jp/7-photo-mosaic-bot
GitHub Module 8: https://github.com/Azure-Samples/azure-serverless-workshop-team-assistant/tree/lang/jp/8-coder-cards

Overview

Both modules have the following points in common.

- Photo analysis
- Cognitive Services
- C#

The code itself is not directly related to serverless and mainly implements the features, so I won't walk through it this time; instead, let's look at the services being used.

The services

Azure Storage

Blob storage is used as the place to store photos. Module 8 also uses a queue as the trigger for processing.

Custom Vision Service

With this service you can classify things that the regular Vision service does not recognize, using your own training. It has already been covered in many blogs; for example, it can tell apart the beef bowls of Yoshinoya and Matsuya. In this workshop we demo registering and distinguishing landmarks, but do try it for your own categorization needs.

Bing Search API

The Bing search capabilities are available as an API. Here it is used for image search and for retrieving the image URLs. It looks handy for fetching a batch of images at once.

Emotion API

This was one of the earliest Cognitive Services to be released, so many of you probably know it: it analyzes the emotion in facial expressions. Module 8 changes the kind of card it returns based on the result of analyzing the face.

Why C#?

The workshop has been mainly Node so far, and I think the sudden switch to C# is meant to add some variety; image processing may also be easier to write in C#.

Summary

This post ends the deep dive series for the workshop, but I learned a lot from it.

- In serverless, you implement and publish each feature separately
- Services can be composed either loosely or tightly coupled
- There is little dependency on any single language, so development and collaboration become more flexible

Tomorrow is finally the ServerlessConf session day. Let's enjoy it!

中村  憲一郎

Adding analytical reports to Dynamics 365 for Operations


INTRODUCTION

This article provides a walkthrough for application developers seeking to add an analytical report to a Dynamics 365 application.  For this scenario, we will extend the Reservation Management Workspace in the Fleet Management application to include a direct link to an analytical report authored against the Entity Store using Power BI Desktop.

OVERVIEW

Whether you are extending an existing application Workspace or introducing one of your own, analytical reports can be used to deliver insightful and interactive views of your business data.  The process for adding an analytical report is broken down into the following tasks:

  • TASK #1 - Add the PBIX file as a resource to your model
  • TASK #2 - Introduce a Menu Item to control access to the report

PRE-REQUISITES

  • Access to a Dynamics 365 developer environment running on Platform Update 8 or later
  • Analytical report (.PBIX file) authored using Power BI Desktop with a data model sourced from the Dynamics Entity Store Database
  • IMPORTANT:  Use the steps described here to enable Analytical solutions in a 1Box environment

Note:  For detailed instructions on authoring Analytical Workspaces & Reports using Power BI Desktop take a look at the article Authoring Analytical Workspaces & Reports.

WALK-THROUGH

TASK #1 - Add the PBIX file as a resource to your model

To begin, you'll need to author or obtain the Power BI Report to embed in the workspace.  For more information on creating analytical reports, review the Getting started with Power BI Desktop literature.

Use the following steps to add a PBIX file as an Operations Resource artifact:

  • Create a new project in the appropriate model
  • Select the Project in the Solution Explorer, then right + click and select Add > New Item
  • In the Add New Item form, select the Resource template under Operations Artifacts
  • Provide a Name to use when referencing the resource then click Add
  • Now, locate the PBIX file containing the Analytical report definition and then click Open

Now that you've added the file named AnalyticalReport.PBIX as a model resource called 'PowerBIReportName', you can begin adding menu items that reference the report.

TASK #2 - Adding link to Analytical Report from Reservation Management Workspace

Use the following steps to extend the form definition for the Reservation Management Workspace

  • Select the Project in the Solution Explorer, then right + click and select Add > New Item
  • In the Add New Item form, select the Display Menu Item template under User Interface
  • Provide a name to use when referencing the report in X++ metadata then click Add
  • Open the Menu Item designer for the new item
  • In the Properties window, set the Object Type value to Class
  • In the Properties window, set the Object value to PBIReportControllerBase
  • In the Properties window, set the Parameters value to the name of the new Resource (note: this item was created in Task #1 'PowerBIReportName')
  • Now, rebuild the project and open the application

That's it. You can now access the analytical report directly using the new application menu item. Either add the Menu Item to an existing application menu, or use business logic to embed the solution in your application.

Visual Studio 2013 が 異常終了 (クラッシュ) する


Hello, this is the Visual Studio support team.
Since the beginning of this month, we have received reports that Visual Studio 2013 crashes right after startup and that users can no longer sign in.
We are investigating the details together with the product group; our current understanding is that the issue is caused by a problem in the server-side processing that handles license activation for Visual Studio 2013.
We will update this blog as the investigation progresses.

Note that this problem has been confirmed only with Visual Studio 2013. It does not occur with Visual Studio 2015/2017, or with Visual Studio 2012 and earlier versions, which do not perform online license activation.
Also, in environments where the problem occurs, it always occurs; if Visual Studio 2013 is currently working fine for you, or you have already completed sign-in, you are not affected by this issue.

 

Current workaround
--------------------------------------------
Based on our investigation so far, we have confirmed that Visual Studio 2013 works correctly once the latest Update 5 is applied.
If you have not applied Visual Studio 2013 Update 5 yet, please try applying it.

 

Remarks:
Under the Visual Studio support lifecycle, when the latest Update (service pack) is released, support for the previous update level ends one year after the release of the latest update.
 
Product lifecycle search - Visual Studio 2013
https://support.microsoft.com/ja-jp/lifecycle/search?alpha=Visual%20Studio%202013
 
For Visual Studio 2013, Update 5 was released on July 20, 2015, so environments on Update 4 or older are no longer supported.
From this support lifecycle standpoint as well, please apply the latest Update 5.

For details about the new features and fixed issues in Visual Studio 2013 Update 5, see the following articles.
 
Description of Visual Studio 2013 Update 5
https://support.microsoft.com/ja-jp/help/3021976
 
Visual Studio 2013 Update 5 (2013.5) RTM
https://www.visualstudio.com/ja-jp/news/releasenotes/vs2013-update5-vs
 

 

The installer is available from the Visual Studio Subscriptions download page.
 
Visual Studio Subscriptions download page
https://my.visualstudio.com/downloads
 

If you cannot apply Update 5 right away, please try the following temporary workaround.
 
1) Take the PC offline and start Visual Studio 2013.
 
2) If Visual Studio 2013 starts, select [Help] - [Register Product].
 
3) If you are already signed in, click the "Sign out" button to sign out.
 
4) After sign-out completes, start Visual Studio 2013.
Go back online after signing out.
 
* If you sign out, Visual Studio 2013 cannot refresh its license information, so depending on your usage you may see a dialog asking you to update the license.
* If a license update becomes necessary, please look up your product key on the my.visualstudio.com site mentioned above and activate with the product key.
 
As noted above, the investigation of this issue is currently focused on the license activation server-side processing.
If the problem is identified as a server-side issue and is fixed there, no action may be required in your local environments.
We apologize for the inconvenience and appreciate your understanding.

Converting PCL (Portable Class Libraries) to .NET Standard Class Libraries – Part 2


In Part 2 of this 3 part series, App Dev Manager, Herald Gjura covers upgrading the continuous delivery and Build/Release pipeline in VSTS.



In order for the new .NET Standard packages to build and release, we will need to modify the build definitions.

Here is a step-by-step guide to the changes, using an existing build definition and highlighting all the important changes along the way.

VSTS Build Definition

Setting the build variables

The only custom build variable I use for these packages is the $(ProjectName) variable, set to the name of the package's project. For the package I am using in this example, it looks like this:


Setting the build tasks

Process tab: When creating a new build definition, or modifying an existing one, make sure you select a hosted agent of type Hosted VS2017. This is found in the Process tab at the top of the task list. This is very important, as the solution will not build outside a VS 2017 environment.


Get Sources task: This task remains unchanged. You can choose the default, or configure it to get the source code from wherever it is hosted. In my case this is from a Git repo in VSTS.


dotnet restore task: The old NuGet restore task is not used in this scenario; the dotnet restore command is used instead. Configure it to pull the NuGet packages the solution needs in order to build successfully.

In my case the packages come from two sources: the main nuget.org package feed and a private VSTS package feed that I use to host all my packages.


dotnet build task: Here, too, the task that builds a VS solution is replaced by the dotnet build task. Note the arguments: -c takes the build configuration we pass as a variable (in this case Debug), --no-restore means we do not want to restore the packages again, since we did that in the step above, and --no-incremental forces the build to ignore any incremental builds.
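Conceptually, these two tasks run something like the following dotnet CLI commands (the solution name and feed URL here are illustrative):

dotnet restore MyPackage.sln --source https://api.nuget.org/v3/index.json --source https://myaccount.pkgs.visualstudio.com/_packaging/MyFeed/nuget/v3/index.json
dotnet build MyPackage.sln -c Debug --no-restore --no-incremental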


dotnet test task: This task replaces the VSTest task that would compile and run a test project. It now uses .NET Core to accomplish this.

The usage is rather simple: choose the test command and set the path to the project. In my case I use the $(ProjectName) custom variable. Since I have many of these packages to build, using a custom variable allows me to use the same tasks in a task group and reuse the task group across all the packages. For the sake of this exercise, and in order to show the inner details of each task, I am not using the task group.

Also, to accomplish this successfully I rely on a naming convention, as mentioned in Part 1 of this blog, whereby test projects are named <Project Name of the Package>.Tests.csproj and, for .NET Core tests, <Project Name of the Package>.Tests.Core.csproj.


Publish Test Results task: Unlike the previous VSTest task, dotnet test compiles and runs the tests but does nothing else. You will need to publish the test results for them to be picked up by VSTS for reporting and analysis. You do that with the Publish Test Results task, configured as shown:


dotnet pack task: Now that we were able to build the projects and successfully run all the tests, we should package them into a NuGet package. The previous NuGet Packager task is of no use here: as of now it does not work with .NET Standard packages. We will use the dotnet command line instead and call its pack command. There are 4 properties you will need to set carefully:

- Select pack in the Command field

- Choose the proper path to the csproj file. Note that the project file now contains all the properties that were previously stored in the .nuspec file; our package does not have a nuspec file anymore.

- Choose the path where you want the package artifacts to be stored

- Check the Do not build checkbox. Since we have already built this project and run its tests, we do not want to repeat that, but rather only output the package artifacts.


Package versioning (optional): As some of the screenshots show, this build definition is for a Dev/Continuous Integration build. I have 4 types of builds for each package: Dev/CI, Release/QA/UAT, Hotfix, and Prod. As a Dev/CI build, this one runs very frequently; it runs every time I make any changes to the code.

The Dev/CI release pipeline is also set up in a CI manner, so it will run and publish the package as soon as the build succeeds. At this frequency it is very difficult to manage the versioning of the package manually.

Because a package with the same version would fail when published to the feed, a mechanism for bumping the version is needed. The dotnet pack task offers such a mechanism: in the task you can override the version number in the .csproj file with an environment variable, the build number, or the date/time. I have chosen to use date/time.

The settings for the automatic package versioning is in the Pack Options of the task.

As you can see, I have chosen a predefined Major/Minor/Patch version, and during the build this task appends another set of numbers based on the date/time of the build. The result is a package stamped with a version such as <PackageName> 0.0.2-CI-2017731-23658, where the -CI-2017731-23658 part has been added by the task.

This works very well for Dev/CI builds and scenarios, but it is not appropriate for UAT and Prod scenarios. In those situations you will need to set the PackageVersion and PackageReleaseNotes tags in the .csproj file and manage them appropriately.
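As a rough CLI equivalent of what the pack task does in this configuration (project path, configuration, output folder, and version suffix are illustrative):

dotnet pack MyPackage/MyPackage.csproj --no-build -c Debug -o artifacts /p:PackageVersion=0.0.2-CI-2017731-23658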


Copy Publish Artifact task: This task simply copies all the output package artifacts to the artifacts staging directory, ready for the release definition to pick them up for publication.


This completes all the tasks needed to build, test, and package the NuGet artifacts and make them ready for release. I will now move on to completing the release definition and publishing the package to the private VSTS feed for my organization.

VSTS Release Pipeline Definition

The release pipeline definition for this package is rather simple: it has only two tasks. Yet it has some properties and details that I would like to finalize properly.

Setting the release variables

In this release definition I set one custom variable: the physical path where the build artifacts are. It is as follows:


Setting the release tasks

Run on agent tab: For this one I take the default values.


.NET Core installer task: .NET Core 2.0 is not installed by default on the hosted agents, so you will need to install it before going any further. Eventually this task will become obsolete, once .NET Core 2.0 is part of the agent. Make sure to install the SDK including the runtime, not only the runtime.


.NET Core push task: This task pushes the newly created NuGet package to the organization's private NuGet feed. Apart from setting the Path to NuGet Package via a variable, the settings are self-explanatory.


This completes the tasks for the release definition to publish the NuGet package to your organization's private feed. Next we will look at how to consume the .NET Standard packages in your applications.

Coming Soon - Part 3: Upgrading the Continuous Delivery and Build/Release pipeline in VSTS

Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Microsoft Edu Moments


2018 is rapidly approaching, autumn leaves are falling, and winter is coming! It is by far the best time of the year. The education team has had quite a week: 'Future Decoded', Microsoft Training Academy sessions inspiring educators, and our incredible MIEE community sharing their best practice in the classroom. Let's celebrate our successes from this week and say thank you to our educators, who are committed to ensuring that every student achieves more!

This week we will focus on:

  1. Future Decoded
  2. Microsoft Training Academy
  3. MIEE Celebration
  4. Roadshow Events

 

Microsoft Future Decoded offered an inspiring vision of the digital business of tomorrow, with tangible advice to drive your business forward today. The brightest decision makers, developers, and IT pros learned how digital transformation, artificial intelligence, cloud for good, and digital skills can pave the way for business success.

 


Microsoft Training Academy Paddington:


Over the last few years, hundreds of school leaders, educators, and students have passed through the Microsoft Showcase Classroom, originally in Victoria, and more recently in our London Paddington offices.

In the last few months this space has gone through an overhaul, and we are pleased to announce the opening of the new Microsoft Training Academy: Paddington.

Designed with school leaders and educators in mind, the Microsoft Training Academy is an experience that empowers digital transformation within everyday teaching, and at a school or multi-academy trust level. Delivered in person by Microsoft Learning Consultants, sessions in the Paddington MTA provide hands-on, tailored training on how to best utilise Microsoft technology in education. All attendees are given their own device for the duration of the day, allowing them to explore, collaborate and learn.

We have created agendas suitable for Primary, Secondary, and Further and Higher Education audiences, including all the best ways to utilise Office 365 and Windows in learning. However, we recognise that every institution will have its own unique challenges and goals as part of its journey of digital transformation, and we can create a bespoke day suited to where you are on your journey.

A Microsoft Training Academy experience is free of charge, and refreshments will be provided in the morning and lunch at midday. Below we will explore a little further what a typical day looks like, meet the presenters, and provide information about how to book your own session.


MIEE Celebration:


Roadshow Events: 

Promoting our exciting first-ever #MicrosoftEDU UK Roadshow, launching this month, with an aim to travel all around the UK over this year! Sign up, come along, and unlock your potential using Microsoft learning tools in the classroom! Take a look at our dedicated Microsoft Educator Community #MicrosoftEDU UK Roadshow page to find events near you now!



So that wraps up this week's Microsoft Edu Moments. If you would like to share your successes and feature in this blog please contact Jose Kingsley via Twitter!


Utilize Data Export Service as data replication service in Dynamics CRM


It is always crucial not to degrade CRM performance and frustrate users with slowly loading pages. The Data Export Service is an add-on service, delivered as a Dynamics 365 (online) solution, that replicates data to an Azure SQL Database; it is a fairly good solution, although not perfect.

If you need to work in near real time and there are a lot of plugins running against your CRM Online instance, you need to consider the alternative scenario described in this post.

Below is the solution design diagram where you can find Azure components:
Dynamics CRM integration solution design

Dynamics CRM is the ultimate source system: it shares some of its data through the Data Export Service and signals changes over a Service Endpoint to Azure Service Bus. The Logic App picks the message up and processes it, and the result flows back into Dynamics CRM.

Troubleshooting Logic Apps X12, EDIFACT schema not found issues


As you attempt to encode an outgoing B2B message with Logic Apps, you sometimes get a 400 Bad Request response with the error message:

Message cannot be serialized since the schema http://Contoso.X12810.MSLI.Schemas.AP.X1200401810#X12_00401_810 could not be located. Either the schema is not deployed or multiple copies are deployed.

For cases where there is no schema reference at all, we are improving this message to:

Message cannot be serialized since the schema 'http://Contoso.X12810.MSLI.Schemas.AP.X1200401810#X12_00401_810' could not be located. Schema reference could not be found in the agreement send settings.

If you edit the agreement settings in the raw JSON view in particular, it is easy to mistakenly add the schema references to the agreement receive settings, because the structure of the receive and send settings is identical (even though their place in the hierarchy is not).

You may instead have the schema reference in the proper send settings location of the agreement while the schema is indeed not present at the given address, in which case we are improving the error message to:

Message cannot be serialized since the schema 'http://Contoso.X12810.MSLI.Schemas.AP.X1200401810#X12_00401_810' could not be located at 'foobar'. Either the schema is not deployed or multiple copies are deployed.

This may happen if the schema is deleted from the integration account by accident after the reference was added to the agreement. Removing a schema or other artifact from the integration account does not remove the references that agreements or other artifacts hold to it; when you remove an artifact, it is your responsibility to ensure no references to it remain. If you get the message above, some processing in your integration solution still depends on the schema, and you should likely upload it to the integration account again.

What’s new in Microsoft Social Engagement 2017 Update 1.10


Microsoft Social Engagement 2017 Update 1.10 is ready and will be released in November 2017. This article describes the new features, fixes, and other changes included in this update.

New and updated features

Microsoft Social Engagement 2017 Update 1.10 introduces the following features:

Updated user experience for activity maps

To make activity maps more accessible to everyone, we introduced shapes as an additional means of displaying information on the maps. Sentiment values and the age of posts are now expressed in color and shape.

Update to alert emails

Alert emails now provide a link to the Analytics area in the Social Engagement app, where you can see the posts that match the data set that triggered the alert. Post content is no longer delivered as part of the email message. These links now also reflect changed alert configurations and deleted alerts.

New video training on Microsoft Virtual Academy

Learn how to build search topics and navigate Social Engagement to get the most out of it. Plus, examine different strategies for managing your social presence. Explore the social engagement circle and social strategy, and look at brand reputation and social business opportunities. Take a look at post consumption and analytics, configure automation options, and much more.

Learn more in the Microsoft Social Engagement course on MVA.

New social post packs for Microsoft Social Engagement

Next to the 10,000 monthly post package for Microsoft Social Engagement, new packages with 100,000 and 1,000,000 monthly posts are now available as part of the Microsoft Products and Services Agreement (MPSA). The MPSA is a transactional licensing agreement for commercial, government, and academic organizations that have 250 or more users or devices.

Resolved issues

In addition to the new features, Update 1.10 addresses the following issues:

  • Updated and translated UI text for several languages throughout Microsoft Social Engagement.
  • Improvements on the backend for acquisition stability and reliability.
  • Improved rate limiting on Facebook Pages, which will result in less delay on acquiring posts and comments from Facebook Pages, as well as unblocking publish actions.
  • Fixed an issue with truncated text strings in previous Microsoft Social Engagement versions.
  • Fixed an issue on Analytics > Overview where history charts displayed the first month of a custom time frame incorrectly.
  • Fixed an issue with keyword filters where the input field was limited by mistake.

IP Restriction for App Services on Linux


In order to restrict access to clients based on IP address in App Services on Linux, we need to add entries to the .htaccess file. For App Services on Windows, click here.

In App Services on Linux, the visitor/client IP is made available to the web app through the "X-Client-IP" environment variable. The log format can be found in /etc/apache2/apache2.conf.

NOTE: .htaccess is relevant for Apache based web apps only

In the .htaccess file, we will use this field to allow or deny access to clients.

Allow specific IP-Address
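A minimal sketch of such an allow rule, assuming the client IP is surfaced to Apache as the X-Client-IP request header (placeholder address, Apache 2.4 syntax):

SetEnvIf X-Client-IP "xxx.xxx.xxx.xxx" allowedclient
Require env allowedclient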

Deny specific IP-Address
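And a corresponding deny rule (again a sketch with a placeholder address; verify against your Apache version):

SetEnvIf X-Client-IP "xxx.xxx.xxx.xxx" deniedclient
<RequireAll>
    Require all granted
    Require not env deniedclient
</RequireAll>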

In the configurations above, replace xxx.xxx.xxx.xxx with the corresponding client IP address.

Line Messaging API SDK update to v1.5


Following the announcements at LINE Developer Day 2017 the other day, I have updated the LineMessagingAPI.CSharp samples to support the following features.

GitHub: https://github.com/kenakamu/line-bot-sdk-csharp/releases/

- DateTime Picker

- Image Carousel

- Rich Menu

- Retrieving the member list and member details for Groups and Rooms. * I haven't been able to test this because I only have a developer account.

You can also test these by sending the following messages with the Visual Studio template.

carousel: Returns a carousel with a DateTime Picker attached.
imagecarousel : Returns a carousel of five images.
addrichmenu : Adds a rich menu to the current user.

deleterichmenu: Deletes the rich menu that was added.
deleteallrichmenu: Deletes all rich menus.

 

LINE Developer Day 2017 was the other day, and I updated LineMessagingAPI.CSharp to support following features.

GitHub: https://github.com/kenakamu/line-bot-sdk-csharp/releases/

- DateTime Picker

- Image Carousel

- Rich Menu

- User list and detail who joined to Group and Room

You can send following text to test new features in Visual Studio template.

carousel: Returns carousel with DateTime picker options.
imagecarousel : Returns a carousel with 5 image columns.
addrichmenu : Add Rich menu to current user.

deleterichmenu: Delete added rich menu.
deleteallrichmenu: Delete all rich menus.

 

Ken

The 10 command(ment)s of Docker (NAV on Docker #6)


I recommend that you read this blog post before reading this.

In this blog post I will describe the 10 Docker commands I use most frequently and what I use them for. The commands can be executed in a command prompt, PowerShell, or PowerShell ISE on a machine with Docker installed. In any case, you need to be running as administrator.

docker images

docker images will show a list of the current images available on the host:

In this list, you can see that I only have one image on my machine, which is the W1 version of the devpreview.

microsoft/dynamics-nav:devpreview

As you probably know, Docker uses a layering technology, meaning that when I have this image I also have all the layers of the images on which it relies; I just don't have separate tags for those base images. If I perform a docker pull of microsoft/windowsservercore (which is one of the base images of the NAV on Docker image), you will see that Docker reports that the layers already exist, and now we have another image in the list:

docker pull

docker pull is used to pull images from a Docker registry. When pulling NAV on Docker images (that is, images starting with microsoft/dynamics-nav) from the public Docker Hub, you do not need credentials to pull an image.

If you are trying to pull an image from a private registry (like navdocker.azurecr.io), you will need to authenticate to that Docker registry before being able to pull images.

If you are pulling an image, where you already have all layers (from other images), then the pull will just map the tags with the layers and you are done. This is what happened in the sample provided under docker images.

If you however pull the financials US version of the developer preview, you will see that Docker reuses a number of layers and proceeds by downloading the missing pieces. The financials US version is of course built on top of the W1 version which we already had:

docker rmi

docker rmi will remove an image:

Note that you specify the image id as a parameter, and you only need to specify as many characters as it takes to make the id unambiguous (docker rmi 5 would have been enough in this example).

Note also that Docker only deletes the layers that are unused. As in the example where you pull windowsservercore and it discovers that all layers already exist, deleting windowsservercore will only remove the tag, not delete any layers, as they are still used by the devpreview image.

docker run

docker run is described in a little more detail in this blog post.

One important thing to notice is that docker run automatically pulls the image if it doesn't exist locally. If the image already exists, it is reused. That means that even if you remove a container and issue a new docker run microsoft/dynamics-nav:devpreview after the next update is available, it will NOT download the new version. You have to issue a docker pull explicitly in order to pull a new version of an image.
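For instance, a sketch of that workflow (the container name is illustrative, and accept_eula must be set for the NAV container to start):

docker pull microsoft/dynamics-nav:devpreview
docker run -e accept_eula=Y --name navtest microsoft/dynamics-nav:devpreview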

docker ps

docker ps shows currently running containers.

docker ps -a shows all containers on the host (running or exited).

Let's say you have issued a docker run and forgot to set accept_eula to Y (I am sure you will never make that mistake); then you might see something like this:

The f5 container (pensive_roentgen) exited immediately with an error specifying that you need to accept the EULA, but the container still exists. It is just in an exited state. Docker ps without the -a would not include the f5 container.

docker rm

docker rm will remove a container. If the container is exited (stopped), you can just issue the command directly. If the container is running, you need to specify -f with docker rm:
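For example (using the exited container from the docker ps example above, and an illustrative name for a running one):

docker rm pensive_roentgen
docker rm -f navtest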

docker inspect

docker inspect is used to inspect a container or an image:

When inspecting an image, one of the interesting pieces are the Labels:

"Labels": {
    "country": "W1",
    "created": "201710262345",
    "cu": "october",
    "eula": "https://go.microsoft.com/fwlink/?linkid=861843",
    "legal": "http://go.microsoft.com/fwlink/?LinkId=837447",
    "maintainer": "Freddy Kristiansen",
    "nav": "devpreview",
    "osversion": "10.0.14393.1770",
    "tag": "0.0.3.1",
    "version": "11.0.18712.0"
 }

Here you will find some useful information like:

  • country is the localization of NAV (if this starts with fin, then you are running a Financials database)
  • osversion is the version of the microsoft/windowsservercore which was used to build this image
  • tag is the tag of the generic nav image (the base image for all NAV images)
  • version is the version number of the NAV executables
  • nav and cu is the version and cumulative update of the image
  • eula is the NAV on Docker EULA
  • legal is the legal link for the installed version of Microsoft Dynamics NAV

If you inspect a container instead of an image, you will have other things to inspect like healthcheck, environment variables and network:

"Networks": {
    "nat": {
    "IPAMConfig": null,
    "Links": null,
    "Aliases": null,
    "NetworkID": "cd2b370039b6d44e35db95971ce7f120eed0b3509bdb6d3b1e1c0c36f8ea6a9d",
    "EndpointID": "f094d82a29797ef17c3537b6047232981512ca6cfc4417a9759f6bc9cb16db57",
    "Gateway": "172.19.144.1",
    "IPAddress": "172.19.154.78",
    "IPPrefixLen": 16,
    "IPv6Gateway": "",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "MacAddress": "00:15:5d:f4:8d:56",
    "DriverOpts": null
}

Note: every issue created on the GitHub repository http://www.github.com/microsoft/nav-docker should be accompanied by the docker inspect output of the container that is not working. If it isn't included, it is probably the first thing you will be asked for.

Note also: if you have provided your password in clear text to docker run (insecure), then the password will be visible in docker inspect. If you have provided your password through securepassword (which is what navcontainerhelper always does), then you will only see an encrypted string, with no way to get to the key to decrypt it.

docker logs

If docker inspect is the first thing you will be asked for, then docker logs is the second.

docker logs will write the logs from your container - much like the startup text. If you are running your containers with -d (detached), docker logs is the only way to inspect the installation phase/output of your container:

Note: docker logs will never display the password you provided whether you did so securely or insecurely.

docker start/stop/restart

So these ones are very very simple and I almost do not want to describe them, but what the heck...

  • docker stop <containername> stops a running container
  • docker start <containername> starts a stopped container
  • docker restart <containername> restarts a running container

Stopping a container means that it will stop using CPU and memory, but will still reside on disk, ready to be started again.

docker commit

docker commit is by far the easiest way to create your own Docker image.

What it does is save the current state of a stopped container as a new image - and then you can use docker run to run the new image.

Start out by listing the running containers and stop the one you want to clone. Then simply use docker commit to create a new image called myimage, and last, but not least, run the new image:
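A sketch of that sequence (container and image names are illustrative):

docker ps
docker stop navtest
docker commit navtest myimage
docker run --name navtest2 myimage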

As you can see, no need to set the Accept_eula environment variable, as you have already accepted the eula when running the container in the first place.

Now, you might be wondering why it still has to do so many things on startup. Create NAV Web Server instance - wasn't that already created?

Well yes, it was - but the NAV on Docker image will discover that the hostname has changed, and therefore it reruns all the setup routines that depend on the hostname. If instead you start the NAV on Docker image without the WebClient and without the Http file download site, your image will start in approx. 10 seconds:

Don't blink or you will miss the match...:-)

 

Enjoy

Freddy Kristiansen
Technical Evangelist

Supporting TLS 1.2 in SharePoint 2013


In March of 2014 the .NET Framework 4.6 was updated to provide support for TLS 1.1 and 1.2.  This was made possible through a series of registry entries that enable this more secure method.

An issue with REST calls still exists on this version of the .NET Framework because of interfaces based on WCF, which can cause Event ID 5566 to be recorded in the server's application log.

For example, take an InfoPath form that has been configured with a button making a REST call to another SharePoint list to retrieve data.  If you have followed the guidance on configuring TLS 1.2 in the registry, you'll be presented with an error.

In the ULS log you'll want to look for an entry similar to this:

"The following query failed: <NameOfRESTQuery>"..."The underlying connection was closed: An unexpected error occurred on a receive"

The tip off would be the name of the query you're trying to execute and knowing that it's making a REST call.

The fix is fairly straightforward:

  1. Install the .NET Framework 4.7 - Starting with the .NET Framework 4.7, WCF allows you to configure TLS 1.1 or TLS 1.2 in addition to SSL 3.0 and TLS 1.0 as the default message security protocol.
  2. Make the necessary registry entries to support TLS 1.2
  3. In the Web.Config file of the web application you intend to use TLS 1.2 on, make the following entry:

<runtime>

<AppContextSwitchOverrides value="Switch.System.ServiceModel.DisableUsingServicePointManagerSecurityProtocols=false;Switch.System.Net.DontEnableSchUseStrongCrypto=false" />

</runtime>
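For reference, the registry entries mentioned in step 2 typically look like the following sketch (verify against the official TLS 1.2 guidance for your environment before applying):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000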

Once these changes have been made, REST calls can be made without any errors being returned.

NOTE:  This approach only works on SharePoint 2013 because it uses the .NET Framework 4.x. It will not work on SharePoint 2010, since that uses the .NET Framework 3.x.


Line Messaging API SDK update to v1.6


I just bumped the SDK to v1.5, but based on advice I received in an issue on another repository about making HttpClient a singleton, and because attending ServerlessConf made me want to support Azure Functions, I have now made the library target .NET Standard 2.0. The various VS templates have been updated accordingly.

Because of the move to .NET Standard 2.0, simply upgrading the SDK in an existing template may not work; the following is required.

  • .NET Framework 4.6.1 or later
  • Update the dependent NuGet packages

I just released v1.5, but I forgot to implement the advice I got from an issue in another repository about making HttpClient a singleton for performance. Another reason is that I joined Serverless Conf today and decided to implement an Azure Functions version of this, so I converted the PCL library into .NET Standard 2.0. I updated the VS templates as well…

This introduces a breaking change if you are using the old VS templates. In such a case, you need to:

  • Target to .NET Framework 4.6.1 or later.
  • Update dependent NuGet packages.
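
For reference, the HttpClient singleton mentioned above generally looks something like this (a generic illustration of the pattern, not the SDK's actual code):

using System.Net.Http;

public static class SingletonHttpClient
{
    // Reuse one HttpClient instance for the lifetime of the process to avoid socket exhaustion.
    public static readonly HttpClient Instance = new HttpClient();
}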

Ken

How to setup Microsoft R Server One-box configuration to support AD account via LDAP


One of my customers asked how to set up Microsoft R Server; basically, there are two options:

  1. One-box Configuration
  2. Enterprise Configuration

In this step-by-step POC tutorial, I will show you how to set up the One-box configuration to support AD accounts via LDAP. It covers:

  1. Set up the One-box configuration on the server side
  2. Configure the One-box configuration to support AD via LDAP
  3. Set up the client side to connect to R Server and test it

One-box Configuration Lab Environment:

  • Microsoft R Server 9.1.0/SQLLite3.7 - SQL2016N2
  • RStudio + Microsoft R Client 3.4.1 - SQL2014Client
  • DC: SQL2014DC.Features2014DC.local
  • All same password for test only: Corp123!

Steps tested in the Lab environment:

R Server configuration (SQL2016N2):

  1. Install R 9.1.0 in en_microsoft_r_server_910_for_windows_x64_10324119.zip (From <https://docs.microsoft.com/en-us/machine-learning-server/install/r-server-install-windows> )
  2. You can find the setup log (naming convention like this: Microsoft_R_Server_20171018143405.log) in %temp% to make sure the installation completed successfully
  3. Connect and validate R Server installation locally
  • R Server runs on demand as a background process, as Microsoft R Engine in Task Manager. Server startup occurs when a client application like R Tools for Visual Studio or Rgui.exe connects to the server.
  • As a verification step, connect to the server and execute a few ScaleR functions to validate the installation.

          1) Go to C:\Program Files\Microsoft\R Server\R_SERVER\bin\x64.
           2) Double-click Rgui.exe to start the R Console application.
           3) At the command line, type search() to show preloaded objects, including the RevoScaleR package.
           4) Type print(Revo.version) to show the software version.
           5) Type rxSummary(~., iris) to return summary statistics on the built-in iris sample dataset. The rxSummary function is from  RevoScaleR.
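
Put together, the quick validation session in Rgui looks like this:

search()              # shows preloaded objects, including RevoScaleR
print(Revo.version)   # shows the software version
rxSummary(~., iris)   # summary statistics on the built-in iris dataset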

      4.  At this point the R Server installation is verified locally, so let's configure the server. First, start the Admin Utility -

          CD C:\Program Files\Microsoft\R Server\R_SERVER\o16n\Microsoft.RServer.Utils.AdminUtil

          dotnet Microsoft.RServer.Utils.AdminUtil.dll

          From <https://docs.microsoft.com/en-us/machine-learning-server/operationalize/configure-use-admin-utility>

          Note: A local 'admin' account might be sufficient when operationalizing with a one-box configuration, since everything is running within the trust boundary, but it is insufficient for enterprise configurations.

      5.  Configure Server/Configure R Server for Operationalization

           A. One-box (web + compute nodes)

           B. Set admin password: Corp123! ("admin" is the default admin account for One-box configuration, once you configure to use AD account, this admin account will not be used any more)

      6.  Run Diagnostic Tests to test the configuration - https://docs.microsoft.com/en-us/machine-learning-server/operationalize/configure-run-diagnostics

           A. Test configuration for a 'health report' of the configuration including a code execution test. Result:

clip_image001

  • Review the test results. If any issues arise, a raw report appears. You can also investigate the log files and attempt to resolve the issues.
  • After making your corrections, restart the component in question. It may take a few minutes for a component to restart.
  • Rerun the diagnostic test to make sure all is running smoothly now.

Client Configuration (SQL2014Client):

      Microsoft R Client overview:

  • R Client allows you to work with production data locally using the full set of ScaleR functions, but there are some constraints. On its own, the data to be processed must fit in local memory, and processing is capped at two threads for RevoScaleR functions.
  • To benefit from disk scalability, performance and speed, push the compute context using rxSetComputeContext() to a production instance of Microsoft R Server (or R Server) such as SQL Server Machine Learning Services and Machine Learning Server for Hadoop. Learn more about its compatibility.
  • You can offload heavy processing to Machine Learning Server or test your analytics during their development by running your code remotely using remoteLogin() or remoteLoginAAD() from the mrsdeploy package.

           From <https://docs.microsoft.com/en-us/machine-learning-server/r-client/install-on-windows>

     Client Install:

  1. Rstudio - https://www.rstudio.com/products/rstudio/download/

           Or Visual Studio 2015 + R Add-on - https://docs.microsoft.com/en-us/visualstudio/rtvs/installation

      2.  Download and install Microsoft R Client at http://aka.ms/rclient/

       3.  Taking RStudio as an example, set up RStudio to use Microsoft R: go to Tools -> Global Options -> General -> R version and point it to the Microsoft R Client: [64-bit] C:\Program Files\Microsoft\R Client\R_SERVER

           Reference: https://support.rstudio.com/hc/en-us/articles/200486138-Using-Different-Versions-of-R

       4.  Now you should be able to remoteLogin to the R Server SQL2016N2 and test it. Test the remote connection in RStudio from the client machine SQL2014Client:

           Refer to https://docs.microsoft.com/en-us/machine-learning-server/operationalize/how-to-connect-log-in-with-mrsdeploy#authentication

> # EXAMPLE: LOGIN, CREATE REMOTE R SESSION, GO TO REMOTE PROMPT

> remoteLogin("http://SQL2016N2:12800")

# here it will prompt you to enter the username: admin and its password you defined

REMOTE> x <- 10 # Assign 10 to "x" in remote session

REMOTE> ls() # List objects in remote session

[1] "x"

REMOTE> pause() # Pause remote interaction. Switch to local

> y <- 10 # Assign 10 to "y" in local session

> ls() # List objects in local session

[1] "y"

> putLocalObject(c("y")) # Loads local "y" into remote R session's workspace

> resume() # Resume remote interaction and move to remote command line

REMOTE> ls() # List the objects now in the remote session

[1] "x" "y"

REMOTE> exit # Destroy remote session and logout

>

The below flow chart shows you how the client/server interacted with local and remote R Sessions using pause and resume:

clip_image002

From <https://docs.microsoft.com/en-us/machine-learning-server/operationalize/how-to-connect-log-in-with-mrsdeploy>

Now the client can remoteLogin to the R Server and write R code, train a model, score a model, and publish a model as a web service. That is all good, but we have given out the admin password to all R contributors; for a POC that may be OK, but in the real world we need this to support AD accounts, not just the admin/password pair. We also need the ability to categorize the permissions into different groups. Below I will explain how to do this. At a high level, Microsoft R Server 9.1 supports three security roles:

  1. Administrator/Owner - this role is the owner of the R Server, who has full control of the R Server and can manage any service
  2. Contributor - this role is a contributor to the R Server, who can publish web services; for example R programmers, data scientists, etc.
  3. Reader - this role is a reader of the R Server, who consumes the web services; for example application developers, etc.

Now, let's continue to configure this One-box R Server to support AD account via LDAP.

  1. Firstly, in order to configure the R Server SQL2016N2 to use AD/LDAP, you need to add this role to the server:

clip_image003

      2.  After the feature/role is installed, you need to set it up via the Setup Wizard below. More details in - https://blogs.msdn.microsoft.com/microsoftrservertigerteam/2017/04/10/step-by-step-guide-to-setup-ldaps-on-windows-server/

clip_image004

clip_image005

  • You don't have to set up SSL/TLS at this moment, because it needs an appropriate certificate. I will put together another blog post on configuring LDAP-S; in this demo, it is just LDAP.

    3.  Now, back to AD. For demo purposes, I created the following roles/groups in AD (features2014dc.local) to use the R Server (you can find more details in https://blogs.msdn.microsoft.com/mlserver/2017/04/10/role-based-access-control-with-mrs-9-1-0/); in this demo, I created the AD groups and users below:

Group: MRSAdmins - this group will be the owner of the R Server, with full control of the R Server

Member: RAdmin1

Group: Rprogrammers - this group will be the contributor of the R Server who can publish web services

Member: RProgrammer1, RProgrammer2, …

Group: AppDevelopers - this group will be the reader of the R Server who consumes the web services

Member: AppDev1, AppDev2

        After you created the groups and demo users in AD, it will look like this:

clip_image006

   4.  Once this is completed, you need to set up the R Server role configuration. You can refer to https://docs.microsoft.com/en-us/machine-learning-server/operationalize/configure-roles (this is for the Enterprise configuration, but it is similar), or simply follow the steps below.

   5.  Now we will need to make some changes in the appsettings.json file for the web node, go to the folder (by default) C:\Program Files\Microsoft\R Server\R_SERVER\o16n\Microsoft.RServer.WebNode

clip_image007

   6.  Find the file appsettings.json and make a backup of it before making changes (I made a backup file called appsettings-backup.json_for_admin, as in the screenshot above), just in case - that way you don't need to redo previously completed steps and can easily revert to admin/password mode if you want to. Open the JSON file in Notepad, find the section below, and make the highlighted changes according to your environment. Here is what each setting means:

  • Host - it is the IP of your DC
  • QueryUserDn - a domain account that has permission to query Active Directory using LDAP; a service account is preferred here, so you don't need to update this when a user leaves the company.
  • QueryUserPassword - the domain account's password, which can be encrypted; if so, change "QueryUserPasswordEncrypted" to true
  • SearchBase - can be the parent directory where you have the groups/users created in the above step 3, in this example, all users and groups created for R Server are under the CN=Users directory.
  • In the Authorization section, add the AD groups indicating the different R Server roles.

"LDAP": {
    "Enabled": true,
    "Description": "Enable this section if you want to enable authentication via LDAP",
    "Host": "<your_host_ip>",
    "Port": 389,
    "UseLDAPS": false,
    "QueryUserDn": "CN=RDeployAmin,CN=Users,DC=FEATURES2014DC,DC=LOCAL",
    "QueryUserPassword": "P@$$w0rd!",
    "QueryUserPasswordEncrypted": false,
    "SearchBase": "CN=Users,DC=FEATURES2014DC,DC=LOCAL",
    "SearchFilter": "cn={0}",
    "UniqueUserIdentifierAttributeName": "userPrincipalName",
    "DisplayNameAttributeName": "name",
    "EmailAttributeName": "mail"
},

"Authorization": {
    "Owner": [ "MRSAdmins" ],
    "Contributor": [ "RProgrammers" ],
    "Reader": [ "AppDevelopers" ]
},

  7.  Now, go back to the Admin Utility; we need to stop and then start the web node to make the change take effect.

clip_image008

      8.  Now we can run the Diagnostic Test. Since the server is now using AD authentication, the original admin account is disabled automatically, so you will need to enter an AD account to validate it. For example, you can enter the AD account RAdmin1/Corp123! - which has the Owner role on the R Server - and it should work.

      9.  Local Diagnostic test passed. Great! Now, let’s test it from the Client machine SQL2014Client, in RStudio:

          Tested Owner - RAdmin1/Corp123! - it is in the MRSAdmins group in AD

clip_image009

          Tested Contributor - RProgrammer1/Corp123! - it is in Rprogrammers group in AD

clip_image010
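
The RStudio tests above boil down to a call like the following (passing credentials via the username/password arguments of mrsdeploy::remoteLogin is optional - leaving them out gives you the interactive prompt shown in the screenshots):

library(mrsdeploy)
remoteLogin("http://SQL2016N2:12800",
            username = "RProgrammer1",  # AD account in the Rprogrammers group
            password = "Corp123!")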


Now you know how to configure Microsoft R Server in the One-box configuration to support AD accounts via LDAP.

References:

Role Based Access Control With MRS 9.1.0 - https://blogs.msdn.microsoft.com/mlserver/2017/04/10/role-based-access-control-with-mrs-9-1-0/

How to publish and manage R web services in Machine Learning Server with mrsdeploy - Owner/Contributors (Administrators or R Programmers) can publish R script as web service - https://docs.microsoft.com/en-us/machine-learning-server/operationalize/how-to-deploy-web-service-publish-manage-in-r

App Developers - usually have the MS R Server Reader role, and can consume the service:

https://docs.microsoft.com/en-us/machine-learning-server/operationalize/how-to-consume-web-service-interact-in-r

https://docs.microsoft.com/en-us/machine-learning-server/operationalize/how-to-consume-web-service-asynchronously-batch

Thanks for reading. Next, I will put together a tutorial on configuring One-box to support LDAP-S, which is LDAP over SSL/TLS.

Embedded Power BI: Interactive integration with Dynamics 365 for Finance and Operations


Introduction

In this blog post we’ll look at the integration between Dynamics 365 for Finance and Operations and its embedded Power BI reports, specifically regarding drill-through and callbacks from Power BI to AX.

Note

This post will not cover authoring, embedding and securing the reports.

This is not intended as full documentation of this feature, but rather a pointer to get developers on the right track. All content is subject to change as Dynamics 365 evolves.

Prerequisites

For the sake of saving time, it is recommended to read through the whole post first, before starting any practical experiments, as some design considerations for the Power BI report will be clarified down the road.

Target

Information embedded in this post should help to achieve the following result - clicking on a visual which represents the customer and opening the appropriate customer form. Note that the screenshot is a mock-up; normally the customer form opens in its own window:

 

Hooking in

After we’ve fulfilled the prerequisites by securing a Dynamics 365 instance with embedded Power BI enabled, creating a simple Power BI report, adding it to metadata and ensuring it is visible on a form, we might develop a keen urge to somehow interact with the newly embedded PBIX file.

The starting point for this interaction will be an event handler which subscribes to the delegate exposed on the PowerBIReportControl class – buildReportDrillThru().

This event handler will be triggered each time the user selects any meaningful data on a report. All the necessary context information is encapsulated within the PBIReportSelectedData object passed as a parameter to the event handler.

Next is the tricky part… We need to make sense of the data provided by Power BI as context.

Where am I?

The first thing we need to do is ensure that our logic is only executed for our report. Unfortunately, so far there is no strongly typed way to do this, so we will have to use the report name for this purpose. PBIReportSelectedData.report().displayName() will return the name of the current report. The following sample shows how to subscribe to the delegate and how to check which report the user is interacting with:

    [SubscribesTo(classStr(PowerBIReportControl), delegateStr(PowerBIReportControl, buildReportDrillThru))]
    public static void PowerBIReportControl_buildReportDrillThru(PBIReportSelectedData _data)
    {
        if (!_data)
        {
            return;
        }
        
        switch (_data.report().displayName())
        {
            case 'PfePowerSampleReport':                                
                PfeEmbeddedPowerBIHander handler = PfeEmbeddedPowerBIHander::construct();
                handler.handleClick(_data);
                break;
 
            default:
                break;            
        }        
    }

After we have successfully identified the report, it would be useful to check which page and visual the user is interacting with. Methods PBIReportSelectedData.page().name() and PBIReportSelectedData.visual().title() can be utilized for these purposes:

    void handleClick(PBIReportSelectedData _data)
    {
        switch(_data.visual().title())
        {
            case 'CustomerCreditLimitList':
                this.handleCustCreditLimitListClick(_data);
                break;

            default:
                break;              
        }
    }

Who are these people?

Now that we’ve figured out in which report the action happens and what the user is clicking on, we need to understand the data context of the click. There are several classes which help us with this task:

  • PBIReportSelectedData - Contains all the context of the drill-through.
  • PBIReportSelectedDataPoint - A class representing one selected data point. Each PBIReportSelectedData instance can contain multiple instances of PBIReportSelectedDataPoint; they are stored as a List and can be accessed via PBIReportSelectedData.dataPoints(). If the user has selected multiple elements, such as several slices of a pie chart, there will be several data points in the list.
  • PBIReportSelectedDataPointValue - Data point values store the report measures selected by the user. For example, in a Power BI table with multiple columns, one of which is a credit limit measure and the rest denote customer information, the credit limit will be returned as a value and the other fields as data point identities. Consider that, depending on the visualization, the value might differ; for example, in a pie chart of currencies, it can return the sum of all the related transactions. Data point values are stored in a list and are accessible via PBIReportSelectedDataPoint.values().
  • PBIReportSelectedDataPointIdentity - Data point identity provides additional context on the value. In the case of the table with multiple columns described above, all the columns which are not measures will be stored in the data point identity, such as customer number and location. Data point identities are stored in a list and are accessible via PBIReportSelectedDataPoint.identities().
  • PBIReportSelectedDataPointValueTarget - The value target contains the name of the measure and the data source from the Power BI report.
  • PBIReportSelectedDataPointIdentityTarget - The identity target contains the name of the data source and the field from the Power BI report.

 

To sum everything up, the user can select one or multiple data points, each of which can consist of multiple measures and fields. For the Dynamics-minded people such as me, a data point would represent a row, and the combination of identities and values would represent all the columns in that row. If it’s a pie chart, think of the row as slightly bent.

And finally, the targets show which report elements map to identities and values.

Points, Identities, Values, Targets and a Browser

The best way to understand what visuals return is to browse their contents. A method like the one below can help with that by pushing the information to the infolog:

    public static void dataPointBrowser(PBIReportSelectedData _data)
    {
        ListEnumerator dataPointEnumerator = _data.dataPoints().getEnumerator();

        while (dataPointEnumerator.moveNext())
        {
            PBIReportSelectedDataPoint dataPoint = dataPointEnumerator.current();

            ListEnumerator identityEnumerator = dataPoint.identities().getEnumerator();           
            while (identityEnumerator.moveNext())
            {
                PBIReportSelectedDataPointIdentity pointIdentity = identityEnumerator.current();
                PBIReportSelectedDataPointIdentityTarget identityTarget = pointIdentity.target();
                info(strFmt("FIELD: %1.%2 Value %3", 
                            identityTarget.table(), 
                            identityTarget.column(), 
                            pointIdentity.identityEquals()));
            }

            ListEnumerator valueEnumerator = dataPoint.values().getEnumerator();
            while (valueEnumerator.moveNext())
            {
                PBIReportSelectedDataPointValue pointValue = valueEnumerator.current();
                PBIReportSelectedDataPointValueTarget pointTarget = pointValue.target();
                info(strFmt("MEASURE: %1.%2 Value %3", 
                            pointTarget.table(), 
                            pointTarget.column(), 
                            pointValue.formattedValue()));
            }
        }
    }

The above example loops through all the values and identities of all the data points and uses the formattedValue() and identityEquals() methods to get the values out of the respective data points.

Mission (almost) accomplished

By combining the visual, data source, and value information, it is possible to understand what the user has clicked on and to react accordingly, for example by opening a form.

The following code snippets read out the Customer Id from the Power BI Report and call a menu item based on that:

    void readData(PBIReportSelectedData _data) 
    {        
        ListEnumerator dataPointEnumerator = _data.dataPoints().getEnumerator();
        while (dataPointEnumerator.moveNext())
        {
            PBIReportSelectedDataPoint dataPoint = dataPointEnumerator.current();

            ListEnumerator identityEnumerator = dataPoint.identities().getEnumerator();
            
            while (identityEnumerator.moveNext())
            {
                PBIReportSelectedDataPointIdentity pointIdentity = identityEnumerator.current();
                PBIReportSelectedDataPointIdentityTarget identityTarget = pointIdentity.target();
                switch (identityTarget.column())
                {
                    case 'CustAccount':
                        custAccount = pointIdentity.identityEquals();
                        break;

                    default:
                        break;
                }
            }
        }        
    }

    void openCustTable()
    {
        Args args = new Args();

        CustTable custTable = CustTable::find(custAccount);

        args.record(custTable);

        MenuFunction menuFunction = new MenuFunction(menuItemDisplayStr(CustTable), MenuItemType::Display);
        menuFunction.run(args);
    }

 

Design considerations / Tips

If the Power BI report data source, field, and measure names match the names of the respective metadata elements in the business database, one could use the DictTable class to instantiate records. That can be achieved by renaming the elements in the Power BI report appropriately, which brings us to the next point:

Give meaningful names to the Power BI report elements to be able to properly track them in Dynamics.

I suggest using constants (const) and/or macros to track the abovementioned element names. The code samples in this post deliberately use text literals for readability.

Use the resourceStr() function to reference the resource.
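
As an illustration, the literals used in the samples above could be captured as class constants (the constant names here are made up):

    // In the handler class (PfeEmbeddedPowerBIHander) from the samples above
    private const str ReportName = 'PfePowerSampleReport';
    private const str CustCreditLimitVisual = 'CustomerCreditLimitList';
    private const str CustAccountColumn = 'CustAccount';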

Further reading / other examples

For more intricate examples on how to utilize this functionality see the standard CustCollectionsBIReportsHandler class.

The well-known Fleet Management module also has an embedded Power BI sample - the FMClerkWorkspace form.

DISCLAIMER

Microsoft provides programming examples for illustration only, without warranty either expressed or implied, including, but not limited to, the implied warranties of merchantability or fitness for a particular purpose.

This post assumes that you are familiar with the programming language that is being demonstrated and the tools that are used to create and debug procedures.

How we did it: PASS 2017 Summit Session Similarity using SQL Graph and Python


I had previously shared a sneak preview of our upcoming session on Graph data processing in SQL Server. The talk is at the PASS Summit 2017. In that post, I had promised to share more details closer to the session. And here it is!

Inferring Graph Edges using SQL ML Services

In many cases, the edges in a graph are deterministic and ‘known’ to the application. In other cases, edges have to be ‘inferred’ or ‘discovered’ by some code:

  • In some cases, node attributes can be used to detect similar nodes and create an edge
  • In other cases, an ETL process could use fuzzy lookups etc.
  • But, for more complex situations, ML Services in SQL Server 2017 and Azure SQL DB can be used as well! sp_execute_external_script can be used to invoke an R / Python script and get back a list of keys to build edges

In this walkthrough we will use ML Services in SQL Server 2017 to invoke a Python script to infer similar edges in a graph.

Approach

The nodes in this graph will be the sessions at PASS 2017 (with the data imported as per this previous post) and then we will use Python to invoke some language processing code to compute the measures of similarity between pairs of sessions, based on their Title and Abstract fields. In summary here is what we will do:

  • Our database has a node table with all the sessions from PASS Summit 2017
  • Sessions are saved as a Node table in SQL Graph
  • Session node has attributes like Session Id, Title, Abstract, Speaker Names and Track
  • Hypothesis: similar themed sessions have similar keywords in their Title / Abstract
  • Using NLP libraries in Python we can break down these sessions into underlying keywords and their frequency counts
  • Construct a “similarity matrix” and then return for each session, those sessions which have at least 15% similarity
  • Construct edges in SQL Graph for these related session pairs

Pre-requisites

We will be leveraging two powerful Python libraries: NLTK and Gensim, to help us analyze the text and derive a measure of similarity for pairs of sessions. While NLTK comes pre-installed with SQL Server 2017 ML Services, you have to install Gensim using PIP:

pip install stop_words
pip install gensim

We will then need to install a "corpus" of stop words for NLTK. This will help eliminate some common "noise" words from text to help improve the accuracy of the analysis. To do this we first create a folder for NLTK data:

md "C:Program FilesMicrosoft SQL ServerMSSQL14.SQL20171000PYTHON_SERVICESLibnltk_data"

Then we use nltk.download() to download and install the stopwords corpus as shown below. The important thing to note is to correctly escape the backslash characters in the path when providing it to the NLTK download GUI. In my case I used:

C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES\lib\nltk_data

Here's a screenshot of the above step in case you are wondering:
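
If you prefer to skip the GUI, the same thing can be done in one call (download_dir points at the folder mentioned above; adjust it to your instance):

import nltk
nltk.download('stopwords', download_dir=r'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES\lib\nltk_data')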

Once the stopwords corpus is downloaded, we proceed to create the necessary SQL node table, and then convert the previously prepared "regular" table into a Node table using INSERT...SELECT:

CREATE TABLE [dbo].[Session](
	[Index] integer IDENTITY(1,1) NOT NULL,
	[SessionID] integer NULL,
	[Abstract] [nvarchar](max) NULL,
	[SessionLevel] [int] NULL,
	[Speaker1] [nvarchar](100) NULL,
	[Speaker2] [nvarchar](100) NULL,
	[Speaker3] [nvarchar](100) NULL,
	[Title] [nvarchar](4000) NULL,
	[Track] [nvarchar](50) NULL
) AS NODE
GO

INSERT INTO Session (SessionID, Abstract, SessionLevel, Speaker1, Speaker2, Speaker3, Title, Track)
SELECT SessionID, Abstract, SessionLevel, Speaker1, Speaker2, Speaker3, Title, Track FROM dbo.PASS2017Sessions;
GO

We then proceed to create an empty edge table:

CREATE TABLE SimilarSessions
(
SimilarityMetric float
)
AS EDGE

This table is implicitly going to hold the "from" and "to" nodes in the graph and additionally it holds a similarity measure value for that relationship.

Using Python (NLTK and Gensim) to compute session similarity

Now that we have the tables in place, let's dig in and do the heavy lifting of text processing and analytics. The entire code that does the processing is given below, but since it is complex, let me give you a high-level flow before presenting it. Here is what is happening in the code below:

  • The session data (titles, session ID, abstract, track and an incremental index number) are provided to the Python script from a T-SQL query (that query is at the very end of this code block)
  • Then NLTK is used to break down the title and abstract into words (a process called tokenization)
  • We then stem and remove stop words from the tokenized words
  • We then proceed to build a corpus of these words, taking only those words which have occurred at least 3 times
  • Then we proceed to use TF-IDF to prepare a document matrix of these words and their frequencies in various documents
  • Then, Gensim is used to compute "Matrix Similarity" which is basically a matrix of documents and how similar they are to each other.
  • Once the similarity matrix is built up, we then proceed to build the output result set which maps back the SessionId values and their similarity measures
  • In the above step, one interesting thing to note is that in SQL, graphs are directed. So we have to exclude situations where Session1 'is similar to' Session2 AND Session2 'is similar to' Session1.
  • Once this list of unique edges is built up, it is written back into SQL as edges in the SimilarSessions graph (edge) table by using a function called rxDataStep.

A small but important nuance here with rxDataStep and specifically SQL Graph edge tables, is that you need to exactly match the $from_id and $to_id column names with the actual values (including the GUID portions) that are in the edge table. Alternatively, you can avoid using rxDataStep and insert the output of the sp_execute_external_script into a temporary table / table variable and then JOIN back to the node tables to finally insert into the graph edge table. We will look at improving this experience going forward.

Take your time to understand the code! Here we go:

exec sp_execute_external_script @language = N'Python',
@script = N'
####
from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
from nltk.stem.snowball import SnowballStemmer
from gensim import corpora, models, similarities
import gensim
import pandas as pd
from revoscalepy import RxSqlServerData, rx_data_step

# read data back in
pdDocuments = InputDataSet

tokenizer = RegexpTokenizer(r"\w+")
en_stop = get_stop_words("en")
stemmer = SnowballStemmer("english", ignore_stopwords=True)

def iter_documents(pdSeries):
    """Iterate over all documents, yielding a document (=list of utf8 tokens) at a time."""
    for (idx, docrow) in pdSeries.iterrows():
        concatsessionattributes = list()
        concatsessionattributes.append(docrow.Title.lower())
        concatsessionattributes.append(docrow.Abstract.lower())

        concatsessionattributesstr = " ".join(concatsessionattributes)

        tokens = tokenizer.tokenize(concatsessionattributesstr)
        # Remove stop words from tokens
        stopped_tokens = [i for i in tokens if not i in en_stop]
        final = [stemmer.stem(word) for word in stopped_tokens]

        yield final

class MyCorpus(object):
    def __init__(self, pdSeriesInput):
        self.series = pdSeriesInput
        self.dictionary = gensim.corpora.Dictionary(iter_documents(self.series))
        self.dictionary.filter_extremes(no_below=3)
        self.dictionary.compactify()

    def __iter__(self):
        for tokens in iter_documents(self.series):
            yield self.dictionary.doc2bow(tokens)

corp1 = MyCorpus(pdDocuments)
tfidf = models.TfidfModel(corp1,id2word=corp1.dictionary, normalize=True)

train_corpus_tfidf = tfidf[corp1]
corpora.MmCorpus.serialize("train_ssd_corpus-tfidf.mm",train_corpus_tfidf)
train_corpus_tfidf = corpora.MmCorpus("train_ssd_corpus-tfidf.mm")

index = similarities.MatrixSimilarity(train_corpus_tfidf)

tfidf_sims  = index[train_corpus_tfidf]
# print (tfidf_sims)

similaritylist = []

def similarsessions(inputindex):
    print("Selected session: " + pdDocuments.loc[inputindex].Title)
    print()
    print("Most similar sessions are listed below:")
    print()
    topNmatches = tfidf_sims[inputindex].argsort()[-10:][::-1]
    for matchedsessindex in topNmatches:
        if (inputindex != matchedsessindex and round(tfidf_sims[inputindex][matchedsessindex] * 100, 2) > 20.0):
            rowdict = {}
            rowdict["OriginalSession"] = pdDocuments.loc[inputindex].SessionId
            rowdict["SimilarSession"] = pdDocuments.loc[matchedsessindex].SessionId
            rowdict["SimilarityMetric"] = round(tfidf_sims[inputindex][matchedsessindex] * 100, 2)

            # this graph effectively being a "Undirected Graph" we need to
            # only add a new row if there is no prior edge connecting these 2 sessions
            prioredgeexists = False

            for priorrow in similaritylist:
                # only add a new row if there is no prior edge connecting these 2 sessions
                if (priorrow["SimilarSession"] == rowdict["OriginalSession"] and priorrow["OriginalSession"] == rowdict["SimilarSession"]):
                    prioredgeexists = True

            if (not prioredgeexists):
                similaritylist.append(rowdict)

            print(str(matchedsessindex) + ": " + pdDocuments.loc[matchedsessindex]["Title"] + " ("  + str(round(tfidf_sims[inputindex][matchedsessindex] * 100, 2)) + "% similar)")

for sessid in range(len(pdDocuments)):
    similarsessions(sessid)

print(similaritylist.__len__())

finalresultDF = pd.DataFrame(similaritylist)

# rename the DF columns to suit graph column names
finalresultDF.rename(columns = {"OriginalSession":"$from_id_C19A274BF63B41359AD62328FD4E987D", "SimilarSession":"$to_id_464CF6F8A8A1406B914D18B5010D7CB1"}, inplace = True)

sqlDS=RxSqlServerData(connection_string = "Driver=ODBC Driver 13 for SQL Server;Server=.\SQL2017;Database=PASS-Demo;trusted_connection=YES"
, table="dbo.SimilarSessions")

rx_data_step(finalresultDF, output_file = sqlDS, append = ["rows"])
', @input_data_1 = N'SELECT CAST((ROW_NUMBER() OVER (ORDER BY (SELECT NULL))) - 1 AS INT) as RowIndex, Abstract, SessionLevel, Speaker1, Speaker2, Speaker3, Title, Track, $node_id AS SessionId FROM Session'

Once the above code is executed, the SimilarSessions table is populated with edges! Then we can query that table using regular T-SQL and the new MATCH predicate in SQL Graph. For example below we look at sessions similar to my colleague Denzil's session:

SELECT TS.SessionId, TS.Title, SimilarityMetric
FROM SimilarSessions SS, [Session] OS, [Session] TS
where MATCH (OS-(SS)->TS)
AND (OS.SessionId = 69503)
UNION ALL
SELECT OS.SessionId, OS.Title, SimilarityMetric
FROM SimilarSessions SS, [Session] OS, [Session] TS
where MATCH (OS-(SS)->TS)
AND (TS.SessionId = 69503)

Here is the output of that query:

Sessions similar to Denzil's session

I'm sure you will agree, looking at the above, that these are highly correlated sessions and would make a great recommendation for anyone already viewing Denzil's session!

Visualization - GraphML

Now, the last part is how to visualize the above graph in some capable tool. SQL does not ship with native visualization for graphs, and the main reason for this is that preferences on the visualization are hugely varied and we do not want to enforce anything specific from our side. Instead, we recommend using standard tools like d3.js, Gephi etc. In my case, I chose to use a very powerful tool called Cytoscape. Now, many of these tools understand a standard format for representing graphs, called GraphML. This format is XML and hence it is easy to use T-SQL to generate GraphML corresponding to our graph! Here's the code to do this:

CREATE OR ALTER PROCEDURE CreateGraphML
AS
BEGIN
    DECLARE @prolog AS NVARCHAR (MAX) = N'<?xml version=''1.0'' encoding=''utf-8''?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
  <key attr.name="weight" attr.type="long" for="edge" id="d3" />
  <key attr.name="SessionId" attr.type="string" for="node" id="d2" />
  <key attr.name="Track" attr.type="string" for="node" id="d1" />
  <key attr.name="Title" attr.type="string" for="node" id="d0" />
  <graph edgedefault="undirected">
';
    DECLARE @epilog AS NVARCHAR (MAX) = N'
  </graph>
</graphml>
';
    DECLARE @nodeXML AS NVARCHAR (MAX) = (SELECT   *
                                          FROM     (SELECT 1 AS Tag,
                                                           0 AS Parent,
                                                           S.SessionId AS [node!1!id],
                                                           NULL AS [data!2!!element],
                                                           NULL AS [data!2!key]
                                                    FROM   dbo.[Session] AS S
                                                    UNION ALL
                                                    SELECT 2 AS Tag,
                                                           1 AS Parent,
                                                           S.SessionId,
                                                           CONCAT(S.Title, CHAR(13), CHAR(10), CONCAT('(', S.Speaker1, IIF (S.Speaker2 IS NULL, '', CONCAT(',', Speaker2)), IIF (S.Speaker3 IS NULL, '', CONCAT(',', Speaker3)), ')'), ' [', ((SELECT COUNT(*)
                                                                                                                                                                                                                                               FROM   SimilarSessions AS SS
                                                                                                                                                                                                                                               WHERE  SS.$FROM_ID = S.$NODE_ID) + (SELECT COUNT(*)
                                                                                                                                                                                                                                                                                   FROM   SimilarSessions AS SS
                                                                                                                                                                                                                                                                                   WHERE  SS.$TO_ID = S.$NODE_ID)), ' connections]'),
                                                           'd0'
                                                    FROM   dbo.[Session] AS S
                                                    UNION ALL
                                                    SELECT 2 AS Tag,
                                                           1 AS Parent,
                                                           S.SessionId,
                                                           S.Track,
                                                           'd1'
                                                    FROM   dbo.[Session] AS S
                                                    UNION ALL
                                                    SELECT 2 AS Tag,
                                                           1 AS Parent,
                                                           S.SessionId,
                                                           CAST (S.SessionId AS NVARCHAR (200)),
                                                           'd2'
                                                    FROM   dbo.[Session] AS S) AS InnerTable
                                          ORDER BY [node!1!id], [data!2!!element]
                                          FOR      XML EXPLICIT);
    DECLARE @edgeXML AS NVARCHAR (MAX);
    WITH   Edges
    AS     (SELECT OS.SessionId AS source,
                   TS.SessionId AS target,
                   CAST (SS.SimilarityMetric AS INT) AS data
            FROM   SimilarSessions AS SS, [Session] AS OS, [Session] AS TS
            WHERE  MATCH(OS-(SS)->TS))
    SELECT @edgeXML = (SELECT   *
                       FROM     (SELECT 1 AS Tag,
                                        0 AS Parent,
                                        source AS [edge!1!source],
                                        target AS [edge!1!target],
                                        NULL AS [data!2!!element],
                                        NULL AS [data!2!key]
                                 FROM   Edges
                                 UNION ALL
                                 SELECT 2 AS Tag,
                                        1 AS Parent,
                                        source,
                                        target,
                                        data,
                                        'd3'
                                 FROM   Edges) AS InnerTable
                       ORDER BY [edge!1!source], [edge!1!target], [data!2!!element]
                       FOR      XML EXPLICIT);
    SELECT CONCAT(@prolog, @nodeXML, @edgeXML, @epilog);
END
GO

EXEC CreateGraphML;
GO

/* Run from CMD prompt:
bcp "EXEC CreateGraphML" queryout PASS2017.xml -T -S .\SQL2017 -dPASS-Demo -C65001 -c
*/

And that's it! When you run the BCP command line from CMD prompt, it will create a PASS2017.xml file, which is internally in the GraphML format. That's easily imported into Cytoscape or other such graph visualization tools. And that is how we created the fun visualization that you saw in the "sneak preview" blog post!

Disclaimer

This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment.  THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.  We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code. This posting is provided "AS IS" with no warranties, and confers no rights.

Service Fabric Customer Profile: Societe Generale and Qarnot Computing


Authored by Stéphane Bonniez from Societe Generale; Grégoire Sirou, Nicolas Duran, and Erik Ferrand from Qarnot Computing; in conjunction with Eric Grenon from Microsoft.

This article is part of a series about customers who’ve worked closely with Microsoft on Service Fabric over the last year. We look at why they chose Service Fabric and take a closer look at the design of their application.

In this installment, we profile Societe Generale and Qarnot Computing, their grid computing application, and how they designed the architecture.

Societe Generale provides financial services to 31 million individuals and professionals worldwide, placing innovation and digital technology at the heart of its activities. Its corporate and investment banking business, SG CIB, offers global access to markets through solutions for equities, fixed income and currencies, commodities, and alternative investments. Their global markets platform is recognized for its worldwide leadership in equity derivatives, structured products, euro fixed income markets, and cross-asset solutions.

Societe Generale partnered with Qarnot Computing, and the Microsoft Azure team to build a new financial simulation platform. Market activities require complex financial simulations that run on large-scale grid computing infrastructures. The new platform is flexible, scalable, environmentally responsible, and designed to support the growth of Societe Generale’s business in a rapidly changing economy.

Founded in Paris in 2010, Qarnot Computing is a pioneer in distributed cloud and smart-building technologies. They invented an innovative computing heater, the first of its kind, that uses the heat generated by the CPUs to heat buildings for free. Since 2014, more than 100 French homes, schools, hotels, and offices are heated with Qarnot Q.rads heaters. Their ingenuity has garnered several awards, including the 2015 Cloud Innovation World Cup Award.

Qarnot provides cloud computing through a distributed infrastructure where computing power is no longer deployed in concentrated datacenters but spread throughout the city in the form of heaters and boilers. Their remote cloud computing powers private and public companies, including major banks, 3D animation studios, and research labs. But when Societe Generale contacted Qarnot with their game-changing request for more compute power, Qarnot needed help from another cloud provider.

A financial simulation platform

Financial simulations are computationally intensive. They typically involve several thousand calculation tasks, taking from a few seconds to several minutes each to compute. They can also require hundreds of megabytes of data such as the historical values of equity shares over several years. But each task usually uses only a small portion of that data.

Simulation jobs are triggered by users at any time during working hours. Since Societe Generale has offices all around the world, that means at any time during the day, any day. Some of the simulations also have strong computation time constraints.

Societe Generale and Qarnot designed a solution that:

  • Exposes a simple REST API to client applications within Societe Generale.
  • Handles calculation jobs ranging from a few tasks to several thousands (from seconds to hours).
  • Provides caching of financial data for efficient dispatching of tasks.
  • Scales with the number of jobs and tasks.
  • Is available around the clock.

These achievements take place in a context where new software is delivered frequently, because simulation libraries evolve continuously. Service Fabric provides a store to manage versioning and serve as repository of all binaries for the microservices and related configuration files. In addition, infrastructure costs must be kept as low as possible, although thousands of CPUs may be required to perform some simulations.

To meet these requirements, the new platform provides the following key components:

  • An HTTPS web gateway exposing simulation services as a REST API.
  • A collection of microservices handling data caching and the orchestration of simulation jobs, from the dispatching of tasks to the retrieval of the results.
  • Several grid computing providers. Currently, Azure Batch and Qarnot Computing’s platform are targeted, but new providers can be added very easily, and internal dispatching guarantees that a job will always find room to run at the best possible price.

The web gateway and the microservices are native Service Fabric applications, all deployed in a scalable cluster in the Azure cloud.

“With Service Fabric, we were able to build a robust, stateful microservice architecture in no time, giving us more time to focus our efforts on our product.”

Nicolas Duran, Senior Software Engineer, Qarnot Computing

Figure 1. High-performance financial calculations are broken into discrete jobs and tasks by microservices running on Service Fabric, then distributed to available cloud computing environments.

Service Fabric implementation

The Service Fabric part of the application is written in C#, with a mix of services and actors, both stateless and stateful.

The web gateway is a stateless reliable service. As the unique entry point of the application, the service must be highly scalable so multiple client applications within Societe Generale can run simulations concurrently. When the load increases, it’s simple to add new nodes to the cluster, and Service Fabric automatically launches more gateways and balances the load across the cluster.

Calculation jobs and tasks are implemented with stateful reliable actors. For instance, each task that is dispatched to the Azure Batch or Qarnot Computing platforms is materialized as an actor. Actors are easy to write (a minimal sketch follows the list below), and they have several useful properties:

  • Their state is replicated on several instances across the cluster, so they are reliable, highly available, and persistent.
  • They are automatically distributed across the Service Fabric cluster, which provides scalability and load balancing.
  • If they have been inactive for some time, actors are automatically unloaded from memory to disk, then automatically rehydrated in memory when called again. This feature saves memory and helps scale to more actors (so more simulation jobs and tasks are supported).
  • Their threading model guarantees that their state will always be consistent.
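
A minimal sketch of such an actor, using the Service Fabric Reliable Actors API (the interface and members are illustrative assumptions, not Societe Generale's actual code):

using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

public interface ICalculationTaskActor : IActor
{
    Task SubmitAsync(string payload);
    Task<string> GetStatusAsync();
}

[StatePersistence(StatePersistence.Persisted)]
internal class CalculationTaskActor : Actor, ICalculationTaskActor
{
    public CalculationTaskActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    public Task SubmitAsync(string payload)
    {
        // The actor state is replicated across the cluster by Service Fabric.
        return this.StateManager.SetStateAsync("payload", payload);
    }

    public async Task<string> GetStatusAsync()
    {
        bool hasPayload = await this.StateManager.ContainsStateAsync("payload");
        return hasPayload ? "Submitted" : "Empty";
    }
}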

“With Service Fabric, developers can focus on business needs and rely on the platform for resiliency, load balancing, and scalability. We can deliver better software, and do it faster.”

Stéphane Bonniez, Project Manager, Societe Generale

Figure 2. The solution uses both the Service Fabric Reliable Services and Reliable Actors frameworks.

Advantages of Service Fabric

With a tight schedule, the joint Societe Generale and Qarnot team needed to ramp up fast. Service Fabric offered a complete toolset with its sophisticated runtime for building distributed microservices and its complete application management package for provisioning, deploying, monitoring, upgrading, and deleting deployed applications.

The fully managed platform as a service (PaaS) cluster in the Azure cloud leaves the deployment and patching burden of the underlying software to Azure.

Given the deadline, the following Service Fabric benefits proved especially helpful:

  • Speed of development: The powerful programming models provided by Service Fabric made it very easy for the developers to concentrate on business logic. Service Fabric managed the critical technical details—replication, resiliency, deployment systems, and more.
  • Self-healing: The calculation solution required high resilience and availability. Service Fabric’s ability to provide self-healing was a big benefit. For example, if a node or a process fails, the system automatically starts a new instance.
  • Reliability: Financial simulations involve many calculation tasks that depend on the same data, so an easy way to optimize the application was to hold a copy of this data. A client application can send all the data it will need, then the tasks it wants to compute. A cache like this wouldn’t be of much use if it had to be rebuilt each time a node in the cloud is lost. Fortunately, Service Fabric makes it easy to write and manage reliable services. The developers used the Reliable Collections to handle data replication, so application code doesn’t have to deal with data management. Developers simply specify how many times to replicate a state across nodes for reliability. In case of failure, Service Fabric automatically fails over to another consistent replica, and the calculation does not lose progress. This enables the client to avoid restarting the whole financial calculation.
  • Programming model: Societe Generale and Qarnot took advantage of the productive programming models in Service Fabric to develop key components of their solution, from the gateway stateless service to the stateful calculation task service to the reliable actors used for task distribution.
  • Scalability: Service Fabric provided the scale needed for the calculations, from one actor to thousands of actors. The developers saved countless hours—there was no need to manage the scale at the application level.
  • Application lifecycle: The team can easily deploy a new version of the application with no downtime or deploy multiple instances of the same application. The flexibility of the cloud and of Service Fabric development tools allowed the team to fully integrate build, test, and deployment into Societe Generale’s continuous integration pipeline. Code is built and packaged, then tested on a local Service Fabric cluster. If all goes well, it is automatically deployed to a development cluster in Azure. The same tests can run against the local cluster and the development cluster in Azure, which allows the team to spot bugs very early in the chain. When a version has been validated, it can be deployed to the production cluster the same way it was deployed to the development cluster, and the same tests can be used to check that deployment went well.

“With Service Fabric, Societe Generale and Qarnot were able to speed up debugging and scaling, thanks to the on-premises deployment and the perfect integration with the development tools.”

Grégoire Sirou, CTO, Qarnot Computing

Summary

The challenge for Qarnot Computing and Societe Generale was to deliver a secure, modular, scalable, and resilient application in a very short timeframe.

Service Fabric was the right choice for the job. Its powerful toolbox handled the mechanics, so the development team could concentrate on business logic. The result is an innovative and high-quality solution that required just the right amount of effort to develop.

Now that the simulation platform is in production, the team can focus on integrating new types of simulations and scaling the platform to handle them. The goal is to enable more client applications to move away from legacy systems. The current platform lets clients use the computational capacities they want. The next stage is centered on client management. Future versions will integrate per-client capacity management and billing.
