
[LUIS] Create your first LUIS app in 10 minutes


This quickstart helps you create your first Language Understanding Intelligent Service (LUIS) app in just a few minutes. When you are done, you will have a LUIS endpoint running in the cloud. In this example, you will create a travel app that helps you book flights and check the weather at your destination. The how-to articles refer to this app and build on it.

Before you begin

To use the Microsoft Cognitive Services APIs, you first need to create a Cognitive Services API account in the Azure dashboard.

If you don't have an Azure subscription, sign up for a free account first.

Create a new app

You can create and manage your apps on the My Apps page. You can reach this page by clicking My Apps in the top navigation bar of the LUIS web page.

  1. On the My Apps page, click New App.
  2. In the dialog box, name your app “TravelAgent”.
  3. Choose the language of your app (for TravelAgent, we choose English), and click Create.

>[!NOTE]
>The language cannot be changed once it is selected.

LUIS creates the TravelAgent app and opens its main page, as shown in the image below. Use the navigation links in the left panel to move through your app's pages to define data and work on your app.

Add intents

Your first task is to add intents. Intents are the intentions or requested actions conveyed by a user's utterances. They are the main building blocks of your app. You now need to define the intents your app needs to detect (for example, booking a flight). Click Intents in the side menu to go to the Intents page, and create your intents by clicking the Add Intent button.

For a more detailed tutorial on adding intents, see Add intents.

Add utterances

Now that you have defined your intents, you can start building examples so the machine-learned model can learn the different patterns (for example, "Book a flight from Seattle departing June 8th"). Select the intent you just added and save your utterances under that intent.

Add entities

Now that you have intents, you can move on to adding entities. Entities describe information related to the intents and are sometimes essential for your app to perform its task. An example for this app is the airline of the flight being booked. Add an entity named "Airline" to your TravelAgent app.

For more details about entities, see Add entities.

Label entities in utterances

Next, you need to label the entities in your examples to train LUIS to recognize them. In the utterances you added, highlight the entities.

Add prebuilt entities

It can be useful to add pre-existing entities, called prebuilt entities. These types of entities can be used directly and do not need to be labeled. Go to the Entities page to add the prebuilt entities relevant to your app. Add the ordinal and datetime prebuilt entities to your app.

Train your app

Select Train & Test in the left panel, then click Train Application to train your app based on the intents, utterances, and entities defined in the previous steps.

Test your app

You can test your app by typing a test utterance and pressing Enter. The results show the score associated with each intent. Check that the top-scoring intent matches the intent of each test utterance.

Publish your app

Select Publish App from the left menu, then click Publish.

Use your app

From the Publish App page, copy the endpoint URL and paste it into a browser. Append a query such as "book a flight to Boston" to the end of the URL and submit the request. The JSON containing the results should be displayed in the browser.
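The same request can also be issued from code. Below is a minimal C# sketch, assuming the endpoint URL copied from the Publish App page already contains the app ID and subscription key and accepts the utterance as the q query-string parameter; the URL shown is a placeholder.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class LuisQuickTest
{
    static async Task Main()
    {
        // Placeholder: paste the endpoint URL copied from the Publish App page here.
        var endpointUrl = "https://<region>.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>?subscription-key=<key>&q=";

        using (var client = new HttpClient())
        {
            // Append the test utterance as the query and read the JSON result.
            var json = await client.GetStringAsync(endpointUrl + Uri.EscapeDataString("book a flight to Boston"));
            Console.WriteLine(json); // the top-scoring intent and entities are returned as JSON
        }
    }
}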

Next steps

  • Try to improve your app's performance by continuing to add and label utterances.
  • Try adding Features to enrich your model and improve LUIS's performance. Features help your app recognize interchangeable words/phrases, as well as terms that are commonly used in your domain.

 

 

 

 

 

 

 

 

 

王項 Hsiang Wang

Specialties: C, C++

GitHub: Victerose
Translated from: Create your first LUIS app in ten minutes


[.NET Core] .NET Core: From Getting Started to Reverse Engineering


The .NET family

  • .NET comes in a variety of flavors; here is a brief introduction to them.

.NET Core

  • This is the main focus of this article.
  • Its key feature is cross-platform support, but it does not support GUI frameworks, so it is well suited for CLI tools, networking services, and other applications without a user interface.

.NET Framework

  • Mainly used to develop applications for Windows PCs. Its cross-platform capability is weaker, but it can be used to build all kinds of PC applications.
  • Supports multiple languages, such as C#, C++, VB…

Xamarin

  • Its main purpose is developing cross-platform native mobile applications: you can write the code once and run it on multiple mobile platforms.

Mono

  • A cross-platform C# runtime and compiler that existed before .NET Core.
  • The project also supports building desktop applications with Gtk.
  • It later became the predecessor of Xamarin.

.NET Standard

  • As mentioned above, .NET has many branches, but the APIs across these branches are inconsistent. This project lets developers use a common set of portable libraries so that OS-independent code can easily be shared.

Official website

Getting started

Install the development environment

Write your first .NET Core program

  • Open a terminal on your computer and enter the following commands
dotnet new console -o hwapp
cd hwapp
  • In that folder, add a file named Program.cs containing the following content
using System;

namespace hwapp
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
  • This program prints Hello World! on the screen
  • Run it with the following command, and you will see Hello World! printed on the screen
dotnet run

A simple exercise

  • Use .NET Core to print the 9×9 multiplication table
using System;

namespace hwapp
{
    class Program
    {
        static void Main(string[] args)
        {
            for(int i=1;i<=9;i++)
                for(int j=1;j<=9;j++)
                    Console.WriteLine(i+"*"+j+"="+i*j);
        }
    }
}

Cross-platform execution

  • You can build it with the following command, and then easily use it on other platforms
dotnet build
  • If you want to produce a Release build, use the following command instead
dotnet build -c Release
  • You will then see a line of output similar to XXX -> <Path>/XXX.dll; that path is your output file
  • Copy XXX.dll and XXX.deps.json from that path to another environment that has the .NET Core Runtime installed, and run
dotnet XXX.dll
  • You will find that it runs successfully in the other environment

Reverse engineering

Tools

Getting started

  • First, take the dll of the .NET Core application we wrote earlier and drag it into the dnSpy window. A tree view appears on the left side of the window; click through it and you can find the code of our application.
  • You will find that dnSpy has decompiled our program back into something very close to the original source.
  • Next, let's try to modify this program.
  • Right-click the method you want to modify and choose Edit Method (C#)...
  • A window appears in which you can change the original method into whatever you want.

  • Then press the Compile button
  • Then choose Save Module... from the File menu at the top, press OK, and the modification is complete
  • Run it with .NET Core again to see whether it really changed

A real-world example from a CTF

  • This time the target is the challenge Don't net, kids! from D-CTF Quals 2017
  • After downloading the zip provided with the challenge, you will find that it is actually a .NET Core web application
  • Drop the DCTFNetCoreWebApp.dll inside it into dnSpy for analysis
  • Looking at the contents of the DCTFNetCoreWebApp.Controllers namespace, we find that it is responsible for route handling; then look at the Post method inside it
  • We find the following code
string expr_69 = JsonConvert.SerializeObject(this._executor.Execute(request.Command));
  • Double-click Execute to jump into that method
  • Then we notice this line
bool flag = command.GetType().get_Name().Equals(typeof(AdminCommand).get_Name());
  • Thinking carefully about what that line means, we realize that if the Command's type is AdminCommand, then this Command is allowed to execute the commands contained in _adminActions
  • Our goal now becomes: how do we get the input Command deserialized as an AdminCommand?
  • At this point we recall a line of code in the Post method
Request request = JsonConvert.DeserializeObject<Request>(value, new JsonSerializerSettings
{
    TypeNameHandling = TypeNameHandling.Auto
});
  • Because TypeNameHandling is set to TypeNameHandling.Auto, we can craft a JSON payload containing "$type" so that it gets deserialized into the type we want
  • Finally, we use the following JSON to get the flag
{
    "Command": {
        "$type": "DCTFNetCoreWebApp.Models.AdminCommand, DCTFNetCoreWebApp, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
        "UserId": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
        "Action": "Readflag"
    }
}

Going further

  • Sometimes the code has been heavily obfuscated, and what dnSpy produces becomes very hard to read. In that case you need a lot of patience to carefully trace the data flow and program logic, separate the important parts of the code from the unimportant ones, understand the core of the program, and finally find your target and accomplish what you want to do.

Extras

  • Here are a few projects related to .NET that I find quite interesting

peachpie

SSH.NET

 

 

 

 

 

 

 

 

周逸 Yi Chou

Microsoft Student Partners in Taiwan
Specialties: C/C++, C#, PHP
GitHub:qazwsxedcrfvtg14

The Keys to Effective Data Science Projects – Part 10: Project Close-Out with the TDSP


Data Science projects have a lot in common with other IT projects in general, and with Business Intelligence in particular. There are differences, however, and I’ve covered those for you here in this series on The Keys to Effective Data Science Projects. One of those areas where general projects and Data Science projects are similar is in the project close-out – and not for a good reason.

IT projects can take a long time – weeks or months. And in that time, there’s a lot of planning, euphoria, then obstacles, politics, changes, unexpected time and money issues, and a lot more drama than should be necessary. By the time the project is done – success or failure – most people really want to be done with it. Projects start with a lot of fanfare, then slowly trail off into a lack of communication. But that’s a bad thing, and not in keeping with the “Science” part of Data Science.

One of the most important keys you can remember to follow is to properly learn from and document the project itself. I see so many projects repeat the same errors as earlier projects, or ignore the success factors of previous endeavors. It causes even more wasted time, money, and drama.

Happily, there’s a fix. Right at the start of your project, emphasize the last phase of the Team Data Science Process. Set aside time, budget, and personnel to document what worked, what didn’t, where things are, and why you did things the way you did.

The Team Data Science Process has a handy document template you can use – find it here: https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/project-execution

Now, let’s get started working on those projects. Remember these keys as you go:

What’s New for Dynamics 365 Resource Scheduling Optimization v1.5.17284.2 Release


Applies to: Dynamics 365 Organization July Update with Project Service Automation and/or Field Service Solution

With the goal of continuously improving quality, performance, and usability, and responding to customer feature feedback, we recently released the Resource Scheduling Optimization v1.5.17284.2 update. Below are the new features and capabilities introduced in this release.

NOTE: The Resource Scheduling Optimization v1.5.17284.2 update is backward compatible with Dynamics 365 v8.2 organizations and can be deployed on either Dynamics 365 v9.0 or v8.2 organizations, but the ‘Schedule Board Integration’ feature is only available on Dynamics 365 v9.0 organizations.

 

Contents

Required Configuration Post Resource Scheduling Optimization Deployment

How to Setup Demo Data for Resource Scheduling Optimization

New features for Resource Scheduling Optimization July 2017 Update Release

 

Required Configuration Post Resource Scheduling Optimization Deployment

  1. Go to ‘Resource Scheduling Optimization’ -> ‘Administration’-> ‘Resource Scheduling Parameters’, set ‘Enable Resource Scheduling Optimization’= ‘Yes’

NOTE: Only users with the ‘System Administrator’ role have permission to enable this

Set Enable Resource Scheduling Optimization to Yes

  2. Go to ‘Settings’ -> ‘Security’ -> ‘Users’, navigate to the ‘Application Users’ view, and assign the ‘Field Service - Administrator’ security role to the ‘Resource Scheduling Optimization’ application user
  3. Go to ‘Settings’ -> ‘Security’ -> ‘Field Security Profiles’, open ‘Field Service – Administrator’, and add ‘Resource Scheduling Optimization’ to this Field Security Profile
  4. Go to ‘Resource Scheduling Optimization’ -> ‘Settings’ -> ‘Booking Statuses’ and configure the ‘Scheduling Method’ for each Booking Status

 

Configure scheduling method for booking status

 

How to Setup Demo Data for Resource Scheduling Optimization

Need some sample data to get familiar with resource scheduling optimization? Check this blog post

 

New features for Resource Scheduling Optimization July 2017 Update Release

 

Schedule Board Integration

NOTE: This feature is only available when you have a Dynamics 365 organization with version 9.0+. If you are still using an older Dynamics 365 organization with version 8.2, you won't see the schedule board integration feature.

 

Feature Overview: with this new Schedule Board integration capability, users can:

  • Better understand the optimization scope
  • View optimization results in a visual way
  • More easily analyze failed optimization requests
  • Create a new schedule on the fly

 

How to get there: Navigate to your Optimization Schedules, select your schedule, and click the ‘SCHEDULE BOARD’ button on the grid view, or open the Optimization Schedule record form and click the ‘SCHEDULE BOARD’ button on the form

 

Open schedule board

 

Feature Details: After clicking the ‘SCHEDULE BOARD’ button, you will see the view below
Schedule Board view

 

  • In the filter section, the system pre-populates the Territories that match the scope territory selection
  • The ‘Open and Eligible for Optimization’ tab shows all eligible unscheduled requirements that match the scope territory, requirement range, and requirement state settings
  • The ‘Eligible for Optimization’ tab shows all eligible unscheduled requirements, as well as any eligible bookings to be re-optimized, that match the scope territory, requirement range, and requirement state settings
  • The ‘Excluded from Optimization’ tab shows any eligible requirements or bookings that failed to be optimized for a specific reason (e.g., invalid longitude/latitude)
  • An icon and tooltip indicate resources that are not in the optimization scope

 

Icon and tooltip indicates resources not in optimization scope

 

  • A lock icon and tooltip indicate that a booking has been locked

 

Lock icon and tooltip indicate that booking has been locked

 

  • Yellow lines indicate the start and end time for optimization range

 

Yellow lines indicate the start and end time for optimization range

 

  • The From/To date and time match the time range defined on the optimization scope. Users can modify them and save the changes back to the original scope; if the same scope is referenced by multiple optimization schedules, the change will apply to all schedules that share that scope.

 

From and To date and time matches the time range defined on optimization scope

 

 

  • Select a goal and click the ‘Run Now’ button to trigger an optimization request on demand

 

Click Run Now to trigger an optimization request

 

  • The optimization request shows the status and details if any exception happens. Click to open a specific optimization request, and you can view booking details as well as analytic charts showing how many hours of travel time versus working hours were scheduled for this run

 

Optimization request shows the status and details of exception

 

 

Introduced Simulation status for Resource Scheduling Optimization

 

Feature Details: If an exception or error occurs while an optimization schedule is still running, users might see overlapping bookings on the schedule board, because some bookings created or updated by the latest run coexist with bookings from a previous run that were supposed to be deleted by the latest run but could not be deleted due to the exception.  To avoid this issue, we are making the optimization process atomic and transactional by introducing a Simulation status.  During the optimization process, create, update, and delete operations are visible to the user, but all new, updated, or to-be-deleted bookings are kept in a staging state called ‘Simulation’. Only if the whole optimization request completes correctly are these simulation bookings flipped into real bookings.  Before the optimization request completes, users can see Simulation-status (transparent) bookings moving around the schedule board; once the run completes, all simulation bookings are flipped into real bookings (solid blue color). If an exception happens and the optimization request fails, the simulation bookings remain in Simulation status for troubleshooting purposes, unless the user deletes them manually; otherwise a system job deletes them automatically every two weeks.

 

Simulation status for resource scheduling optimization

 

Resource Scheduling Optimization Deployment App Enhancement

More secure and reliable OAuth authentication for the Resource Scheduling Optimization deployment app, which also reduces the administrative tasks of maintaining Dynamics 365 user credentials

 

Resource Scheduling Optimization Deployment App Enhancement

 

 

Other Resource Scheduling Optimization Feature Enhancements

  • Show booking statuses in the schedule optimization scope so that users can easily identify if any booking status was accidentally set to the wrong value

 

Show booking statuses in the schedule optimization scope

 

  • Modify the status of the optimization schedule to indicate when the setup is not in sync

 

Modify the status of optimization schedule to indicate setup not in sync

 

  • Add a time zone setting to schedule filters so that users can easily configure their local time, which is processed from a UTC reference

 

Add time zone setting on schedule filters

 

 

  • Improve detection rules for invalid Bookable Resources
    1. A location-agnostic resource will not be scheduled by RSO as of today, even though Optimize Schedule = Yes

 

Location agnostic resource not be scheduled

 

 

Reason not being scheduled

 

  • Set default scheduling method based on metadata record
  • Display a message in the optimization request if a route calculation falls back to ‘as the crow flies’

 

Display message in optimization request

 

For more information:

 

Feifei Qiu

Program Manager

Dynamics 365, Field Project Service Team

Getting started with Java Azure Function Apps


Introduction

 

I had a great opportunity to work with a partner to pilot the use of Java Azure Functions for a Java-based PDF report generator. I'm more of a C#/.NET, C++, and Python dev, and my partner mostly builds on LAMP apart from the Java-based PDF generator, so day-to-day work for neither of us is Java development.

 

If this sounds like you and Java dev is not your core then read on as this post augments existing documentation for the preview of Java based Azure Functions and may save you time.

 

The beta inclusion of Java joins Azure Functions' existing support for JavaScript, C#, F#, Python, PHP, Bash, Batch, and PowerShell in the Azure Function App serverless model.

 

Resources to get started

 

The following are useful resources to get started.

 

  1. Announcing the preview of Java support for Azure Functions
  2. Azure Functions Java developer guide
  3. Create your first function with Java and Maven (Preview)
  4. Microsoft Azure Functions for Java (JavaOne Demo)

 

What to install

 

Everything listed here installs cross-platform – Windows, Linux, and macOS – and is free of charge.

 

  1. Dot Net Core 2.0
  2. Zulu 8.0 (openjdk version "1.8.0_144"). Don't be tempted as I was to install version 9. It will not work.
  3. Azure CLI 2.0
  4. Azure Functions Core 2.0 Tools (npm install -g azure-functions-core-tools@core)
    1. Installation on my Ubuntu 16.04 system failed. After much research I ran "npm config set unsafe-perm true", then was able to successfully install the Azure Function Core 2.0 Tools
  5. Apache Maven, version 3.0 or above
  6. Node.js, version 8.6 or higher.
  7. Optional
    1. Visual Studio Code and the Java Extension Pack extension
    2. Azure Storage Explorer
    3. Azure Service Bus Explorer
    4. Azure Storage Emulator

 

Environment Variable Setup

 

You need to add the following Environment Variables to your system

 

  1. A JAVA_HOME variable that points to your Zulu 8.0 installation
  2. Update your Path variable to include the directory of your Apache Maven installation, e.g. C:\Software\apache-maven-3.5.0\bin
  3. On Windows, press Start, type Environment, and run "Edit System Environment Variables"
  4. On Linux, edit the /etc/environment file, update PATH, add the JAVA_HOME variable, and reboot.

 

Creating your first Project

 

  1. Create your first HTTP Triggered Function by following the example at Create your first function with Java and Maven (Preview)
  2. Add another Azure Function trigger with the mvn azure-functions:add command.

 

Run the "mvn azure-functions:add" command from your functions directory.

 

 

Select the trigger type; in this case, 2 for a Storage Queue Trigger.

 

 

Open up your Java IDE of choice, in this case Visual Studio Code with the Java Extension Pack extension.

 

 

Add the StorageQueueConnectionString to the local.settings.json
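For reference, a minimal sketch of what the local.settings.json might look like; the setting name StorageQueueConnectionString and the connection strings below are placeholders and must match whatever your generated queue-trigger binding references:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "StorageQueueConnectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
  }
}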

 

 

Build the solution with mvn package. In this case I ran it from the Terminal window in Visual Studio Code.

 

 

Now start up the function locally by running

mvn azure-functions:run

 

 

 

Debugging

 

Follow notes in the Create your first function with Java and Maven (Preview) to debug from Visual Studio Code. You can step through your Java code, inspect the state of variables etc.

 

 

Deploying to Azure

 

Follow notes in the Create your first function with Java and Maven (Preview)

But in summary

  1. Package up the app with "mvn azure-functions:package"
  2. Authenticate with your Azure Subscription with "az login"
  3. Deploy the app with "mvn azure-functions:deploy"

 

Useful MVN (Maven) Commands

 

Build Maven Package

mvn package

Run maven Function App

mvn azure-functions:run

Package ready for deployment to Azure

mvn azure-functions:package

Deploy Package to Azure

az login

mvn azure-functions:deploy

Add new Azure Function App Trigger

mvn azure-functions:add

  • HttpTrigger
  • BlobTrigger
  • QueueTrigger
  • ServiceBusQueueTrigger
  • ServiceBusTopicTrigger
  • EventHubTrigger
  • TimerTrigger

 

ANNOUNCING: Dynamics 365 DevOps on GitHub


I’ve recently switched back to focusing on Dynamics 365 after some time focusing on Azure again.  Back to the future.  My team and I have been working on an effort to move all of our Dynamics 365 (and in some cases Azure) based demos, Proof of Concepts (POCs), samples, etc. into Visual Studio Team Services (VSTS), and “rub a little devops on them”.  In the process, we’ve learned a lot (and continue to learn) how to best apply build automation and deployment (amongst other DevOps concepts) to Dynamics 365 applications.  We’ve had a chance to use great community projects like https://github.com/WaelHamze/xrm-ci-framework which provide (amongst other things) Dynamics 365 VSTS tasks.  In the process, we’ve had the opportunity to contribute back to https://github.com/WaelHamze/xrm-ci-framework

What became apparent, very quickly, is that we needed to share what we learn with the rest of the community.  In an effort to do so, we started https://github.com/devkeydet/dyn365-ce-devops.  I welcome you to review what we’ve put together so far.  Even better provide us feedback.  Try it out and file bugs.  Suggest better ways to document what we’ve done.  Tell us how to improve the base templates.  Suggest additional templates / scenarios.  Really, tell us anything you think that will make what we’ve started better for others.  Please do so by submitting issues.  We will triage the issues and prioritize accordingly.

We’re intentionally *not* trying to boil the ocean here or over-engineer the idea.  We’re building small and evolving as the community of people who are interested in the project grows and provides feedback.  The project will head in the direction that the community takes it.  Thanks!

@devkeydet

Azure AD and Group-based authorization


"Hello World!"

In my previous post I talked about how to use Azure AD to secure an Asp.Net Core web API project. If we want to go further than just protect our web API, we can use groups to further customize the access. A typical example is to restrict the access only for users belonging to a specific group.
Each directory user can be part of one or more groups, so we can leverage this membership to allow or deny the access to our API based on the calling user attributes.
This is quite easy to implement, as Asp.Net Core uses the same authorization attribute we are used to:

[Authorize(Policy = "Admins")]
public IActionResult Get(int id)
{
   return Json(id);
}

As you can see we need to use the Policy property to specify the rules that apply to the decorated member. This obviously requires defining our access policies (the rules) during the startup phase:

services.AddAuthorization(options => {
   options.AddPolicy("Admins",
           policyBuilder => policyBuilder.RequireClaim("groups",
           "f761047b-2f49-4d8e-903c-1234567890cc"));
});

We simply check for the presence of a specific group in the claims set of the calling user. In other words, the user must belong to this particular group in order to gain access. Note that we need to use the GUID of the group and not the group name, as the access_token we receive from Azure AD uses a list of GUIDs to describe the user's membership.
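If access should be granted to members of any one of several groups, the same RequireClaim call accepts multiple allowed values. A minimal sketch, using hypothetical group IDs:

services.AddAuthorization(options =>
{
    // Hypothetical group object IDs: the policy is satisfied if the "groups" claim
    // contains either value, i.e. the user belongs to at least one of the groups.
    options.AddPolicy("Admins",
        policyBuilder => policyBuilder.RequireClaim("groups",
            "f761047b-2f49-4d8e-903c-1234567890cc",
            "00000000-0000-0000-0000-000000000000"));
});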

Because by default the list of groups the user belongs to is not sent by Azure AD, we need to manually edit the manifest:

"groupMembershipClaims":"SecurityGroup"

A small step for mankind but a giant leap for us! (aka NAV on Docker)


Some months ago, a colleague tried to convince me that Docker was the new black and I really had to look at that.

Today, we have started deploying Official Microsoft Docker Images to the Docker Hub - https://hub.docker.com/r/microsoft/dynamics-nav/ and over the next days, all CUs and all country versions for NAV 2016 and NAV 2017 will be published on Docker Hub (Please read the description at the Docker Hub site)

Going forward, we will release images on the Docker Hub for every public version we ship.

We will also be creating Docker Images for preview builds but access to these will be given through the various partner programs we have and will be through MS Collaborate.

So how did we get from A to B?

Denial

At first I was in denial.

What could this thing possibly do, which we couldn't do with our Azure VMs?

What is the difference really between Docker and a Virtual Machine?

Isn't it just a virtual machine in disguise? How do I connect to the Remote Desktop inside that Docker Image?

But usage scenarios kept coming up, which point towards Docker as being the solution and not the problem, so, I decided to have a look.

Partner collaboration

Not surprisingly, some of our partners had already started looking at Docker as well, and they had actually succeeded in running NAV on Docker. I set up a call with Tobias Fenster from Axians Infoma and Jakub Vañák from Marqués Olivia and listened to their ideas and recommendations on what a Microsoft-supported Docker image could do for our partners.

So, the journey started and I want to express my thanks to the help and support I got from Tobias and Jakub, but also from other partners, who along the way joined in and started testing the Docker images.

Hard work... - some stats

3 - the number of projects created for this journey:

6500 - the number of PowerShell lines in these three projects

2400 - the number of Docker images created in the "private" Docker registry: navdocker.azurecr.io

143 - the number of partners with access to the "private" docker registry who have provided feedback and recommendations (not counting NAV Developer Preview deployments)

4 - the number of sessions already held on NAV on Docker at Directions NA and Directions EMEA

90 - the number of minutes Tobias, Jakub and I will be talking about Docker at NAV Tech Days 2017 (friday 11:00)

1 - the number of partners who have expressed concerns about using Docker

740 - the number of NAV images which will be on the Docker Hub within the next week or so

The future

In the coming days, I will post at least one blog post every day on how to use the Docker images - right from the simplest usage scenarios to the more advanced ones - and I hope this will help partners who are coming on to Docker get a smooth onboarding process. (It hasn't all been fun and games.)

 

I'll be back - enjoy

Freddy Kristiansen
Technical Evangelist


Little's Law of Queuing Theory and How It Impacts Load Testers


Load testing is all about queues and servicing those queues. The main measurements in our tests map directly onto parts of the formula itself. For example: the response time of a test is equivalent to the service time of a queue, and load balancing across multiple servers is the same as queue concurrency. Even in how we design a test we see relationships to this simple theory.

I will walk you through a few examples and hopefully open up this idea for you to use in amazing ways.

How do you determine concurrency during a load test? (Customer question: What was the average number of concurrent requests during the test? What about for each server?)

Little's Law -> Adjusted to fit load testing terminology

L = λW

L is the average concurrency

λ is the request rate

W is the Average Response Time

For Example: We have a test that ran for 1 hour, the average response times were 1.2 seconds. The average requests sent to the server by the test per second was 16.

L = 16 * 1.2

L = 19.2

So the average number of concurrent requests being processed on the servers was 19.2.

Let’s say we have perfect load balancing across 5 servers.

M = number of servers/nodes

Concurrent requests per server/node = L/M

L/M = 19.2/5

So each server is processing ~3.84 concurrent requests on average.

This explains why, when we reach our maximum concurrency on a single process, response times begin to grow and throughput levels off. Thus, we need to optimize the service times, or increase the concurrency of processing. This can be achieved by either scaling out or up depending on how the application is architected.

Also, this is not all entirely true, there will be time spent in different parts of the system, some of which have little to no concurrency, two bits cannot exist concurrently on the same wire electrically, as of writing this article.

Can we achieve the scenario objectives given a known average response time hypothesis?

For tests that use “per user per hour pacing”, we can determine the minimum number of users necessary to achieve the target load given a known average response time under light load. You can obtain the average response times from previous tests or you can run a smoke test with light load to get it.

W = average response time in seconds for the test case you are calculating

G = throughput goal (test-case executions per hour)

There are 3600 seconds in an hour, so a single user can execute the test case at most 3600/W times per hour.

Number of users needed: U = ceil(G / (3600/W))

Let’s say we have an average response time of 3.6 seconds obtained from a shakeout test, and our goal is to run this test case 2600 times per hour. What would be the minimum number of users to achieve this?

U = ceil(2600/(3600/3.6))

U=3

Pacing = G / U

Pacing ≈ 866

So the scenario would be 3 users and the test case would be set to execute 866 times per user per hour. Personally I like to add 20% additional users to account for response time growth, just in case the system slows down as the load is increased. For example: I would run with 4 vUsers at 650 Tests per User per Hour, or 5 vUsers at 520 Tests per User per Hour.
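A small C# sketch that reproduces both calculations (the numbers mirror the examples above):

using System;

class LittleLawExample
{
    static void Main()
    {
        // Little's Law: L = lambda * W
        double requestRate = 16;          // lambda: requests per second
        double avgResponseSeconds = 1.2;  // W: average response time
        double concurrency = requestRate * avgResponseSeconds;
        Console.WriteLine($"Average concurrent requests: {concurrency}");   // 19.2
        Console.WriteLine($"Per server (5 nodes): {concurrency / 5}");      // ~3.84

        // Minimum users for "per user per hour" pacing
        double responseSeconds = 3.6;     // from a light-load shakeout test
        double goalPerHour = 2600;        // target test-case executions per hour
        double users = Math.Ceiling(goalPerHour / (3600 / responseSeconds));
        double pacing = Math.Floor(goalPerHour / users);                    // rounded down, as above
        Console.WriteLine($"Users: {users}, tests per user per hour: {pacing}"); // 3 and 866
    }
}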

There are endless possibilities for using this formula to understand and predict the behavior of a system. Post some examples of how you would use queuing theory in the comments section below.

Have a great day and happy testing!

Test with

Experiencing Data Access Issue in Azure Portal for Many Data Types – 10/27 – Resolved

Final Update: Friday, 27 October 2017 14:32 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/27, 02:04 UTC. Our logs show the incident started on 10/27, 12:55 UTC and that during the 25 minutes it took to resolve the issue, around 6% of customers experienced data access issues for data in West Europe, as well as latency outside of SLA for export data in the same region.
  • Root Cause: The failure was due to an issue with the back-end storage service in the West Europe region.
  • Incident Timeline:  0 Hours & 25 minutes - 10/27, 12:55 UTC through 10/27, 01:20 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Anmol

Posting messages to the Graph API from Microsoft Flow


I needed to POST a message to Microsoft's Graph API in order to find meeting slots in a calendar within my organisation. I wanted this wrapped in an API so that I could call it from Cortana. Read my previous post about how I got this working from Postman as this will cover off the Azure Active Directory pre-requisite. (https://blogs.msdn.microsoft.com/softwaresimian/2017/10/05/using-postman-to-call-the-graph-api-using-azure-active-directory-aad/)
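As a point of reference, the kind of call involved is sketched below in C#: a POST to the Outlook findmeetingtimes action using a bearer token acquired from Azure AD. The endpoint, token, and payload here are placeholders; use the exact request that already works for you in Postman.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class FindMeetingTimesSketch
{
    static async Task Main()
    {
        // Placeholders: reuse the token and body from the working Postman request.
        var accessToken = "<bearer token acquired via Azure AD>";
        var requestUrl  = "https://outlook.office.com/api/v2.0/me/findmeetingtimes";
        var requestBody = "{ \"Attendees\": [], \"MeetingDuration\": \"PT1H\" }";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);
            var response = await client.PostAsync(requestUrl,
                new StringContent(requestBody, Encoding.UTF8, "application/json"));
            Console.WriteLine(await response.Content.ReadAsStringAsync()); // suggested meeting slots as JSON
        }
    }
}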

I first looked at Flow as it had the following action; it looked great, as I could log into an Active Directory resource, call the API with delegated authority, and then expose that as an API for Cortana to call without having to authenticate.
Http with Azure AD

Create Custom Connector

However, this first-party action only allows you to use the GET verb whilst authenticating with an Azure Active Directory secured resource. Flow, though, allows you to create your own Custom Connectors. From the previous blog you would have a POST message to the Graph API from Postman; you can create a Postman Collection with your working method call and then export it as a V1 Postman Collection. You will need this to create a Custom Connector in Flow without a Swagger file.
Log in to Flow, then click the cog in the top right corner and select Custom Connector.

 

 

The Graph API does not provide a Swagger file for the APIs that are exposed; however, you can use a Postman Collection to do that for you.

 

 

 

 

 

 

Once you have imported your V1 Postman Collection into Flow, it will appear as generatedApiDefinition.swagger.json.

 

 


 

 

On the first tab called General, you will notice that it has pulled in the resource Url that was defined in your Postman Collection and populated this in the Host. Give your API an icon and a useful description if you want to change it from what is pulled in from Postman. Click Continue

 

 

 

 

 

 

 

 

The second tab is Security, where you will have to fill in your Application ID (Client id) from the previous blog and your secret. This means the Flow will run with the permissions defined in that application. You will also have to fill in the ResourceURL which is https://outlook.office.com, all the other values can be default.

The third tab is where we define the message that we are going to send, skip over this for the time being and go to the Test tab to check everything is working. It will ask you to "Create Connector", go ahead and do this.

After you have created the connector, it will have been registered in Flow. Now go back to the second tab, "Security"; you will notice that it has now supplied you with a RedirectURL. Copy this

And then go back into your "App Registration" in the Azure portal and allow your application to be called from this URL. The Azure Portal calls these "Reply URLs"

Go back to the last "Test" tab and click "+ New Connection". If everything is wired up correctly with Azure Active Directory, Office 365, and Flow, you should see your mailbox appear in the box above the "+ New Connection" button. If not, there is something wrong with your AAD Application.

Let's just send a payload to Office 365 and see if we get a response. For the time being use Raw Body and paste in the test message that you used in Postman to get it working. Hopefully you should get a valid response back. If you don't, then go back and check in Postman if everything is working. If you get a message back, then the Custom Connector is ready to be used in your Flow.

Here you can see that the Custom Connector is represented above with the dog icon. The trigger fires when a json message is sent to it; the second and third actions are there to transform the json into the correct shape. Here's a screenshot of the main parts of the flow.

So in summary you have seen how we can take a message for the Graph API and host it in Flow and post messages to the Graph API with delegated authority.

Finding free time in Office 365 via REST

Creating a custom service endpoint


Overview

The “Service endpoints - Customization” blog details how a custom endpoint type can be registered with TFS/VSTS. This blog explains how a new endpoint of that custom endpoint type can be created, taking the Azure Classic endpoint type as an example.

Azure Classic endpoint is an endpoint type contribution that is part of the Azure extension.

Once an endpoint type is installed on a TFS collection or VSTS account, the endpoint type can be queried from the collection using a REST API. Here’s how Azure Classic endpoint type can be queried:

https://<account-name>.visualstudio.com/_apis/distributedtask/serviceendpointtypes?type=azure

The response for the query is captured here.

Creating custom endpoint through UI

The endpoint creation UI is driven by the service endpoint type response obtained by querying the above REST API. For example, when an “Azure Classic” endpoint needs to be created, the following UI is presented upon clicking “New Service Endpoint” -> “Azure Classic” from the endpoint UI menu:

UI elements:

Authentication schemes

  1. The radio button group on the top indicates the authentication schemes supported by the endpoint type. Since Azure Classic endpoint supports Credentials & Azure Certificate authentication schemes, we see two options. UI uses the display names of the authentication scheme specified in the contribution:


"authenticationSchemes": [
{

"scheme": "UsernamePassword",
"displayName": "Credentials",
},
{

"scheme": "Certificate",
"displayName": "Certificate Based",
}

 

Connection name

Connection name is a mandatory input required for any endpoint type.

 

Endpoint data

Below are the additional data that are specific to Azure Classic endpoint:

  1. Environment is defined as an input with inputMode=combo & with a given set of values.

{
"id": "environment",
"name": "Environment",
"description": "Microsoft Azure Environment for the subscription",
"type": null,
"properties": null,
"inputMode": "combo",
"isConfidential": false,
"useInDefaultDescription": false,
"groupName": null,
"valueHint": null,
"validation": {
"dataType": "string",
"maxLength": 300
},
"values": {
"inputId": "environmentValues",
"defaultValue": "AzureCloud",
"possibleValues": [
{
"value": "AzureCloud",
"displayValue": "Azure Cloud"
},
{
"value": "AzureChinaCloud",
"displayValue": "Azure China Cloud"
},
{
"value": "AzureUSGovernment",
"displayValue": "Azure US Government"
},
{
"value": "AzureGermanCloud",
"displayValue": "Azure German Cloud"
}
]
  }
}

The combo control shows the display value for each item in the values list.

2. Subscription Id is defined as input with inputMode=textbox with validation that it should be a GUID:

{
"id": "subscriptionId",
"name": "Subscription Id",
"description": "Subscription Id from the <a href="https://go.microsoft.com/fwlink/?LinkID=312990" target=_blank>publish settings file</a>",
"type": null,
"properties": null,
"inputMode": "textBox",
"isConfidential": false,
"useInDefaultDescription": false,
"groupName": null,
"valueHint": null,
"validation": {
"dataType": "guid",
"isRequired": true,
"maxLength": 38
}
},

The description of the input is used for showing help information about the input. The description can be HTML markup.

3. Subscription name is a string with validation that it can be of maximum 255 characters length.

"id": "subscriptionName",
"name": "Subscription Name",
"description": "Subscription Name from the <a href="https://go.microsoft.com/fwlink/?LinkID=312990" target=_blank>publish settings file</a>",
"type": null,
"properties": null,
"inputMode": "textBox",
"isConfidential": false,
"useInDefaultDescription": false,
"groupName": null,
"valueHint": null,
"validation": {
"dataType": "string",
"isRequired": true,
"maxLength": 255
}
}

Authentication parameters

For “Credentials” authentication scheme, below are the parameters used:

  1. Username input corresponds to an input defined as part of the authentication scheme.

    "inputDescriptors": [
    {
    "id": "username",
    "name": "Username",
    "description": "Specify a work or school account (for example <b>@fabrikam.com</b>). Microsoft accounts (for example <b>@live</b> or <b>@hotmail</b>) are not supported. Not recommended if Multi-Factored Authentication is enabled.",
    "type": null,
    "properties": null,
    "inputMode": "textBox",
    "isConfidential": false,
    "useInDefaultDescription": false,
    "groupName": "AuthenticationParameter",
    "valueHint": null,
    "validation": {
    "dataType": "string",
    "isRequired": true,
    "maxLength": 300
    }
    },

Since “isConfidential” is set to “false”, the value of this input does not get masked in the UI.

2. Password input also corresponds to the input defined as part of the authentication scheme.

{
"id": "password",
"name": "Password",
"description": "Password for connecting to the endpoint",
"type": null,
"properties": null,
"inputMode": "passwordBox",
"isConfidential": true,
"useInDefaultDescription": false,
"groupName": "AuthenticationParameter",
"valueHint": null,
"validation": {
"dataType": "string",
"isRequired": true,
"maxLength": 300
}
}

Since “isConfidential” is set to “true” the value of this field gets masked in the UI.

In case “Certificate” authentication scheme is chosen, below are the inputs seen in the UI:

 

For “Certificate” authentication scheme, the only input required is the “Management Certificate”:

{
"id": "certificate",
"name": "Management Certificate",
"description": "Management Certificate from the <a href="https://go.microsoft.com/fwlink/?LinkID=312990" target=_blank>publish settings file</a>",
"type": null,
"properties": null,
"inputMode": "textArea",
"isConfidential": true,
"useInDefaultDescription": false,
"groupName": "AuthenticationParameter",
"valueHint": null,
"validation": {
"dataType": "string",
"isRequired": true
}
}

For the certificate, the inputMode is set to textArea to accommodate larger text to be entered.

Help link

The markdown specified in the endpoint type is used to populate the help link in the endpoint UI:

"helpMarkDown": "For certificate: download <a href="https://go.microsoft.com/fwlink/?LinkID=312990" target=_blank><b>publish settings file</b></a>. <a href="https://msdn.microsoft.com/Library/vs/alm/Release/author-release-definition/understanding-tasks#serviceconnections" target=_blank><b>Learn More</b></a>",

Icon

The icon for the service endpoint type specified in the contribution shows up next to the created endpoint in the list view on the left pane of endpoint UI.

"iconUrl": https://<account>.visualstudio.com/_apis/public/Extensions/ms.vss-services-azure/16.125.0.938080564/Assets/icons/azure-endpoint-icon.png

Verify connection

Once the details for the endpoint are entered, we allow verifying the details if the endpoint type supports a data source named “TestConnection”. In the case of Azure RM, here is what the data source looks like:

"dataSources": [
{
"name": "TestConnection",
"endpointUrl": "$(endpoint.url)/$(endpoint.subscriptionId)/services/WebSpaces?properties=georegions",
"resourceUrl": "",
"resultSelector": "xpath://Name",
"headers": []
},

Verify connection will succeed if the details entered are valid. In case any error is encountered when validating, the error will be displayed. But failure in verifying connection will not block creation of the endpoint.

Creating endpoint through REST API

If there is a need to create an endpoint of a given type directly using the API, it can be done by constructing an endpoint request body corresponding to the inputs specified by the endpoint type.

For example, here is the request/response for creating Azure Classic service endpoint type using Certificate authentication scheme:

REST API: https://<your-vsts-account-name>/<projectid>/_apis/distributedtask/serviceendpoints

Method: POST

Request body:

{
"id": “<<any GUID – this is ignored>>”,
"description": "",
"administratorsGroup": null,
"authorization": {
"parameters": {
"certificate": “<<management certificate>>”
},
"scheme": "Certificate"
},
"createdBy": null,
"data": {
"environment": "<<one of the valid Azure environments>>",
"subscriptionId": "<<valid subscription id>>",
"subscriptionName": "<<subscription name>>"
},
"name": "<<endpoint name>>",
"type": "azure",
"url": "https://management.core.windows.net/",
"readersGroup": null,
"groupScopeId": null,
"isReady": false,
"operationStatus": null
}

Response body:

{
"data": {
"environment": "<<environment name given above>>",
"subscriptionId": "<<subscription id given above>>",
"subscriptionName": "<<subscription name given above>>"
},
"id": "c0407398-23ea-465b-9857-a7647f924699",
"name": "<<endpoint name given above>>",
"type": "azure",
"url": "https://management.core.windows.net/",
"createdBy": {
"id": "<<identifier for identity>>",
"displayName": "<<display name for identity>>",
"uniqueName": "<<unique name for identity>>",
"url": "<<identity’s url>>",
"imageUrl": "<<identity’s image url>>"},
"description": "",
"authorization": {
"parameters": {
"certificate": null
},
"scheme": "Certificate"
},
"isReady": true
}
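As an illustration, the same request could be issued programmatically. A minimal C# sketch, assuming a personal access token with the appropriate scope; the URL mirrors the REST API shown above, and an api-version query parameter may need to be appended depending on your account:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CreateServiceEndpointSketch
{
    static async Task Main()
    {
        // Placeholders: account, project id, PAT, and the request body shown above.
        var url = "https://<your-vsts-account-name>/<projectid>/_apis/distributedtask/serviceendpoints";
        var pat = "<personal access token>";
        var requestJson = "<the request body shown above>";

        using (var client = new HttpClient())
        {
            // VSTS REST APIs accept a PAT via basic authentication with an empty user name.
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));
            var response = await client.PostAsync(url,
                new StringContent(requestJson, Encoding.UTF8, "application/json"));
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}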

Office 365 Message Center to Planner: PowerShell walk-though–Part 1


*** Update 10/28/2017 - made code correction mentioned below - setting and using an environment variable for my tenantId

$uri = "https://manage.office.com/api/v1.0/" + $env:tenantId + "/ServiceComms/Messages"
$messages = Invoke-WebRequest -Uri $uri -Method Get -Headers $headers -UseBasicParsing

***

I mentioned in my previous blog post - https://lwal.me/3n - that I’d walk through the PowerShell – so here it is, at least the first part.  Hopefully this will help answer questions like “What was he thinking of!” – and “Why code it like that?” and maybe the answer will be that I didn’t know any better – so happy for comments back on this – but there will sometimes be a valid reason for some strange choices.  I’ll just go through the two Azure Functions scripts (the first here where I read the Messages, and creating the tasks in Part 2) – but the logic is the same in the full PowerShell only version – just a bigger loop.

#Setup stuff for the O365 Management Communication API Calls

$password = $env:aad_password | ConvertTo-SecureString -AsPlainText -Force

$Credential = New-Object -typename System.Management.Automation.PSCredential -argumentlist $env:aad_username, $password

Import-Module "D:\home\site\wwwroot\ReadMessagesOnTimer\Microsoft.IdentityModel.Clients.ActiveDirectory.dll"

$adal = "D:\home\site\wwwroot\ReadMessagesOnTimer\Microsoft.IdentityModel.Clients.ActiveDirectory.dll"
[System.Reflection.Assembly]::LoadFrom($adal)

$resourceAppIdURI = "https://manage.office.com"

$authority = "https://login.windows.net/$env:aadtenant"

$authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList $authority
$uc = new-object Microsoft.IdentityModel.Clients.ActiveDirectory.UserCredential -ArgumentList $Credential.Username,$Credential.Password

$manageToken = $authContext.AcquireToken($resourceAppIdURI, $env:clientId,$uc)

The first few lines are setting up the Office 365 Management Communication API (Preview) connection.  Worth noting the ‘Preview’ there – as this is subject to change and might break at any point – so best keep an eye on it.  Once it is GA I’ll modify these scripts as necessary.  I’m storing the password as a variable in my Application Settings for the Function App hosting my functions – and these are accessed via the $env: prefix.  As I mentioned in the previous blog – I am the only person with access to my Azure subscription so I’ve stored it in App settings as plain text – but you might want to handle this more securely if you share subscriptions.  I’m then getting a credential object.  The dll for ADAL is also required – so it is uploaded to the Function, and the root directory for the functions is D:\home\site\wwwroot\<FunctionName>.

The endpoint I need to authenticate to and get my token is https://manage.office.com.  I also need to pass in my authority Url, and this is my tenant added to https://login.windows.net/.  Both Graph and the Manage API required App and User authentication – so this is why I need both the user credentials and the Application ID (clientId) – the latter is also stored in my environment variables for the Function App.

#Get the products we are interested in
$products = Get-Content 'D:\home\site\wwwroot\ReadMessagesOnTimer\products.json' | Out-String | ConvertFrom-Json

The next part gets my products from the json file – and I chose to use a single plan and then push into Buckets by product and make assignments by product.  You could easily add PlanId at each product level here – and write to more than one plan.  Adding a new product is as easy as creating a new Bucket, getting the Id and the Id of the person handling the messages for that product and extending the json file accordingly.  On next run it will populate the new bucket – if there are any messages.
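For reference, here is a guess at the shape of products.json based on the properties the script reads later ($product.product, $product.bucketId, $product.assignee); the product names, bucket IDs, and assignee IDs below are purely illustrative:

[
  {
    "product": "Planner",
    "bucketId": "<bucket id from the Planner plan>",
    "assignee": "<user id of the person handling these messages>"
  },
  {
    "product": "Project Online",
    "bucketId": "<bucket id>",
    "assignee": "<user id>"
  }
]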

$messages = Invoke-WebRequest -Uri "https://manage.office.com/api/v1.0/d740ddf6-53e6-4eb1-907e-34facc13f08b/ServiceComms/Messages" -Method Get -Headers $headers -UseBasicParsing
$messagesContent = $messages.Content | ConvertFrom-Json
$messageValue = $messagesContent.Value
ForEach($message in $messageValue){
If($message.MessageType -eq 'MessageCenter'){

I really should have taken that GUID and put it in a variable – or at least explained what it is.  That is the tenant identifier for my Office 365 tenant.  You can find it by going to the Admin Portal, then the Admin Center for Azure AD, then the Properties item under Manage – and the Directory ID is the GUID you are looking for.  I’ll revise the code with a $env: variable for this shortly.  The json returned is turned into a PowerShell object – which is an array containing all the messages – both SHD and Message Center.  I get the value from these messages into my messageValue array – then I can loop through all the individual messages, and am only interested in the ones of type ‘MessageCenter’.

ForEach($product in $products){
If($message.Title -match $product.product){
$task = @{}
$task.Add('id', $message.Id)
$task.Add('title',$message.Id + ' - ' + $message.Title)
$task.Add('categories', $message.ActionType + ', ' + $message.Classification + ', ' + $message.Category)
$task.Add('dueDate', $message.ActionRequiredByDate)
$task.Add('updated', $message.LastUpdatedTime)
$fullMessage = ''
ForEach($messagePart in $message.Messages){
$fullMessage += $messagePart.MessageText
}
$task.Add('description', $fullMessage)
$task.Add('reference', $message.ExternalLink)
$task.Add('product', $product.product)
$task.Add('bucketId', $product.bucketId)
$task.Add('assignee', $product.assignee)

The next section is looping through my products and matching product names to titles of the message center posts.  There are other fields returned that look more promising to use, but I found that they were not reliable as they were sometimes blank.  I have discussions started with the team to see if we can fix that from the message generation side.  I also chose to create multiple tasks if there were multiple products in the title.  It does look like the other potential fields I would prefer to use are also arrays – so multiple products should still be possible if I changed to WorkloadDisplayName or AffectedWorkloadDisplayName, or even AppliesTo.

Once I have a match I populate the Id, the title (with the Id prepended), then make a list of categories from the contents of ActionType, Classification and Category.  This may be another area where we can tighten up on the usage of these fields.  I set a dueDate if there is one and also get the lastUpdatedTime.  I’m not using that yet, but relying on updated titles for new postings.  Probably an area for improvement – but as there are not a huge number of records I wasn’t too bothered about trimming down the payload too much.

For the actual message this can be in multiple parts – more often used for the Service Health Dashboard where we issue updates as the issue progresses – but thought it made sense to include that in my code too.  I add any ExternalLink items as reference – then finally add the bucketId and assignee.  Doing that here saves me re-reading the product.json in the other function for each task request.

#Using best practice async via queue storage

$storeAuthContext = New-AzureStorageContext -ConnectionString $env:AzureWebJobsStorage

$outQueue = Get-AzureStorageQueue –Name 'message-center-to-planner-tasks' -Context $storeAuthContext
if ($outQueue -eq $null) {
$outQueue = New-AzureStorageQueue –Name 'message-center-to-planner-tasks' -Context $storeAuthContext
}

# Create a new message using a constructor of the CloudQueueMessage class.
$queueMessage = New-Object `
-TypeName Microsoft.WindowsAzure.Storage.Queue.CloudQueueMessage `
-ArgumentList (ConvertTo-Json $task)

# Add a new message to the queue.
$outQueue.CloudQueue.AddMessage($queueMessage)
}
}
}
}

I did initially plan to just call my other function at this point but reading up on Function best practices it looked like I should use a Storage Queue, so finding a good reference - http://johnliu.net/blog/2017/6/azurefunctions-work-fan-out-with-azure-queue-in-powershell I took that direction.  Pretty simple – just got my storage context and then create my queue if it doesn’t already exist.  Then I can just convert my $task object to json and pass this in as my argument and this will add each of my tasks to the queue – ready to be picked up.  And I will pick this back up in Part 2!


Run PHP Webjob on Azure App Service (Windows)


When you deploy a webjob to run a PHP program, there are a few items to verify that help you understand the PHP runtime for webjobs.

1. How to create a PHP webjob
- Execute a .php file
- Create a batch file to execute the .php file
- Create a shell script to execute the .php file

2. The PHP runtime for a webjob is different from the PHP runtime for the webapp. To verify the PHP runtime, you can trigger a webjob that exports phpinfo, or run the following command from the Kudu 'Debug console':

   php -i > phpinfo.txt

3. From phpinfo.txt (if you run phpinfo from a webjob, check the output log):

- find the php.ini location
Loaded Configuration File => D:\Program Files (x86)\PHP\v7.1\php.ini

- find PHP_INI_SCAN_DIR
Scan this dir for additional .ini files => d:\home\site\ini
Additional .ini files parsed => d:\home\site\ini\settings.ini

You can define PHP_INI_SCAN_DIR in App Settings:

- find if PHP error log is enabled
log_errors => Off => Off

- find the PHP error log file location (you can modify this location in an additional .ini file defined in PHP_INI_SCAN_DIR, e.g. d:\home\site\ini\settings.ini)
error_log => D:\Windows\temp\php71_errors.log => d:\Windows\temp\php71_errors.log

- Enable PHP extensions
Check from Kudu: the default installed PHP extensions are listed in D:\Program Files (x86)\PHP\v7.x\ext
If you use PHP 7.x 64-bit, check D:\Program Files\PHP\v7.x\ext
To enable an extension in this list, add it to the additional .ini file, for example,

extension=php_ldap.dll

- Install PHP extensions
If the PHP extension is not available in the default extension list, download the matching version. For example, you can put it in d:\home\site\ext, then add the extension in the additional .ini file, e.g.

extension="D:\home\site\ext\php_redis.dll"

Converting PCL (Portable Class Libraries) to .NET Standard Class Libraries


In Part 1 of this 3 part series, App Dev Manager, Herald Gjura discusses converting PCL (Portable Class Libraries) to .NET Standard Class Libraries.


Overview

I have been working for this client for quite a few years and had advised them to break down some of their key business functionality and features and distribute them as NuGet packages.

With the introduction of the mobile apps, most of these packages were re-packaged again as PCLs to be used in Xamarin cross platform applications.

During this process of continuous refactoring, things evolved again. There was a migration from TFS to VSTS and rewriting of the build and release processes.

PCLs were great as a solution to the problem they were trying to solve; however, they were cumbersome and prone to all sorts of issues and inconsistencies. I always thought of them as a temporary bridge to something that was coming later. Eventually PCLs and their technology were marked for extinction, and the natural next step is now to migrate toward the .NET Standard model.

The purpose of this blog is to document this last refactoring effort in order to help anyone that is going through the same process.

Prerequisites

One thing I did from the beginning is to migrate to Visual Studio 2017.

I have seen some writing out there, documenting part of what is to follow being done in older versions of VS, but it will get increasingly complicated very fast, to the point that I would question the added complexity and effort vs just migrating to VS 2017 first.

If for any reason you cannot migrate to VS 2017, I suggest you wait and live with the PCLs as they are, until you are able to make the migration.

From PCL to .NET Standard Core Libraries

My first instinct when I started this process was to just simply upgrade the PCL to use the .NetStandard by changing the target framework, but I was wrong.

With Visual Studio 2017, the tooling and build functionality for packages has changed. Many new and welcomed features have been introduced that makes this process much easier.

So the right approach is to recreate the project as a .NET Standard Class Library. Here is what I did and the steps I followed.

1) Go to the original folder of your PCL project. (Make a zip backup of the project, in case something goes terribly wrong and you need to restore.) Navigate to where the project files are and delete all project-related files and folders (see image below for how they look for my package library solution and project). Do NOT remove any of the .git folders or the .gitattributes and .gitignore files.

clip_image002

2) Open Visual Studio 2017 and create a new solution/project by choosing the Class Library (.NET Standard) project template (see image on how this looks for me and where to find it).

a. In name type the name of the old PCL project (unless you really intend to give the package a new name).

b. Keep unchecked the Create directory for solution

c. Keep unchecked the Create new Git repository

d. In Location make sure you place the new solution files at the same place where the old PCL solution files where.

clip_image004

3) Open the newly created solution and follow some of these steps:

a. Delete the Class1.cs that was created.

b. In Solution Explorer, click Show All Files and add any new files and folders that are there from the older PCL library.

c. Note that if you were using any nuspec files or command line file to create the package you will not need them anymore. Keep them for now to extract information, but be ready to remove them later.

d. Make sure that you are targeting .NET Standard 1.6 as a target framework. Some of the packages you were previously using may only be available for this target only.

e. Restore the NuGet packages you were using in the old PCL. Some refactoring and manual debugging may need to take place here to get everything right.

f. Rebuild the solution.

Note that some older public NuGet packages may not have been updated to support .NET Standard. At this point you should judge the situation yourself on a case-by-case basis and decide whether to proceed further or roll back to where everything was.

In my case, for a large number of PCL libraries, the solution builds just fine at this point without any additional refactoring effort. Some more optional steps follow:

4) Open the .csproj file in a text editor (Notepad or Notepad++), and we will take a look at some of the changes to make in there. You will see that the format of the .csproj file has changed significantly.

a. If you want your new .NET Standard library to be available and compatible with other standards and frameworks, make the following change: change the <TargetFramework> tag to <TargetFrameworks> (note it is now plural) and set the content to net462;netstandard1.6. This will make the library compatible and available for both .NET 4.6.2 and .NET Standard 1.6.

[Image: the edited .csproj showing the TargetFrameworks element with net462;netstandard1.6]

b. You should also go ahead and move the package metadata from the nuspec file into the .csproj file, along the lines of the sketch below. You may delete the nuspec file once that is done.
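
For reference, here is a minimal sketch (not my actual project file) of what the edited .csproj might look like after both changes; the package id, version, authors, and description below are placeholder values to be replaced with the metadata from your old nuspec file:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <!-- Multi-target .NET 4.6.2 and .NET Standard 1.6 (note the plural element name) -->
    <TargetFrameworks>net462;netstandard1.6</TargetFrameworks>

    <!-- Package metadata moved over from the old .nuspec file (placeholder values) -->
    <PackageId>MyCompany.MyPackage</PackageId>
    <Version>2.0.0</Version>
    <Authors>My Team</Authors>
    <Description>Shared business functionality previously shipped as a PCL.</Description>
  </PropertyGroup>

</Project>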

5) If you have any test projects that test the functionality of the package, they will not work anymore with the .NET Standard packages. You will need a .NET Core test project for that.

a. Go ahead and create a new .NET Core Test Project

[Image: the New Project dialog with the .NET Core unit test project template selected]

b. Cut and paste all the old test files into the new .NET Core project.

c. Compile the test project and fix any issues with the tests.
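
If you started from the default MSTest-based .NET Core test template, the moved tests end up looking roughly like the sketch below; the class and assertion here are placeholders, and the point is only the overall shape of a test in the new project:

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyPackage.Tests
{
    // Placeholder test class illustrating the shape of an MSTest test in the new .NET Core test project.
    [TestClass]
    public class SmokeTests
    {
        [TestMethod]
        public void TestProjectRuns()
        {
            // Replace with the real tests moved over from the old test project.
            Assert.AreEqual(4, 2 + 2);
        }
    }
}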

At this point, all the work we had to do in Visual Studio and with the code is complete. We should now look into the deployment and release tasks in VSTS.

Coming soon - Part 2: Upgrading the Continuous Delivery and Build/Release pipeline in VSTS


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

C++ coroutine tutorial


I’ve been experimenting with coroutines recently and I found that information on C++ coroutines is very difficult to find. I’m planning to write a series of C++ coroutine blog posts on how to use C++ coroutines, how they work, and how to write your own library that uses C++ coroutines. My last post, Basic Concepts, was probably a bit too high-level and not really meant for people who are new to C++ coroutines. So I’m going to start over with a simple C++ coroutine tutorial instead.

 

Enough talking. Show me the code

OK. I’m not going to bore you with the technical details (any more), and let’s jump straight into some sample code!

future<int> async_fib(int n)
{
    if (n <= 2)
        co_return 1;

    int a = 1;
    int b = 1;

    // iterate computing fib(n)
    for (int i = 0; i < n - 2; ++i)
    {
        int c = co_await async_add(a, b);
        a = b;
        b = c;
    }

    co_return b;
}

As you can see, this code simply calculates Fibonacci numbers, and not in a very good way either (again, coroutine != performance). But it’s a good example to show some basic concepts:

  • A C++ coroutine (such as async_fib) returns a coroutine type.

In this case, we are returning std::future<T>. This is a standard C++ library type. Unfortunately, the default implementation in most compilers doesn’t support using future as a coroutine type. VC++ has its own extension that adds coroutine support to std::future. For the purpose of showing what a coroutine is, we are going to assume `std::future` has that support. To run this code, you'll need VC++.

So what is a coroutine type anyway? It is a type that is aware of coroutines and implements a bunch of contracts required by C++ compilers. Most of these contracts are (synchronous) callbacks that the compiler will call when it’s about to suspend, to resume, to retrieve the return value, to record an exception, etc. Again, we’ll talk about the details in future posts.

  • C++ coroutines use the co_await / co_return operators

The co_await operator means “fire this async operation, suspend my code (if necessary), and resume execution with the return value”. So in this case, when calling co_await async_add(a, b), it’ll let the “expensive” add operation happen in another thread, suspend execution, resume the c = assignment with the return value, and then proceed with the next statement. The async operation itself needs to return an awaitable expression. But for now, let’s simplify this to say it has to return a coroutine type. Not quite correct, but for the purpose of this tutorial, it is good enough at the moment.

The co_return operator simply returns a value from the coroutine, just like a return from any other function. Note that for coroutines, you typically return the value type T of the coroutine type. In this case, the function signature returns future<int>, so you need to return an int. std::future<int> here means: I promise I’ll give you an int value in the future when I’m done.

I get it now, but how do I implement an async operation that returns a coroutine type?

async_add is a good example:

future<int> async_add(int a, int b)
{
    auto fut = std::async([=]() {
        int c = a + b;
        return c;
    });

    return fut;
}

  • async_add returns a future<int> (note that this is different from the coroutine earlier - here the function itself returns the future<int> object, rather than co_returning an int)

This is effectively saying: I’m returning an object back to you, promising that I’ll give you an int when I’m done. That’s exactly what the co_await operator needs.

  • The real operation is done inside std::async, which conveniently returns a future that’ll resolve/complete when the async operation is finished. The async operation runs on another thread.

How do I run this thing?

I’m testing this with VC++ in VS 2017 for now. In order to compile the code, you need to pass /await as a compiler switch:

[Image: adding /await to the project's C/C++ command-line options in Visual Studio]
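
If you prefer building from a developer command prompt instead of the IDE, a plain command line along these lines should also work (the file name is simply whatever you saved the sample as):

cl /await /EHsc async_fib.cpp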

For your convenience, I’ve shared the entire cpp file here

If you want to use clang, you need to do a bit more work because, most importantly, future<T> isn’t a coroutine type there. I’ll have another post talking about how to run it in Clang 5.

What’s next

OK. I’m going to stop here for now. A few things I’m planning to cover in the future (no pun intended):

  • How to augment std::future<T> to be a coroutine type, and running coroutines in clang 5
  • What’s an awaitable, and how to wrap your own async functions using your own awaitable
  • How to write your own version of future<T> - let’s call it task<T>
  • Gory details - what the compiler codegen looks like, how suspension/resumption works, and some additional subtle issues to consider

Hope this helps.

You can also find this post in http://yizhang82.me/cpp-coroutines-async-fibonacci

 

Azure VMs – Active Directory members and getting time sync (ntp) right


I've had this question asked of me multiple times recently.
"If a VM in Azure is a member of a Domain, where does it get its time from, Azure (time.microsoft.com) or the NTP source set by AD?"
The answer is both, unless you make some changes.

Here are the specific time-setting recommendations, buried inside this post: https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/windows-time-service/accurate-time

Azure and Windows IaaS considerations

  • Azure Virtual Machine: Active Directory Domain Services
    If the Azure VM running Active Directory Domain Services is part of an existing on-premises Active Directory Forest, then TimeSync (VMIC) should be disabled. This is to allow all DCs in the Forest, both physical and virtual, to use a single time sync hierarchy. Refer to the best practice whitepaper “Running Domain Controllers in Hyper-V”.
  • Azure Virtual Machine: Domain-joined machine
    If you are hosting a machine which is domain joined to an existing Active Directory Forest, virtual or physical, the best practice is to disable TimeSync for the guest and ensure W32Time is configured to synchronize with its Domain Controller via configuring time for Type=NT5DS (the domain hierarchy).
  • Azure Virtual Machine: Standalone workgroup machine
    If the Azure VM is not joined to a domain, nor is it a Domain Controller, the recommendation is to keep the default time configuration and have the VM synchronize with the host.

So in short, for Active Directory VMs in Azure, update the registry path HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider and set the value for 'Enabled' to 0.
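
As a rough example, from an elevated command prompt on the domain-joined VM, the same change (plus forcing a resync against the domain hierarchy) looks something like this:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider" /v Enabled /t REG_DWORD /d 0 /f
w32tm /config /syncfromflags:domhier /update
w32tm /resync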

To do this via Group Policy the setting looks something like this -

My experiment and observation on Service Fabric Communication Stacks


This post is provided by Senior App Dev Manager Vishal Saroopchand, who asks the question, “How do you decide what Communication Stack to use in your Service Fabric applications?”


How do you decide what Communication Stack (Remoting, WCF, Custom Implementation) to use in your Service Fabric applications? Do you know how each communication stack performs? This post is to help shed some light on the performance characteristics that I observed with my recent experiment.

The questions I was attempting to answer

Which communication stack should I choose for inter-service communication in a low-latency workload? What is the performance footprint of the out-of-the-box (OOtB) communication stacks? How does my custom implementation perform against the OOtB options?

Experiment setup

In order to answer these questions, I decided to visualize the time it takes to move a message of variable size from a Web Proxy Gateway through a series of Stateful services and back to the Web API. Each Stateful service listens on WCF, Remoting, a custom WebSocket listener, and a custom PubSub listener using Service Bus Topics. I timestamp each hop and then plot the results in a box-plot chart.

[Image: experiment topology - messages flow from the Web API gateway through the chain of Stateful services and back]

Results

Here is a snapshot of one test. Please keep in mind, call durations will fluctuate per test, but generally the performance follows the same pattern.

[Image: box plot of call durations for the built-in and custom communication listeners (PubSub omitted)]

PubSub is not shown in the diagram above because its total duration was roughly 1.8 seconds. Here is a zoomed-out view showing all 4 Communication Listeners.

[Image: zoomed-out box plot including the Service Bus PubSub listener]

The bottom line is this: if you want simplicity and can live with roughly 30 ms or less per message sent between nodes, use the built-in communication stacks (Remoting or WCF). If you want better performance, consider building your own ICommunicationListener and handling your own data serialization (see the sketch below).
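
As a rough illustration (this is not the code from my experiment), here is a minimal skeleton of the ICommunicationListener contract from the Microsoft.ServiceFabric.Services package; the transport itself (WebSocket server, raw sockets, and so on) is left as placeholder comments:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

// Skeleton of a custom listener. The actual transport start/stop calls are
// placeholders - plug in your own WebSocket or socket server here.
internal sealed class CustomTransportListener : ICommunicationListener
{
    private readonly string _listenAddress;

    public CustomTransportListener(string listenAddress)
    {
        _listenAddress = listenAddress;
    }

    public Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        // Start your transport here, then return the address clients will resolve.
        // Do not assume this address is permanent - the service can move to another node at any time.
        return Task.FromResult(_listenAddress);
    }

    public Task CloseAsync(CancellationToken cancellationToken)
    {
        // Graceful shutdown: Service Fabric calls this when the replica is closed or moved.
        return Task.CompletedTask;
    }

    public void Abort()
    {
        // Ungraceful shutdown; must not block or throw.
    }
}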

In Conclusion

Carefully plan your communication stack. Spend some time upfront to understand the characteristics of each option, and try to improve on them by taking ownership of serialization and/or the communication stack itself.

Consider building your own for low-latency communication and using one of the built-in stacks as a fallback. For custom communication stacks, remember that you must handle scenarios such as churn in your cluster, where services move from one node to another. You should not assume an endpoint will remain stationary in your implementation. To test the soundness of your custom communication stack, use Chaos testing to simulate churn and see how your implementation performs.

Feel free to clone my experiment code here and include your own Communication stacks.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.
