
Build a simple app with the Translator Text API


 

Microsoft Japan Data Platform Tech Sales Team

倉重秀昭

 

In an earlier article we introduced the Translator Text API in Cognitive Services; this time I would like to walk through how to develop an application that uses the Translator API.

Cognitive Services is offered as a set of REST APIs, so an application only needs to call a REST API to use features such as translation and image recognition.

The Translator Text API specification is documented on the following page:

http://docs.microsofttranslator.com/text-translate.html#/

Clicking "default" near the bottom of that page lists the Translator Text API methods; for text-to-text translation, use "/Translate", the first method in the list.

 

There are two key points when using this method.

(1) Authorization

To use the Translator Text API, you must authorize in one of the following two ways:

a. Using an access token

b. Passing the Subscription Key in an HTTP header

In this post we use option b, passing the Subscription Key in an HTTP header, because it is the simplest to implement.

 

(2) Input parameters

When you authorize by passing the Subscription Key in an HTTP header, the parameters required when calling the Translator Text API are as follows (parameters are passed via GET):

Parameter   Description
text        The text to translate
from        Source language
to          Target language
category    Translation engine to use (blank: statistical machine translation / generalnn: neural network)
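For example, a request that translates "Hello" from English to Japanese with the neural engine (the Subscription Key goes in the request header, not in the URL) looks like this; it is the same URL the sample code below builds:

https://api.microsofttranslator.com/v2/http.svc/Translate?text=Hello&from=en&to=ja&category=generalnn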

 

 

Sample application

Next, let me introduce a simple sample application that calls the Translator Text API and walk through its code.

The sample application is simple: you type in the English text you want translated, and it returns the Japanese translation, as shown below.


 

 

The source code of this application is as follows:

using System;
using System.Net;
using System.IO;

namespace Azure_TranslatorTextAPISample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Source language
            string fromLanguage = "en";
            // Target language
            string toLanguage = "ja";
            // Engine to use: "" (= statistical machine translation) or "generalnn" (= neural network)
            string ModelType = "generalnn";
            // Translator Text API Subscription Key (get it from the Azure Portal, https://portal.azure.com)
            string ocpApimSubscriptionKey = "";

            // Prompt for the text to translate
            Console.WriteLine("Enter the English text you want to translate:");
            string inputText = Console.ReadLine();
            Console.WriteLine("");

            // Build the URI (parameters are sent via GET, so append them to the URL)
            string uri = "https://api.microsofttranslator.com/v2/http.svc/Translate?" +
                "text=" + Uri.EscapeDataString(inputText) +
                "&from=" + fromLanguage +
                "&to=" + toLanguage +
                "&category=" + ModelType;

            // Create the HttpWebRequest
            HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create(uri);

            // For authorization, embed the subscription key in an HTTP header
            httpWebRequest.Headers.Add("Ocp-Apim-Subscription-Key: " + ocpApimSubscriptionKey);

            WebResponse response = null;
            try
            {
                // Call the Translator Text API and read the result
                response = httpWebRequest.GetResponse();
                using (Stream stream = response.GetResponseStream())
                {
                    System.Runtime.Serialization.DataContractSerializer dcs =
                        new System.Runtime.Serialization.DataContractSerializer(Type.GetType("System.String"));
                    string translationText = (string)dcs.ReadObject(stream);

                    // Write the translation to the console
                    Console.WriteLine("Translation:");
                    Console.WriteLine(translationText);
                    Console.WriteLine("");
                }
            }
            catch (Exception ex)
            {
                // Error handling
                Console.WriteLine("An error occurred.");
                Console.WriteLine(ex.Message);
            }
            finally
            {
                if (response != null)
                {
                    response.Close();
                    response = null;
                }

                Console.WriteLine("Press any key to exit...");
                Console.ReadKey();
            }
        }
    }
}

 

The Visual Studio 2015 project for this sample application can be downloaded below; if you are interested, build it and try it out.

Azure_TranslatorTextAPISample.zip


How to check if Azure App Service is on 2016, what version of IIS


If you are interested to find out which version of IIS or OS your Azure App Service is running on, you can check it like this.

  • Check in KUDU
  • Capture a Fiddler trace

I wrote about KUDU here, just in case you do not know about it.

Check in KUDU

As shown in Figure 1, after logging into KUDU, select the Environment link and you will see the OS Version.


Figure 1, which version of OS / IIS am I running on my Azure App Service


Figure 2, which version of OS / IIS am I running on my Azure App Service

Capture a Fiddler trace

In the response header from the App Service you will see Server: in the Miscellaneous collection that identifies the version of IIS.  Knowing the version of IIS you can link it back to the OS version as IIS is released along with the OS.  I have a table that defines that here.
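If you don't have Fiddler at hand, any HTTP client that prints response headers shows the same value; for example, from a command prompt (the host name is a placeholder):

curl -I https://<your-app>.azurewebsites.net/

The response will include a line such as Server: Microsoft-IIS/10.0.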

As seen in Figure 3, as this is IIS 10, you can conclude you are running on Windows Server 2016.


Figure 3, which version of OS / IIS am I running on my Azure App Service

As seen in Figure 4, as you see IIS 8.0, you can conclude Windows Server 2012.


Figure 4, which version of OS / IIS am I running on my Azure App Service

Azure Log Analytics: Linux Groups



Earlier today I needed to look for some specific Linux machines, and a process name in Syslog.

If you happen to have a naming convention that enables a startswith or endswith or even a contains, then it's reasonably easy to find this info,

e.g.


However I wanted to make sure it was a Linux server and the Heartbeat type allows you to find by OSType.  e.g.


Combining the two is somewhat harder, and the method I went for was:

// First create a list of Linux machines that start with "Fnnnnn"
let myLinuxGrp = toscalar(Heartbeat
| where OSType == "Linux" and Computer startswith "F"
| summarize makeset(Computer));

I used a #let command to store the computer names in a variable called myLinuxGrp, which holds a list of the Linux servers that start with “F”.  I needed toscalar and makeset to make this work.  You can see the output of myLinuxGrp in this screen capture; it's essentially a comma-separated list.


I then added another #let to hold the process name (didn't really need to do this but…it looked neater IMO).  I used a #where to look into myLinuxGrp to see which Computers matched, and which had the process name in the Syslog in the last day. 

let myLinuxProcess = "sshd";
Syslog
| where TimeGenerated > ago(1d)
| where myLinuxGrp contains Computer and ProcessName == myLinuxProcess

So putting it all together we get:

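Written out as a single query (just the two let statements followed by the Syslog filter from above), it reads:

// First create a list of Linux machines that start with "Fnnnnn"
let myLinuxGrp = toscalar(Heartbeat
| where OSType == "Linux" and Computer startswith "F"
| summarize makeset(Computer));
let myLinuxProcess = "sshd";
Syslog
| where TimeGenerated > ago(1d)
| where myLinuxGrp contains Computer and ProcessName == myLinuxProcess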

Licensing Issues in multiple VSTS accounts – 12/07 – Mitigating


Update: Thursday, December 7th 2017 19:51 UTC

We have identified a fix and are in the process of rolling it out across the service.

Sincerely,
Sri Harsha


Initial Update: Thursday, December 7th 2017 18:51 UTC

We're investigating reports of licensing issues in the service. Users may notice the access levels of users in their accounts are being downgraded to 'Stakeholder'.

  • Next Update: Before Thursday, December 7th 2017 19:25 UTC

Sincerely,
Sri Harsha

Visual Studio Toolbox: Database DevOps with Redgate ReadyRoll


In this first of two episodes, I am joined by Steve Jones to discuss how you can use the Redgate Data Tools that are included in Visual Studio Enterprise 2017 to extend DevOps practices to SQL Server and Azure SQL databases.

In this episode, Steve demonstrates the migrations-based approach used by ReadyRoll, which generates numerically ordered SQL migration scripts that sit inside your project and take your schema from one version to the next. You can add them to version control, use them to build and release, and automate database and application deployments, all in one process.
Steve shows not only how to generate the migration scripts, but also how to incorporate them into a CI/CD pipeline using Visual Studio Team Services.

DevOps for Data Science – Continuous Integration


In the previous post in this series on DevOps for Data Science, I covered the first concept in a DevOps “Maturity Model” – a list of things you can do, in order, that will set you on the path to implementing DevOps in Data Science. The first thing you can do in your projects is to implement Infrastructure as Code (IaC).

The next level of maturity is Continuous Integration. This is something developers have done for quite some time. Many Data Science projects have this element as well, but perhaps without using that term. First, let’s define “classical” Continuous Integration, or CI:

Continuous Integration is the process of merging changes from each developer’s code quickly back into the release candidate code, rather than waiting until all changes are made in all developers’ code.

 We can make that a bit clearer with an example. Assume three developers (Jane, Bob, and Sandra) are working on a program that accepts a picture input from a user’s cell phone camera, runs an image recognition algorithm against it, and returns a label to the user – something like “That’s not your cat” or the like. The main part of the code is created, and “checked in” to a code repository system like git. This is called the “main branch” – it’s where all the work is complete as of a given point in time.  If Jane is using git, she uses a git command to copy *all* of the code in the main branch to her local system. She can then create a “branch” of her own, perhaps calling it “JaneBranch” that branch now has a copy of the code as well. She then begins to alter the code in JaneBranch to have a better User Interface for the program. At this point, the master branch has not changed, and Bob and Sandra have done the same on their systems, working away on anything they want to change.

After she makes any changes she likes and tests to make sure it works, she can request that her code be integrated back into the master branch, using something called a “Pull Request”. Her changes go to the master branch, and if Bob and Sandra have not pulled the changes down, they are still working on “old” code.
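In git terms (the repository URL and branch name here are only illustrative), Jane's loop looks something like this:

git clone https://example.com/team/cat-labeler.git    # copy all of the code in the main branch locally
git checkout -b JaneBranch                            # create her own branch
# ...edit the User Interface code and test it...
git add .
git commit -m "Improve the user interface"
git push origin JaneBranch                            # publish the branch, then open a Pull Request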

For the most part, this works fine – until you have lots of changes, some of which might conflict. For instance, if Jane alters the code in the User Interface that sends the image to the prediction algorithm, Bob might have a dependency on that call. That would cause a break. And then if Sandra’s code depended on things that both Bob and Jane are doing, that would also cause her code to break. If they all waited to merge to the main branch, all these errors (and probably more) would show up at once, like a pile-up on a freeway.

To avoid this issue, as soon as Jane’s code tests well, it should be pulled into the master branch, and Bob and Sandra should pull that new master branch back down, to test their code against. And as soon as Bob has made a change, he should also push that back into the master branch, as should Sandra. The key is that with little changes being merged back into the main branch, failures show up faster. If you fast-forward that to almost any functional change being made in any part of the code, you get Continuous Integration.

Doing that by hand would of course take a lot of coordination – so systems exist that make that a lot easier and more automatic, like Visual Studio Team Services and other packages.

What does this mean for the Data Science team? Actually, quite a lot. Depending on the type of algorithm, we have a lot of dependencies on the data we get for training or for the trained model. We expect certain parameters to pass as inputs, and we expect to return a certain parameter or parameters back, most of the time strongly typed and in both directions. If changes “break” our inputs or outputs, we need to know that as soon as possible. In some cases, it can be as dramatic as retraining the original model or even creating a new one using a different algorithm.

I’ve only intimated the main idea here – testing. It’s quite difficult to have Continuous Integration without a test happening automatically (Automated Testing) – but it can be done.  In practice most development shops will put Automated Testing together with Continuous Integration, but in a Data Science algorithm, it’s a bit tricky to create an automated test. I’ll cover that in the next installment of this series, but for now, try to get the Data Science coding process integrated into the organization’s code control system. Perhaps you’ve already done that – congratulations! If not, take some time, learn the system your organization uses, and get your R, Python or whatever other Data Science code you have checked in along with everyone else’s. Tell the team that runs the code control system that for now, you need a “Manual Test” step inserted after your code Integrates. They’ll not be too happy about that, but it is better than not testing at all. I’ll explain how to include as much automated testing as possible in our next article.

See you in the next installment on the DevOps for Data Science series.

Create a monitoring and notification mechanism for HADR worker thread Pool


A customer approached us asking for help with automating a monitoring process. The goal was to send some type of notification when a particular threshold was reached.

He had already discovered this blog but still needed some guidance on how to get notified.

Monitoring SQL Server 2012 AlwaysOn Availability Groups Worker Thread Consumption

We created this prototype for him to use. Feel free to modify it to fit your needs.

  1. The script below creates an XEvent session to track the hadr_thread_pool_worker_start event. There are multiple targets for an XEvent – file, histogram, ETW, ring buffer. I selected a ring buffer as the target because it lives in memory and does not require you to maintain a file – delete it, re-create it, etc.
  2. I also created a limit for how much data this buffer can hold, so that you don’t impact other things on the server – I set it to 500 KB.
  3. Next, the core of this functionality. It creates a table variable (again in memory), so there is no need for you to create a table on disk. However, if you decide to implement your solution with an on-disk table, you are welcome to do so.
  4. The script queries the XEvent ring buffer created above, parses the XML output it produces into a rowset (using the .nodes() function) and then parses each individual XML row using the .value() function.
  5. The data is filtered to rows where worker_start_success = 'false' and active_workers reaches 100 – that is what you see in the WHERE clause, but you can modify it to fit your needs. Since only rows with worker_start_success = 'false' are inserted into @TBL_VARIABLE_FILTERED, the rowset there will be small. Also, finding ANY rows in that table means you have encountered this condition; that’s where the IF EXISTS comes into play.
  6. If you find any rows in that table, I suggest you send yourself an email via sp_send_dbmail so you can take action.
  7. You can create a SQL Agent job that fires every so often and runs the ring buffer query and notification below.

CREATE EVENT SESSION HadrThreadPoolWorkerStart ON SERVER
ADD EVENT sqlserver.hadr_thread_pool_worker_start
ADD TARGET package0.ring_buffer (SET max_memory = 500) -- Units of KB
WITH ( startup_state = ON );

ALTER EVENT SESSION HadrThreadPoolWorkerStart ON SERVER STATE = START;

-- The actual monitoring code, which you can consider adding to a SQL Agent job
DECLARE @TBL_VARIABLE AS TABLE (time_stamp varchar(32), active_workers int, idle_workers int, worker_limit int, worker_start_success varchar(5));
DECLARE @TBL_VARIABLE_FILTERED AS TABLE (time_stamp varchar(32), active_workers int, idle_workers int, worker_limit int, worker_start_success varchar(5));

;WITH XE_Hadr_ThrdPool AS (
  SELECT execution_count, CAST(target_data AS XML) AS [target_data_XML]
  FROM sys.dm_xe_session_targets
  WHERE event_session_address IN ( SELECT address FROM sys.dm_xe_sessions WHERE name = 'HadrThreadPoolWorkerStart' )
)
-- Just put everything in the table variable. It is faster to filter tabular results than to filter XML data via XML parsing
INSERT INTO @TBL_VARIABLE (time_stamp, worker_limit, idle_workers, active_workers, worker_start_success)
SELECT TOP 20
  T.xml_data.query('.').value('(/event/@timestamp)[1]', 'varchar(32)') AS time_stamp,
  T.xml_data.query('.').value('(/event/data/value)[1]', 'int') AS worker_limit,
  T.xml_data.query('.').value('(/event/data/value)[2]', 'int') AS idle_workers,
  T.xml_data.query('.').value('(/event/data/value)[3]', 'int') AS active_workers,
  T.xml_data.query('.').value('(/event/data/value)[4]', 'varchar(5)') AS worker_start_success
FROM XE_Hadr_ThrdPool
CROSS APPLY [target_data_XML].nodes('RingBufferTarget/event') AS T(xml_data);

IF EXISTS (SELECT TOP 1 * FROM @TBL_VARIABLE WHERE worker_start_success = 'false' AND active_workers > 100)
BEGIN
  INSERT INTO @TBL_VARIABLE_FILTERED
  SELECT * FROM @TBL_VARIABLE
  WHERE worker_start_success = 'false'
  ORDER BY time_stamp DESC;
END

IF EXISTS (SELECT TOP 1 * FROM @TBL_VARIABLE_FILTERED)
   EXEC sp_send_dbmail ...  -- Configure DB Mail here to get notified
GO

ALTER EVENT SESSION HadrThreadPoolWorkerStart ON SERVER STATE = STOP;
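The sp_send_dbmail call above is intentionally left open; filled in, it could look something like this (profile name, recipients, and message text are placeholders, not values from the original script):

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA Mail Profile',
    @recipients   = 'dba-team@contoso.com',
    @subject      = 'HADR worker thread pool alert',
    @body         = 'hadr_thread_pool_worker_start reported worker_start_success = false with more than 100 active workers.';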

 

[Advent Calendar 2017 Day 8] Using Python and CNTK on Azure Functions

This article was written as part of the Microsoft Azure Tech Advent Calendar 2017 published on Qiita.
You can find the list of all calendar articles here.

Hello, this is Murayama from the Azure Dev support team.

In this post, we will use 64-bit Python on Azure Functions together with an open-source library called CNTK to build an API that makes predictions from the values it is given.

 

What is CNTK?

CNTK is an open-source library for deep learning developed by Microsoft. It is currently available for Python, C#, C++ and BrainScript.

Details are described in this documentation, and questions and feedback can be posted on this page.

In Azure Functions, besides C# and JavaScript, Python, PHP and PowerShell are offered in an experimental state. The default Python runtime is 32-bit only, but by installing a site extension you can in fact use 64-bit Python as well.

In this post I will show how to use that Python site extension on Azure Functions and have some fun installing a popular library.

At the end I will also briefly touch on the languages currently available in Azure Functions.

1. 64-bit Python support in App Service (Web Apps, Function Apps, etc.)

Before jumping straight into creating the Function App, let me explain why a site extension has to be installed in order to use 64-bit Python.

By default, Web Apps and Function Apps can only use 32-bit Python (either Python 2.7.8 or Python 3.4.1).

Web Apps and Function Apps let you choose the platform architecture under [Application settings], but 64-bit Python is not available by default; even after switching that setting to 64-bit, the 32-bit Python is still used.

To use 64-bit Python on Azure Web Apps or Function Apps, you need to install and configure the 64-bit Python runtime that is provided separately as a site extension.

I will not cover it here, but if you want to use 64-bit Python with Web Apps or API Apps, try these settings.

In this article we install and use the 64-bit Python site extension on a Function App.

Before that, let's create a Python Function and try running it in the Azure portal.

2. Create a Python Function in a Function App

First, create a Function App from the Azure portal. For the hosting plan you can choose between a Consumption plan and an App Service plan; this time, choose the App Service plan.

Once the Function App has been created, let's create a Python Function.

Click the + next to "Functions", choose either HTTP trigger or Queue trigger, and select Python as the language.

See here for a description of each trigger.

Next, let's use the function we just created to check the current Python version. Save and run the following code and the Python version is displayed.

from platform import python_version

print(python_version())

 

 

The Python versions selectable by default are only 2.7.8 and 3.4.1, but next we will install a site extension and configure it so that 64-bit Python can be used.

 

3. Install and configure the site extension

Next, install the site extension and configure it so that 64-bit Python can be used.

From the Platform features tab of the Function portal, select Kudu to open Kudu, the management site for the Web App.

It can also be reached at https://<function app name>.scm.azurewebsites.net.

 

 

 

After the Kudu site opens, select [Debug console] -> [CMD] at the top of the page to open the console.

Then, in the middle pane, navigate to site -> tools. (You can also type cd site\tools in the console to get there.)

Installing the site extension into this directory makes 64-bit Python available.

The other Python versions currently available are listed here.

This time we install the newest one offered, Python 3.6.1 x64. Run the following command in the Kudu console:

 

nuget.exe install -Source https://www.siteextensions.net/api/v2/ -OutputDirectory D:\home\site\tools python361x64

 

When the command runs, the Python site extension is installed and a folder is created.

For Functions to use it, python.exe must sit directly under the tools folder, so move the installed contents with the mv command:

 

mv /d/home/site/tools/python361x64.3.6.1.3/content/python361x64/* /d/home/site/tools/

 

With that, the Function App is ready to use 64-bit Python.

Try running the earlier code once more. The reported version has changed, which confirms that the installed extension is being used.

 

 

4. Install CNTK

Now that 64-bit Python is ready, let's install CNTK.

First, install the modules CNTK depends on (numpy and scipy) by running the following commands in the Kudu console.

(Make sure the current directory is D:\home\site\tools.)

 

python -m pip install numpy

python -m pip install scipy

 

 

After numpy and scipy are installed, finally install CNTK.

From here, copy the link that matches the Python version you installed.

In this tutorial we installed Python 3.6.1 as the site extension, so we pick the Python 3.6 CPU-only build of CNTK.

 

python -m pip install https://cntk.ai/PythonWheel/CPU-Only/cntk-2.3-cp36-cp36m-win_amd64.whl

 

When the installation completes, the console shows output like the following.

All the required modules are now installed, so let's build the Function.

 

5. Build a logistic regression model and the API

This time we will build the logistic regression model from the CNTK tutorials on GitHub.

A logistic regression model is a supervised learning model that classifies input into two (or more) classes based on the given features.

The detailed logistic regression tutorial is published here.

The tutorial also contains code that is not directly related to building the model; extracting only the code needed for the model gives the following.

If you run the function as a test, the Function portal log shows the training progress together with the labels and predictions, just as described in the tutorial.

With this small amount of data, training the model does not take very long, but when training on large data sets it can take a considerable amount of time.

Assuming a scenario where you train the model locally and then use it from the Function App, save the model with CNTK's save function. Add the following line to the earlier code and run it again:

out.save("D:/home/site/wwwroot/trained_model")

Checking in the Kudu site, you can see that the model file has been created. Since the amount of data is small, the model file is small as well.

Let's load this model file and create a new Python function that predicts which class the submitted data belongs to.

To confirm that the model really is loaded, add the following code to the function you made earlier and run it again:

test_data = features[0]
print(test_data)
print(out.eval({feature:test_data}))

 

The feature values used as test data and the predicted class-membership probabilities are written to the log; note them down in Notepad or similar.

Now create a Python Function in the Azure portal. Since we are building an API this time, choose the HttpTrigger function.

In CNTK, you can load a saved model with the load_model function.

Here is roughly the code I wrote this time.
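The original post showed this function only as a screenshot, so here is a rough sketch rather than the author's exact code. It assumes the v1 experimental Python model, where the request body is read from the file whose path is in the req environment variable and the response is written to the path in res; the "features" field name is purely illustrative.

import os
import json
import numpy as np
import cntk as C

# Load the model that was saved earlier with out.save(...)
model = C.load_model("D:/home/site/wwwroot/trained_model")

# The experimental Python runtime passes the request body as a file
request = json.loads(open(os.environ['req']).read())
features = np.array(request['features'], dtype=np.float32)

# Evaluate the model; the result holds the class-membership scores
result = model.eval({model.arguments[0]: [features]})

# Write the prediction back as the HTTP response
open(os.environ['res'], 'w').write(json.dumps({'prediction': np.asarray(result).tolist()}))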

In the request body you send, set the values you noted down earlier as the parameters.

When you run the Function, it returns the same class-membership probabilities that you noted down, so you can see that the trained model is loaded correctly.

Because the response also indicates which class the input belongs to, you post data and get its predicted class back.

Being able to build an API just by writing code, with no server setup, is extremely convenient.

The other CNTK tutorials are interesting too, so please give them a try.

 

 

 

6. Notes

Please refrain from using preview/experimental languages in production environments.

As mentioned at the beginning, Python on Azure Functions is currently experimental (as of 2017/12/08).

Experimental languages are not currently supported, and they may be discontinued when the Functions runtime is updated.

For that reason, if you run into problems in a production environment, our support service may not be able to fully assist you.

The currently supported languages are described in the following document:

https://docs.microsoft.com/ja-jp/azure/azure-functions/supported-languages

That said, even though Python is experimental, many customers have asked to use it in production.

Therefore, although no date has been set, we are working to make it available in the new Functions runtime ver 2.0.

https://github.com/Azure/Azure-Functions/wiki/Language-support#what-about-the-experimental-languages-that-are-in-v1

We are also running a survey for customers who want to use Python on Azure Functions; we would appreciate your participation.

 

When you build a model, be careful not to let the file size grow too large.

If the model file is too large, it may take a long time to load.

In particular, on the Consumption plan, when a Function is first called the data is loaded from Azure Storage into the execution environment before the Function runs.

If a Function is not executed for a while, the data is automatically unloaded, and the same thing happens again on the next invocation.

Because of this, Function calls are slower than on an App Service plan, where the "Always On" feature lets you keep the data loaded in advance.

Also, if a function takes too long to run it can time out; with an HttpTrigger, the timeout occurs after roughly 230 seconds, so avoid functions that are too heavy.

Other Azure Functions best practices are described here.

That's all for now!


test the draft post

https://support-bay.scm.azurewebsites.net/support/?sitename=msdn-west&tab=mitigate&source=ibiza#

Performance of the GetPixel and SetPixel functions on Windows 10 Fall Creators Update


 

Hello, this is the Platform SDK (Windows SDK) support team.

In this post we describe an issue where the GetPixel and SetPixel functions run more slowly on Windows 10 Fall Creators Update than on Windows 10 Creators Update and earlier versions of Windows 10.

Symptom

Although it depends on the program, when the processing time of the GetPixel and SetPixel functions is measured with the GetTickCount function on machines of roughly equivalent specs running Windows 10 Creators Update and Windows 10 Fall Creators Update, it has been reported that processing on Windows 10 Fall Creators Update is several times to several tens of times slower than on Windows 10 Creators Update.

 

Workaround

No effective workaround has been identified.

Status

Microsoft is investigating this issue.

We will update this blog as soon as there is any progress.

[Power BI] Analyzing Salesforce data


Microsoft Japan Business Intelligence Tech Sales Team 伊藤

Rather than shutting out other companies' products and services to build a Microsoft empire as it once did, Microsoft now develops its products to work with them, and Power BI is no exception. In this post I introduce how to analyze Salesforce data. There are three approaches:

  • Content packs
  • Import with Power BI Desktop
  • Solution templates

Content packs

The easiest way to analyze Salesforce data is to use a content pack.

  1. In the Power BI service, click [Get Data] and then click the [Get] button under "Services".
  2. The AppSource page opens; type "Salesforce" in the search box and search.
  3. Three apps (content packs) are returned. All of them sign in to Salesforce and import that user's own data into the Power BI service. Briefly, they are:
    Salesforce Reports: Connects to your Salesforce account and lets you select one or more Salesforce reports. Each report is imported into a separate table in a new dataset, and you can build reports and dashboards on it in Power BI.
    Salesforce Sales Manager: Imports your own data and your team members' data. It contains the visuals and insights that matter to a sales manager; the dashboard provides key metrics such as the sales pipeline, top accounts, and sales rep performance.
    Salesforce Sales Rep: Imports your own data. It contains the visuals and insights that matter to a sales rep; the dashboard provides key metrics such as the sales pipeline, best accounts, and latest deals.

Import with Power BI Desktop

Because a content pack imports Salesforce data directly into the Power BI service, it cannot combine that data with data from other systems or files. If you want to combine data, use [Get Data] in Power BI Desktop.

Solution templates

Power BI Desktop is handy too, but it is not suited to cases involving large volumes of data, such as wanting to match revenue actuals from your core systems against the opportunity data in Salesforce. In that case, use a "solution template".
https://appsource.microsoft.com/ja-jp/marketplace/apps?product=pbi-solutiontemplates

These templates pull data from Salesforce into SQL Server or Azure SQL Database using Azure Data Factory or Informatica, optionally build a data mart with Azure Analysis Services, and then connect from Power BI for analysis. Azure Analysis Services is useful when you want to combine the data brought from Salesforce into Azure with on-premises revenue data from your core systems. The templates are free to use, but Azure and Informatica are paid (both offer trials). We have published a self-study guide on how to use them; please give it a try.
https://www.microsoft.com/ja-jp/cloud-platform/documents-search?pdid=PBI&svid=Microsoft_Power_BI_&dtid=Self_learning

Announcing Data & BI Summit in Dublin, Ireland 24-26 April 2018


Working with the Power BI User Group and Dynamic Communities I am very proud to announce Data & BI Summit will be held in Dublin, Ireland 24-26 April 2018.

While the content committee hasn’t announced the final agenda…I have it on good authority that Amanda Cofsky, Will Thompson, Nikhil Gaekwad and I will be invited to present!


Data & BI Summit

Join Business Analysts, Data Professionals & Power BI Users at the inaugural Data & BI Summit, located in Dublin, Ireland 24-26 April 2018 at the Convention Centre Dublin.

What to expect with the Data & BI Summit:

  • Have access to exceptional content: Learn how to bring your company through the digital transformation by gaining new understandings of your data and deepening your knowledge of the Microsoft Business Intelligence tools. Products will include: Power BI, PowerApps, Flow, SQL Server, Excel, Azure, D365 and more! Sessions will be available for all users, whether you’re just exploring these technologies or looking for advanced information that can take your skills to the next level.
  • Get your questions answered: Network with the Microsoft Power BI team, dig-in onsite to find immediate answers with industry experts, Data MVPs, and User Group Leaders while taking advantage of the opportunity to engage in interactive sessions, workshops and labs.
  • Network with your peers: Enjoy countless opportunities to create lasting relationships by connecting and networking with user group peers, partners and Microsoft team members. 
  • Stretch your skillset: Advance your career by learning the latest updates and how they can help you and your business. Interested in sharing your experiences and sharpening your presenting skills? Click here to submit a session and join the list of volunteers bringing the community together in Dublin.

Why to attend:

“I'm excited to be part of the upcoming Data & BI Summit event. By attending you will have opportunity to hear from some of the best speakers in this field, network with peers, learn about new features, and share best practices.  It’s a can’t miss for anyone using or interested in the Microsoft Business Intelligence tools!”

- Reza Rad, PUG Board Member, Microsoft MVP, PUG Leader



Early Bird Pricing

Join your peers & other experts in Dublin, Ireland 24-26 April 2018 by registering at Early Bird pricing, a savings of €400.

Register Now



GUI Designer – Small Basic Featured Program


Today I'd like to introduce GUI Designer which is created by Roshan Kumar Priya who is a 7th grade student (age 11).

This program allows you to create Small Basic GUI.  And the description by Roshan is:

GUI in Small Basic is a Graphics editor used to draw basic objects like Ellipse, Rectangle, Controls etc.
It has the small basic code generation feature that can convert the objects you draw to small basic. This project is my learning project to learn about the controls development using Small Basic.

GUI Designer is provided as a zip file.  Extract the "GUI Designer v1.6" folder and make sure that all exe and dll files are unblocked (right-click a file, select Properties, then check Unblock).  Then run "GUI v1.6.exe" from the folder.

Screen shot of a program GUI Designer v1.6

GUI Designer can output Small Basic source code, such as program JCS474.

Have fun with this fantastic tool!  And thank you Roshan for sharing this program!

New Git Features in Visual Studio 2017 Update 5


This week we released Visual Studio 2017 Update 5. In this release, we added new Git features based on your UserVoice requests to support Git submodules, Git worktrees, fetch --prune, and pull --rebase. To learn more about all of our Git features and what's new in Visual Studio 2017 Update 5, check out our Git tutorials and the Visual Studio release notes.

Git submodules and worktrees

Visual Studio now treats Git submodules and Git worktrees like normal repos. Just add them to your list of Local Repositories on the Team Explorer Connect page and get coding! Please note that you still cannot do anything that requires multi-repo support (such as viewing files in a parent repo and a submodule at the same time). If you would like multi-repo support, please vote on UserVoice.

Configure fetch.prune and pull.rebase

Ever delete a branch on the server, only to see it listed in your local list of branches? The best way to make sure your local list of branches is up-to-date is to use the --prune option when you fetch. Pruning on fetch removes local tracking branches that no longer exist on the server.

Another common Git practice is to rebase your changes (rather than create a merge commit) when you pull. Rebasing helps keep your commit history linear and easy to follow.

Now, it’s easy to set Git to prune on every fetch and rebase on every pull. Configure your default behavior in the Global and Repository Settings in Team Explorer.
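For reference, the same defaults can also be set with plain Git from the command line:

git config --global fetch.prune true
git config --global pull.rebase true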

Early technical preview of JDBC 6.3.6 for SQL Server released!


We are excited to release a new early technical preview of the JDBC Driver for SQL Server. Precompiled binaries are available on GitHub and also on Maven Central.

Below is a summary of the new additions to the project, changes made, and issues fixed.

Added

  • Added support for using database name as part of the key for handle cache #561
  • Updated ADAL4J version to 1.3.0 and also added it into README file #564

Fixed Issues

  • Fixed issues with static loggers being set by every constructor invocation #563

Getting the Preview Refresh
The latest bits are available on our GitHub repository and Maven Central.

Add the JDBC preview driver to your Maven project by adding the following code to your POM file to include it as a dependency in your project.

<dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>mssql-jdbc</artifactId>
    <version>6.3.6.jre8-preview</version>
</dependency>

We provide limited support while in preview. Should you run into any issues, please file an issue on our GitHub Issues page.

As always, we welcome contributions of any kind. We appreciate everyone who has taken the time to contribute to the project thus far. For feature requests, please file an issue on the GitHub Issues page to help us track and follow-up directly.

We would also appreciate if you could take this survey to help us continue to improve the JDBC Driver.

Please also check out our tutorials to get started with developing apps in your programming language of choice and SQL Server.

David Engel


Amazon Alexa Skills authenticated by Azure Active Directory and backed by ASP.NET Core 2.0 Web API hosted on Azure


This post is provided by Premier Field Engineer, Nathan Vanderby, who walks us through how to write an Alexa skill using .NET core and AAD.


Amazon's Echo devices and Alexa assistant need no introduction. In this post I lay out how you can write an Alexa skill and use a .NET Core 2.0 API backend service to handle the requests. We will also show how to leverage Azure Active Directory (AAD) so these calls can be authenticated and your users can get a personalized experience. Rob Reilly has a great write-up on how to accomplish this using .NET Core 1.0. Here we'll update the tutorial to .NET Core 2.0, simplify the code with the Alexa.NET NuGet package, and fix a bug in his JWT middleware.

There are four main items that need to be configured: the Alexa skill in Amazon's developer portal, an Azure App Service, your .NET Core 2.0 API, and Azure Active Directory. Once these items are configured, a user just needs to add the skill in their Alexa app, use the Alexa app or website to link their devices to their AAD, and then they can start sending skill requests!

Components of an Alexa Skill

  • A set of intents that represent actions that users can do with your skill. These intents represent the core functionality for your skill.
  • A set of sample utterances that specify the words and phrases users can say to invoke those intents. You map these utterances to your intents. This mapping forms the interaction model for the skill.
  • An invocation name that identifies the skill. The user includes this name when initiating a conversation with your skill.
  • A cloud-based service that accepts these intents as structured requests and then acts upon them. This service must be accessible over the Internet. You provide an endpoint for your service when configuring the skill.
  • A configuration that brings all of the above together so that Alexa can route requests to the service for your skill. You create this configuration in the Alexa developer portal.

Step-by-step tutorial

Step 1: Create an Azure Subscription

If you haven't already signed up for Azure you can do so here. This tutorial can be done using only free services. There is a free tier for Azure App Service, Azure Active Directory app registrations are free, and Amazon doesn't charge for creating a skill.

Step 2: Create your API

In this step we'll create the API app and configure it to authenticate with Azure Active Directory. You can always configure the authentication manually, or simply use the Visual Studio API App template wizard to handle the heavy lifting for you. I'm going to use the wizard and I'll point out all of the changes it is doing behind the scenes.

First create a new project and select ASP.NET Core Web Application as the template.

Make sure you're creating an ASP.NET Core 2.0 template, then select Change Authentication.

Select from one of your Azure AD domains. Check Read directory data and optionally give your App ID URI a unique value. This URI is a unique identifier for your application as well as the audience URI when it comes to authentication later on. Normally the default is sufficient.

The authentication wizard automatically makes changes to 4 files: Startup.cs, AzureAdAuthenticationBuilderExtensions.cs, appsettings.json and ValuesController.cs.

In Startup it tells the app to use AAD with the JWT bearer authentication scheme and IIS to use authentication middleware.

In AzureAdServiceCollectionExtensions it defines the AddAzureAdBearer as a JwtBearer and sets the Audience and Authority properties for this application to trust and validate the token against.

In the appsettings.json it automatically puts your Azure AD information for the login URI, Domain, TenantId and ClientId. From the screenshot above we can see that the ClientId is being used as the audience value.

options.Audience = _azureOptions.ClientId;

We need to update this parameter to be the App ID URI instead of the client ID. This is important; otherwise you will get a 401: Unauthorized result if the audience in the app code does not match the audience string in the authentication token from AAD. Note: In a production scenario I would probably add another setting called Audience to the appsettings & AzureAdOptions class, then assign that property to the options.Audience property in the ConfigureAzureOptions class.
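A sketch of that production-style change, with a hypothetical Audience setting added to appsettings.json and the AzureAdOptions class:

// appsettings.json:  "AzureAd": { ..., "Audience": "https://<your-tenant>.onmicrosoft.com/AlexaBackendAPI" }
options.Audience = _azureOptions.Audience;   // instead of _azureOptions.ClientId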

If you choose to configure authentication manually, these settings are as follows:

Instance - Azure AD login URL. This is dependent on the Azure cloud you are in. Commercial Azure, Azure Government, China & Germany have different URLs.
Domain - This is the AD tenant name where the app is registered.
Tenant ID - This is your Azure subscription tenant id/Azure AD Directory ID. This can be found in the Properties blade of Azure Active Directory resource.
Client ID - This is unique to the application. It is found in the App registrations blade of Azure Active Directory. Remember, though, that the template assigns the audience to the client ID, so instead of the actual client ID we are assigning the App ID URI to this setting.

To find out your Tenant ID go to Azure Active Directory resource and select Properties

To get to the client ID (Audience value), go to Azure Active Directory resource and select App registrations and click on your registration

Then select All Settings -> Properties and there will be an App ID URI. This should match the value from the Visual Studio new project wizard.

In the ValuesController class it's important to note the [Authorize] attribute on the controller. This attribute means only authenticated (and authorized users if roles are defined) can call these methods. This attribute can also be applied at the method level.

Now that your application is configured to leverage Azure Active Directory we need to add code to handle the Alexa request. To start, add the Alexa.NET NuGet Package https://github.com/timheuer/alexa-skills-dotnet

Then, alter your ValuesController to accept a SkillRequest and return a SkillResponse from a Post method like so:

[Authorize]
[Route("api/[controller]")]
public class ValuesController : Controller
{
    [HttpPost]
    public SkillResponse Post([FromBody]SkillRequest request)
    {
        SkillResponse response = null;
        if (request != null)
        {
            PlainTextOutputSpeech outputSpeech = new PlainTextOutputSpeech();
            string firstName = (request.Request as IntentRequest)?.Intent.Slots.FirstOrDefault(s => s.Key == "FirstName").Value.Value;
            outputSpeech.Text = "Hello " + firstName;
            response = ResponseBuilder.Tell(outputSpeech);
        }

        return response;
    }
}

Without the [Authorize] attribute this code will work without account linking. Alexa sends the authentication token in the body of the request whereas the Authorize attribute expects it to be in the Authorization header. There are numerous ways to handle this. We will address this by adding a piece of middleware to automatically look at each request and if the body contains an authorization token then move it to the header. This allows the default authentication middleware to work seamlessly.

Create a new class called AlexaJwtMiddleware and copy the code below in. It's fairly straightforward and heavily commented. Its sole responsibility is to search through the request and move the authentication token from the body to the authorization header.

public class AlexaJWTMiddleware
{
    private readonly RequestDelegate _next;

    public AlexaJWTMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        if (context.Request.Headers.Keys.Contains("Authorization"))
        {
            await _next(context);
            return;
        }

        // Keep the original stream in a separate
        // variable to restore it later if necessary.
        var stream = context.Request.Body;

        // Optimization: don't buffer the request if
        // there was no stream or if it is rewindable.
        if (stream == Stream.Null || stream.CanSeek)
        {
            await _next(context);
            return;
        }

        try
        {
            using (var buffer = new MemoryStream())
            {
                // Copy the request stream to the memory stream.
                await stream.CopyToAsync(buffer);
                byte[] bodyBuffer = new byte[buffer.Length];
                buffer.Position = 0L;
                buffer.Read(bodyBuffer, 0, bodyBuffer.Length);
                string body = Encoding.UTF8.GetString(bodyBuffer);

                if (!string.IsNullOrEmpty(body))
                {
                    SkillRequest request = JsonConvert.DeserializeObject<SkillRequest>(body);
                    if (request.Session.User.AccessToken != null)
                    {
                        context.Request.HttpContext.Request.Headers["Authorization"] = $"Bearer {request.Session.User.AccessToken}";
                    }
                }

                // Rewind the memory stream.
                buffer.Position = 0L;

                // Replace the request stream by the memory stream.
                context.Request.Body = buffer;

                // Invoke the rest of the pipeline.
                await _next(context);
            }
        }
        finally
        {
            // Restore the original stream.
            context.Request.Body = stream;
        }
    }
}

// Extension method used to add the middleware to the HTTP request pipeline.
public static class AlexaJWTMiddlewareExtensions
{
    public static IApplicationBuilder UseAlexaJWTMiddleware(this IApplicationBuilder builder) => builder.UseMiddleware<AlexaJWTMiddleware>();
}

Finally we need to tell the Startup class to also use this middleware. Inside the Startup.cs Configure() method, add a line before the authentication middleware. Order is important here, as we need our new middleware to move the token before the authentication middleware searches for it to validate it.
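For reference, the relevant part of Configure() would look roughly like this (the rest of the pipeline is whatever the template generated for you):

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Move the Alexa access token from the request body into the Authorization
    // header before the JWT bearer middleware looks for it.
    app.UseAlexaJWTMiddleware();

    app.UseAuthentication();
    app.UseMvc();
}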

Step 3: Publish to Azure

Now that your application has the proper code, we need to publish it to Azure. There are myriad ways to do this. One of the simplest is to right-click on the project and select Publish. Then follow the wizard to deploy to an App Service.

On the first screen either create a new app service or use an existing one if you already have one setup.

Then fill in the info for your existing or new app service. When finished click Ok/Create then Publish

Step 4: Configure Azure Active Directory for the Alexa Front End

With our API registered in AAD and deployed to Azure we need to integrate it with Amazon's Alexa service. The first step is to create another application registration for the Alexa service to leverage for user authentication.

Create another app registration in AAD and call it AlexaFrontEnd. Its type will be Native and the RedirectURI will be https://pitangui.amazon.com/api/skill/link/{lookuplater}. We will find out the last part of the RedirectURI after we create an Alexa skill in a later step.

Now that this app registration is setup we need to give it delegate permissions into our API. This allows user access to our backend API if they get their token from the front end (Alexa service) app. This is required since the users are not directly calling our API from a web browser.

Inside the newly created FrontEnd registration navigate to the app registration settings and select Required permissions.

Select Add and start typing in AlexaBackendAPI. Select the app registration we created earlier.

That’s it for now. We’ll come back to both app registrations later to update the Redirect URI (front end) and get a Client Secret (AKA Key) from the backend API after we have created our new Alexa Skill in the Alexa Developer Portal.

Step 5: Configure the Alexa Skill

Now that we have our code in Azure, we need to configure the Alexa skill. Navigate to Amazon's developer page for Alexa; you will have to create an account. Under Alexa Skills Kit click Get Started.

Then select Add a New Skill from the upper right corner.

Skill Information

In the Skill Information screen select Custom Interaction Model, give the skill a name and an invocation name that people will use when talking to an Echo. Click Save then Next to move onto the Interaction Model.

Interaction Model

Under Intent Schema fill in this JSON:

{
"intents" :
[
	{
		"intent" : "Tutorial",
		"slots" :
		[
			{
				"name" : "FirstName",
				"type" : "AMAZON.DE_FIRST_NAME"
			}
		]
	},
	{
		"intent" : "AMAZON.HelpIntent"
	},
	{
		"intent" : "AMAZON.StopIntent"
	}
]
}

The intent here is so Alexa knows that when we talk to "Tutorial" it should also expect a parameter called FirstName. If you look back at the code in the API Post method you will see how we can read the FirstName slot from the Tutorial intent on the request.

Leave Custom Slot Types blank.

Under Sample Utterances add Tutorial Can you greet {FirstName}
Notice how this sample utterance uses both the intent and the FirstName slot.

Click Save then Next to move onto Configuration.

Configuration

Select HTTPS for the endpoint. Under the Default text box enter the URL for your API method.

In the Account Linking section select Yes. Enter the following information into the fields:

  • Authorization URL: https://login.windows.net/{TenantId (DirectoryId)}/oauth2/authorize?resource={App ID URI}. The TenantId/DirectoryId is the same GUID that is in the appsettings.json config file. The App ID URI is the URI we entered when we created the API; it can be found in the Properties settings of the app registration for your API. In this tutorial it is https://AlexaToAzure.onmicrosoft.com/AlexaBackendAPI. The resource parameter is needed because it tells AAD which resource Alexa is requesting access to and defines the audience property in the JWT token returned from AAD.
  • Client Id: This is the Application Id from the native (FrontEnd) app registration
  • Authorization Grant Type: Auth Code Grant
  • Access Token URI: https://login.windows.net/{TenantId (DirectoryId)}/oauth2/token. This URI can be found in the Endpoints settings of the app registration blade; copy the OAUTH 2.0 TOKEN ENDPOINT.
  • Client Secret: This is effectively the password the Alexa service will use to allow account linking to your API backend. We will generate one now. Navigate to the Keys setting in the API app registration.
    Give a key a description and a duration then click Save. The key will be generated on the save. This will be the one time this key is displayed, so copy it.
  • Client Authentication Scheme: HTTP Basic (Recommended)

In the end your Account Linking settings should look similar to this:

Before we move on to the next screen, notice the Redirect URIs in the settings. Copy these URLs into the Redirect URLs setting for the native app (FrontEnd). This replaces the one we used earlier that had {lookuplater} in it.

Click Save then Next

SSL Certificate

In this section select My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority and click next.

Test

Testing won't work until you perform account linking in the next step or remove the [Authorize] attribute from the API controller. We'll come back to this later.

Step 6: Link Accounts and Test
  1. Login to https://alexa.amazon.com
  2. Navigate to Skills -> Your Skills
  3. You should see our newly created skill. Click on it.
  4. Select SETTINGS
  5. Click on the Link Account button in the upper right corner
  6. You will be redirected to log in to Azure Active Directory and then asked to accept the access being requested. Once you get through that, and if all goes well, you should see:
Step 7: Testing

We have multiple options by which to test the Alexa Integration. We can use an actual device, the Amazon Developers Portal Test client or there is an Alexa Skill Testing Emulator. I'll go over the last two since they don’t require you to purchase anything.

Amazon Developer Portal

  1. Go to the Amazon Developer portal where we configured the Alexa skill. Note: if you left this tab open from the prior instructions, you will have to refresh the page after performing the account linking.
  2. Make sure you are under the Skill you want to test.
  3. Click on the Test section.
  4. In the Enter Utterance section enter the following: Tutorial Can you greet {FirstName}
  • {FirstName} = your name or any name
  • This is defined in the Interaction Model section for the skill.
  5. Click the Ask AlexaToAzure button.
  • If all goes well you should see something that looks like:
  • If you get a 401: Unauthorized result, see some debugging tips below.

Alexa Skill Testing Tool (Emulator)

The Alexa emulator is a simulated version of using an Alexa device that is surfaced in a web page. You need to have a microphone and speaker to use this tool.

  1. Open a browser and go to the following URL: https://echosim.io/welcome?next=%2F
  2. Click on the Login with Amazon button.
  • Use your Amazon Developer account
  3. Once logged in, you’re ready to test.

Troubleshooting

  • Double check Web App settings in Azure aren't overriding the web.config settings
  • F12 browser tool
    • If you need to look at the HTTP Request Response stream pressing F12 from Explorer, Edge, Chrome will open integrated developer tools.
  • JWT.IO - https://jwt.io/
    • This is a browser based tool where you can take your JWT token from the request and decode it and verify the signature
  • If account linking works and you're still getting a 401 unauthorized response, try the following:
    • Refresh the Amazon develop page and try again (this makes Amazon request a new token)
  • Enabling diagnostic logs, detailed error messages and leveraging live streaming of logs in the Azure portal for the web app is very useful for debugging

Sample Code

The companion source code for the web API portion of this tutorial can be located on GitHub at https://github.com/vanderby/AlexaToAzureAPI

Links

Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Integrate Web App with Azure Virtual Network by Point-to-Site VPN


It is a common scenario to use VNet Integration to let a web app access a database or other services running on a virtual machine in an Azure virtual network. With VNet Integration, we don't need to expose a public endpoint for applications on the virtual machine; we can use the private, non-internet-routable addresses instead. This can be achieved easily as long as the virtual network has a point-to-site VPN configured with a dynamic routing gateway rather than static routing. By building on point-to-site technology we can limit network access to just the virtual machine hosting the app. Access to the network is further restricted on those app hosts so that the applications can only access the networks we configure them to access. Building on the VNet Integration feature, we can also connect the virtual network to an on-premises network via a site-to-site VPN, and then our apps can reach on-premises resources as well.

Now, let's first create a virtual network with the following configuration. I will not show the details of creating it, as there are already plenty of walkthroughs for that.

Virtual Network Address Block: 10.1.0.0/16
Default Subnet Address Block: 10.1.0.0/24
Gateway Subnet Address Block: 10.1.1.0/28

Next, we need to create a new Virtual Network Gateway for the VNet above; this can take a while. After that, click "Point to site configuration" to configure the VPN and set the Point-to-site Address Block, 172.16.0.0/24 in this example. This is the point-to-site IP address space for the VNet; our web apps will show communication as coming from one of the IPs in this address space. In addition, make sure the IKEv2 Tunnel Type is not checked, as shown below, because Azure web app VNet Integration doesn't support IKEv2 yet.



Here, just understand that SSTP is an SSL-based VPN tunnel that is supported only on Windows client platforms; it can penetrate firewalls, which makes it an ideal option to connect to Azure from anywhere. On the other hand, IKEv2 VPN is a standards-based IPsec VPN solution and can be used to connect from Mac devices as well. Lastly, we can go to the Web App's blade, select Networking, click Setup and finish the rest.



If you have an existing virtual network and it throws errors such as "Adding network jac-vnet to web app jawebsite failed.: Legacy Cmak generation is not supported for gateway id /subscriptions/subscriptionid/resourceGroups/javnetrg/providers/Microsoft.Network/virtualNetworkGateways/p2sgateway when IKEV2 or External Radius based authentication is configured. Please use vpn profile package option instead" when you integrate your web app with the virtual network, a common cause is that the existing virtual network supports both SSTP and IKEv2. To fix that, run the PowerShell script below to check the cause and, if needed, limit the VPN client protocol to SSTP only.

$gateway = Get-AzureRmVirtualNetworkGateway -ResourceGroupName groupname -Name vpngatewayname
$gateway.VpnClientConfiguration.VpnClientProtocols
Set-AzureRmVirtualNetworkGateway -VirtualNetworkGateway $gateway -VpnClientProtocol "SSTP"

Lesson Learned #30: How to measure a TSQL execution time against Azure SQL Database


Very often, our customers ask about the connection latency or execution time of a query between their servers and their Azure SQL Database. To identify the time spent in each step, I created a PowerShell script that measures several counters, such as NetworkServerTime, ExecutionTime and ConnectionTime, based on the statistics data obtained from the RetrieveStatistics method of SqlClient.SqlConnection.
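The full script is attached to the original post; a minimal sketch of the underlying idea, with placeholder server, database, and credentials, looks like this:

# Measure one query with SqlClient statistics (values are reported in milliseconds)
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=tcp:yourserver.database.windows.net,1433;Initial Catalog=yourdb;User ID=youruser;Password=yourpassword;Encrypt=True")
$conn.StatisticsEnabled = $true
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT 1"   # replace with the query from TSQL.SQL
$null = $cmd.ExecuteScalar()
$stats = $conn.RetrieveStatistics()
"ConnectionTime    : {0} ms" -f $stats["ConnectionTime"]
"ExecutionTime     : {0} ms" -f $stats["ExecutionTime"]
"NetworkServerTime : {0} ms" -f $stats["NetworkServerTime"]
$conn.Close()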

You can customize it by modifying TSQL.SQL with the queries you want to execute, and by modifying the PowerShell script you can configure the server, user and password used to test the connection and execution time.

Feel free to add more options and tests to this PowerShell script.

Enjoy!

 

Lesson Learned #31: How to measure a TSQL execution time against Azure Database for MySQL


Often, our customers ask about the connection latency or execution time of a query between their servers and their Azure Database for MySQL. To identify the time spent in each step, I created a PowerShell script called SQLMySQLConnectivityTest.ps1 that, based on SHOW PROFILE ALL results, shows the time spent in the different steps of the query.

You can customize it by modifying TSQL.SQL with the queries you want to execute, and by modifying the PowerShell script you can configure the server, user and password used to test the connection and execution time.

You need to have the Connector/NET ADO.NET driver for MySQL installed first.

Feel free to add more options and tests to this PowerShell script.

Enjoy!

Readia Comic Viewer is easy to use!


I read my self-scanned books in a viewer app. On Android that is Perfect Viewer, but on Windows 10 I keep trying one app after another. My default has been MangaMeeya, an old desktop app, but it is no longer supported or even distributed, and lately it has started to misbehave.

Among the alternatives, Pico Viewer was the one I relied on for its speed and high degree of customization. Recently, though, Readia Comic Viewer has become my favorite because it is so easy to use.

Honestly, I think Pico Viewer is still ahead in fine-grained settings, speed and features, but there are two big reasons I pick this app:

  1. You can open files by drag & drop
  2. It is a UWP app, so it also works on MR, on the Hub and on Mobile

UWP has supported drag & drop for quite a while, but few apps actually implement it. Apps can now even be registered to run at startup. Being able to use it the way I used MangaMeeya is a real pleasure. That said, there are some things I would love to see improved:

  • Loading can take a while, noticeably so with large files
  • There is no bookshelf; files are handled flat, which gets painful
  • File extension association would be nice

For loading, it would be nice if it read a little up front and loaded the rest on a separate thread, and extension association could probably be added along these lines, but beyond that I would just be a demanding user.

The Microsoft Store (and UWP in general) has built up quite a collection of good apps.
