
How to use mssql-cli, the new command-line (CUI) tool


Microsoft Japan Data Platform Tech Sales Team

佐藤秀和

With SQL Server 2017 adding support for Linux and containers in addition to Windows, we have been building out multi-platform client tools at a rapid pace.
This post introduces mssql-cli, a new command-line (CUI) tool for SQL Server.

Platform support for the SQL Server client tools

What is mssql-cli?
A CUI has many advantages that a GUI does not: repeated operations are easy to run, there is less room for error (no mis-clicks as with a GUI tool), a command history is kept, and it runs with few resources.
sqlcmd has long been available as the CUI tool for SQL Server, but people who also use Oracle, open-source databases, and other DB products may find it a little awkward. In addition, now that SQL Server 2017 supports Linux, containers, and other platforms, we sometimes hear from people who normally work in a Linux CUI that the GUI tools just don't feel natural to them.

To address this, Microsoft has been working with the dbcli open-source community to develop a SQL Server command-line query tool that runs on a variety of platforms, and has begun shipping it. That tool is mssql-cli, which we introduce here.

Setup
mssql-cli requires Python to be installed beforehand.
If Python is not installed in your environment, download it from here and set it up: https://www.python.org/downloads/
Python 2.7 and later versions are supported.

When setting up Python, installing pip (the Python package manager) at the same time will make the mssql-cli setup easier later on. (In the installer, confirm that pip is checked under the optional features.)
On Windows, we also recommend enabling the option that adds Python to the Path environment variable.

On Windows, running the following at a command prompt completes the mssql-cli setup.
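
The setup boils down to installing the package with pip and then connecting; a minimal sketch, where the server, user, and database names are placeholders:

pip install mssql-cli
mssql-cli -S localhost -U sa -d AdventureWorks

The connection switches mirror sqlcmd (-S for server, -U for user, -d for database), so moving between the two tools is straightforward.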

For setup on other platforms, please refer to:
・dbcli/mssql-cli /doc/installation_guide.md
https://github.com/dbcli/mssql-cli/blob/master/doc/installation_guide.md#Alternative-Installation-via-direct-downloads

Features of mssql-cli
mssql-cli is an interactive command-line query tool with the following features.

・IntelliSense for SQL statements
If you use SQL Server Management Studio, SQL Server Data Tools, or Visual Studio, you will know this well: keywords are auto-completed as you type. Selecting candidates from a list reduces keystrokes and typos, improving your efficiency and productivity.
For example, in a SELECT query you can pick the target table from a list of the tables in the database, and in the column position a list of that table's columns is shown, so you can build the query just by selecting the columns you need.


An animation showing this behavior is also available here.

・Syntax highlighting
SQL statements, constants, and so on are displayed color-coded.
・Query result formatting
Query results are output as bordered tables formatted according to the data types, column names, and so on.


・Shortcuts
Frequently used command lines can be entered with shortcut keywords, cutting down on typing.

Shortcut list

Output of the database list

・Multi-line edit mode
To edit a query that spans multiple lines, enable multi-line mode (F3 key). When you have finished editing, turn multi-line mode off and press Enter, and the query is executed.


Beyond these, new features are being added one after another. Most recently, a feature that keeps a history of executed queries was added, aimed at GDPR compliance.

What do you think?
mssql-cli is currently in preview, but as mentioned at the beginning it is being developed jointly by Microsoft and the OSS community and holds a great deal of promise, so please evaluate it and send your feedback to the community.

Submit feedback on mssql-cli here:
GitHub Issues


The Data Analysis Maturity Model – Level One: Data Collection Hygiene


Data Science and Advanced Analytics are umbrella terms that usually deal with predictive or prescriptive analytics. They often involve Reporting, Business Intelligence, Data Mining, Machine Learning, Deep Learning, and Artificial Intelligence techniques. Most of the time these technologies rely heavily on linear algebra and statistics for their predictions and pattern analysis.

In any foundational mathematics, and especially in statistics, base-data trustworthiness is essential. The result of a calculation is only as good as the information it uses. Depending on the algorithm in use, even small errors in the data are magnified until the result is untrustworthy. It follows that the processes and technologies used to collect, store and manipulate the data your downstream predictions use should be verified before you implement any kind of advanced analytics. However, many companies assume or ignore these concepts and rush in to creating solutions without verifying their upstream sources. I’ve found that describing a series of “Maturity Models” is useful to understand that there is a progression you should follow to get to effective Advanced Analytics.

In the first level of data analytic maturity, the organization has good data collection “hygiene”. Starting at the collection point, the base data must be consistent and trustworthy. 

Although it might not seem important to Data Science and predictive analytics that a web form have proper field validation, being able to trust a prediction algorithm begins here. As data professionals, we often think about Declarative Referential Integrity (DRI), proper data types, and other data hygiene controls at the storage and processing level. That's very important, of course, but beginning with the way we collect data, and even in the way we verify who we collect it from, it's vital to ensure we have a chain-of-custody mindset through to the end prediction. Often the data comes not only from a Relational Database Management System (RDBMS) but from "unstructured" (although there's really no such thing) sources such as text or binary files. Mechanisms starting at the collection point must be set in the context of Programmatic Referential Integrity (PRI), where you write code to validate the linking between data elements in a file, in addition to trusting the DRI in a data engine. It's also about validating the structure of XML, JSON and other files for integrity.
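
As a concrete, hypothetical illustration of Programmatic Referential Integrity, the sketch below checks that every order in a JSON extract references a customer that actually exists; the file names and field names are assumptions made for this example, not part of the article:

import json

def find_orphan_orders(customers_path, orders_path):
    """Return orders whose customer_id has no matching customer record."""
    with open(customers_path) as f:
        customer_ids = {c["customer_id"] for c in json.load(f)}
    with open(orders_path) as f:
        orders = json.load(f)
    # Any order pointing at a missing customer breaks referential integrity in the file.
    return [o for o in orders if o["customer_id"] not in customer_ids]

orphans = find_orphan_orders("customers.json", "orders.json")
print("%d orders reference unknown customers" % len(orphans))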

This maturity level also documents the data flow, referencing the programs that collect the data. For analytics such as medical or financial predictions, the Data Scientist should be able to trace back the way the data is collected.

In the next article, I’ll explain the second maturity level - reliable data storage and query systems.

How to link SQL SPID to user in Dynamics 365 for Finance and Operations on-premises


Quick one today! How to link a SQL SPID back to a Dynamics user in Dynamics 365 for Finance and Operations on-premises. You use this when, for example, you have a blocking SQL process and want to know which user in the application triggered it: look up the blocking SPID and it tells you which user.

Run this SQL:


select cast(context_info as varchar(128)) as ci,* from sys.dm_exec_sessions where session_id > 50 and login_name = 'axdbadmin'

The first column shows the Dynamics user. It's much like it was in AX 2012, except that you don't need to go and set a registry key first.
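
For example, to identify the Dynamics user behind a blocking session, you could join this against sys.dm_exec_requests; a hedged sketch, not taken from the original post:

select r.session_id as blocked_spid,
       r.blocking_session_id as blocking_spid,
       cast(s.context_info as varchar(128)) as blocking_dynamics_user
from sys.dm_exec_requests r
join sys.dm_exec_sessions s on s.session_id = r.blocking_session_id
where r.blocking_session_id <> 0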

You can do the same thing in the cloud version, but there you don't need to do it in T-SQL: in LCS you can go to Monitoring and the "SQL Now" tab, where you can see the SPID-to-user mapping for running SQL.

Failed to update web app settings: The storage URI is invalid


Overview

When updating your Azure Function App storage account keys, if you specify the wrong key you could get an error including the text:

Failed to update web app settings:  Bad Request  The storage URI is invalid ExtendedCode:04203 No valid combination of account information found


Fix

I found that the older version of the key did not have the suffix listed. When I generated the new key it did; thinking the suffix was invalid, I removed it and then got this error. Ensure that you copy and use the entire key exactly as provided!

Example: DefaultEndpointsProtocol=https;AccountName=cs2d3de4593cb35x492dxa63;AccountKey=xxxxxxxxxxxxx==;EndpointSuffix=core.windows.net
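
If you prefer to fix the setting from the command line, something like the following should work; the app name, resource group, and placeholder values below are assumptions for the example:

az functionapp config appsettings set --name myfunctionapp --resource-group myresourcegroup --settings "AzureWebJobsStorage=DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"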

 

Please drop me a note if you found this useful!

About the first invoice for a support plan


※ This topic reflects information as of June 2018. Please note that it may change in the future.

 

Hello, this is the Microsoft Azure support team.

In this article, we would like to cover the following points about support plan invoices.
We hope you find it helpful.

 

1. When the first invoice for a support plan is issued

2. Payment information on the first invoice for a support plan

3. Reference information

 

 

1. When the first invoice for a support plan is issued

 

For support plans, the first invoice is issued on the day after you purchase the plan.
In addition, for credit card payments, this first charge is submitted to your credit card company at the time you purchase the support plan.

As a result, payment has already been completed by the time the first invoice is issued the following day, so the total amount (amount billed) shown on the invoice is 0 yen. We appreciate your understanding.

 

Example: page 1 of the first invoice issued for a support plan paid by credit card

 

 

 

2. Payment information on the first invoice for a support plan

 

For support plans paid by credit card, the charge is submitted to your credit card company at the time of purchase, so the [About your payment] section on page 2 of the first invoice states [No payment is required].
However, because payment is completed on the day of purchase, the charge has in fact already been made to your credit card. We appreciate your understanding.

 

Example: page 2 of the first invoice issued for a support plan paid by credit card

 

 

3. Reference information

 

Please also see the following reference information.

How to read your Microsoft Azure invoice
https://blogs.msdn.microsoft.com/dsazurejp/2013/10/10/windows-azure-5/

 

That concludes this announcement.

We will continue to strive to provide information about our products and services that is useful to our customers.

Thank you.

 

 

 

 

Enterprise Deployment of an Internal Load Balancer with an App Service Environment v2


App Dev Manager Mariusz Kolodziej kicks off this multi-part series covering the deployment of an Internal Load Balancer with an App Services Environment via ARM templates and PowerShell.


Infrastructure as Code (IaC) is becoming the norm for deploying all resources (IaaS and PaaS) in the Cloud. This post is part 1 of 7 of a miniseries which will take us through the process of deploying an Internal Load Balancer (ILB) with an App Service Environment (ASE) v2, all via Azure Resource Manager (ARM) Templates and PowerShell.  The miniseries will cover the following 7 topics:

  1. Deploying ILB ASE v2 with ARM Templates
  2. Uploading Certificate to Key Vault and Assigning it to the ILB ASE v2 with ARM Templates
  3. Creating an App Service Plan (ASP) with ARM Templates
  4. Creating App Service Web Apps with ARM Templates
  5. Uploading Certificates to Key Vault and Assigning to App Service Plan for Web App Usage with ARM
  6. Assigning Network Security Groups (NSGs) to the ILB ASE
  7. Resource Group Recommendations for RBAC for ILB ASE, ASP and Web Apps

Read the full post and track this series on Mariusz’s blog.



Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

If your MX record doesn’t point to Office 365, how do you disable spam filtering in Office 365?


One of the questions that has come up recently, especially as a lot of customers migrate over from an existing spam filtering solution to Office 365, is how to force Office 365 to rely upon the spam/non-spam verdict of the service that's sitting in front of Exchange Online and not double-filter.

Office 365 already has a way to do this via connectors if the email server in front is an on-premises Exchange server. It does this by using TLS certs to "promote" the pre-existing properties of a message stamped by the Exchange server so they are re-used by Office 365. But what if your server in front is not Exchange? Then what?

There are a few ways to do this, but the key thing is that there is no simple way to disable spam filtering in Office 365. The option for "Do not filter spam" does not exist at an organizational level. Instead, you have to do a few tricks depending on the configuration.

  1. You want the service in front of Office 365 to get rid of spam, such as sending it to a spam quarantine 

    In this case, the spam is taken care of somewhere else, and all email going to Office 365 is non-spam (according to the upstream filter). Therefore, you can create a set of IP Allow List entries corresponding to the relay IPs into the service. This will set SCL -1 and send all email into your users' inbox, bypassing spam scanning, and stamp IPV:CAL and SFV:SKN in the X-Forefront-Antispam-Report header. Alternatively (even preferably), you can create an Exchange Transport Rule (ETR) for those connecting IPs that (a) sets the SCL to -1, and (b) sets an x-header (see the PowerShell sketch after this list):

    X-Relay-IP-for-service: Allow email from <name of service>

    ... as it is already filtered for spam.

    This way, when someone in your organization gets a message that is spam, and you decide to escalate to Office 365 for missing spam even though we didn't filter it because you said not to, we'll be able to quickly look at the headers and see that's why the message was delivered to the user's inbox.

  2. You want to use Office 365 to manage both spam (in the Junk Email folder) and non-spam 

    In this case, you want non-spam to end up in users' inboxes, and spam to go to the Junk Email folder the way it would for something like an Outlook.com or Gmail account. This requires you to configure the upstream mail service to stamp two headers: one that is stamped when the service marks a message as spam, and a second one that is stamped when the service marks a message as non-spam. For example, Office 365 uses the X-Forefront-Antispam-Report header to stamp SCL -1 or 1 for non-spam, SCL 5 or 6 when it is spam, and SCL 9 when it is high confidence spam. Then, in the Office 365 platform, you would write an ETR that sets SCL -1 when it sees the stamp for non-spam, and SCL 6 (or 9, depending on what you want the action to be) when the message has the spam stamp.

    You can extend this for safe and blocked senders, safe and blocked domains, etc. Whatever stamp the upfront service stamps in each case should be consumed downstream.
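
As a rough sketch of the ETR described in option 1 (not taken from the original post), the rule could be created in Exchange Online PowerShell along these lines; the rule name, IP range, and header values are placeholders:

New-TransportRule -Name "Trust upstream spam filter" `
    -SenderIpRanges 203.0.113.0/24 `
    -SetScl -1 `
    -SetHeaderName "X-Relay-IP-for-service" `
    -SetHeaderValue "Allow email from upstream service - already filtered for spam"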

There is another method such as disabling junk mail filtering on a per-mailbox basis which will force messages to go through filtering but still land in the inbox (this is what I do on my own personal email, although my MX points to Office 365). However, this is an advanced scenario and causes customers a lot of trouble.

Putting additional services in front of Office 365 is what we call "complex routing"; there are several scenarios for complex routing, and the MX pointing to another service is the simplest case.

Finally, if you do use complex routing like this, there are some things to be aware of:

  • You lose a big chunk of the benefits of pointing your MX to Office 365 as the first hop in the path. Make sure you read the first article in the Related Posts section at the bottom of this article
  • Outlook Web Access (OWA) has some client-side integration that lights up when hooked together with Office 365, and some of those features depend on Office 365 being the first hop in the path (e.g., showing a '?' in the sender photo). When these behaviors come to Outlook desktop client, it will have the same requirement. That's another feature loss for complex routing.
  • Additional services in front of Office 365 results in additional layers of management complexity
  • Advanced Threat Protection works much better when Office 365 is the first hop in the path; we usually recommend putting other services behind us and then looping the email back in via a connector. You'll have to use ETRs again, and they are more complex, but that way you get the full protection
  • If you get missed spam or false positives because of this configuration, it is challenging for us to do much with them because parts of our filter have been disabled

Related posts

Running USB Type-C System HLK tests with the Type-C MUTT


Authored by Philip Froese [MSFT]

In the next release of the HLK, the UCSI tests have been updated to run against the new Type-C SuperMUTT device instead of a partner system. This means the test setup will be simpler, the tests will run more quickly, and test content will be more thorough than in Windows 10 April 2018 Update HLK and earlier. The new preview tests are only in pre-release HLK builds 17676 and higher.

New Test Setup

The new test setup is simple: it requires a single Windows PC with Type-C, designated the System Under Test (SUT), and the Type-C SuperMUTT connected to one of the Type-C connectors on the system. See the image below.

Type-C SuperMUTT

The Type-C SuperMUTT is the newest addition to the Microsoft USB Test Tool (MUTT) family of devices. It incorporates the functionality of the legacy SuperMUTT device as well as Type-C and PD functionality.

The software and firmware utilities are available in the MUTT Package download: https://docs.microsoft.com/en-us/windows-hardware/drivers/usbcon/mutt-software-package

And the tool is available for purchase through MCCI: http://www.mcci.com/mcci-v5/devtools/type-c-supermutt.html

Firmware version 45 or later, available in MUTT Package v2.6 and later, is required to run the UCSI HLK tests in the next release of the HLK.  See the TypeCSuperMUTT.pdf documentation in the MUTT Package download for more information.

New Test Names

The UCSI compliance tests have been replaced with a new version of the tests to run against the Type-C SuperMUTT. The new version of the test has the tag “[Type-C MUTT]” added to the test name. You will see both the old and the new versions of the test in the next release of the HLK pre-release kit for some time as we want to leave the old version in for at least a few weeks to enable cross-version validation if needed.

What is Different?

Beyond a new name and a simpler test setup, what else is new in the tests? For the most part, the tests cover the same features in the same manner as the original tests did. However, there are some cases where the new tests will be more rigorous.

Take USB Operation Role tests for example. Many desktop UCSI systems do not support being set into the UFP role, though they may resolve to that role initially when connected to a pure DFP (such as a charger). This behavior makes it difficult to test every permutation of DFP/UFP/DRP connections that a platform may have to support if it is tested against a functionally equivalent peer device. However, with the Type-C SuperMUTT the test can deterministically place the Type-C SuperMUTT into a specific, known port direction or configuration before connecting it to the SUT and is thus able to provide more complete test coverage of USB Operation Role features.

Thus, there may be some test cases in the next release of the HLK that are more thorough and so uncover new platform quirks that Windows 10 April 2018 Update HLK and earlier test content did not.

You will also notice that some tests have been removed. Some tests were determined to be redundant when run against the Type-C SuperMUTT and so were removed. Others targeted UCSI commands that have been removed in UCSI v1.1, and which Windows never supported, so were removed as obsolete.

USB Type-C UCSI Data and Power Role Swap Tests

Applicable Tests

  • USB Type-C UCSI Data Role Swap
  • USB Type-C UCSI Power Role Swap

Hardware Requirements

  1. One UCSI compliant Windows system.
  2. One Type-C MUTT device.

Test Setup

  1. Install HLK client on the System Under Test (SUT).
  2. Connect the Type-C MUTT to any USB Type-C port on the SUT.
  3. Locate the device node in Device Manager (devmgmt.msc) named "UCSI USB Connector Manager". The node is under the "Universal Serial Bus controllers" category.
  4. Right-click on the device, and select "Properties" and open the "Details" tab.
  5. Select "Device Instance Path" from the drop-down and note the property value.
  6. Open Registry Editor (regedit.exe).
  7. Navigate to the device instance path under this key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Enum\<device-instance-path from step 5>\Device Parameters
  8. Create a DWORD value named "TestInterfaceEnabled" and set the value to 0x1. (A command-line equivalent is shown after this list.)
  9. Restart the device by selecting the "Disable" option on the device node in Device Manager, and then selecting "Enable". Alternatively, you can simply restart the PC.
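
Steps 6 through 8 can also be done from an elevated command prompt; a hedged equivalent, substituting your actual device instance path:

reg add "HKLM\SYSTEM\CurrentControlSet\Enum\<device-instance-path>\Device Parameters" /v TestInterfaceEnabled /t REG_DWORD /d 1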

Test Parameters

Parameter Name - Parameter Description
SwapsToPerform - Number of data role swaps to perform. The minimum is 2 so that both host and function mode are tested.
ValidateUsbFn - If ValidateUsbFn = true, the test will validate function stack behavior.

Troubleshooting

  • Some UCSI commands are marked "optional" in the UCSI specification. However, Microsoft requires some optional commands to be implemented in order for the UCSI driver to function. The tests may fail if you've forgotten to implement one of these commands in your BIOS. You can find that list of commands here.
  • If the test is not detecting the Type-C MUTT:
    • Check your cable connection to the Type-C MUTT.
    • Check in Device Manager. Has the Type-C MUTT enumerated?
      • Look for the "SuperMUTT" device with hardware ID USB\VID_045E&PID_078F in Device Manager.
    • Check in Device Manager. Does the "Device Status" of the UCSI device report any errors? If so:
      • Right-click on the UCSI device and disable it.
      • Start UCSI logging (see https://aka.ms/usbtrace )
      • Enable the UCSI device. Do whatever it takes to get the yellow bang to reproduce.
      • Stop logging.
      • View the resulting WPP logs and check for errors.

USB Type-C UCM Data and Power Role Swap Tests

Applicable Tests

  • USB Type-C UCM Data Role Swap
  • USB Type-C UCM Power Role Swap

Hardware Requirements

  1. One Windows system with a USB Type-C connector.
  2. One Type-C MUTT device.

Test Setup

  1. Install the HLK client on the System Under Test (SUT).
  2. Connect the Type-C MUTT to any USB Type-C port on the SUT.
  3. Ensure all other USB Type-C ports are disconnected.

Test Parameters

Parameter Name - Parameter Description
SwapsToPerform - Number of data role swaps to perform. The minimum is 2 so that both host and function mode are tested.
ValidateUsbFn - If ValidateUsbFn = true, the test will validate function stack behavior.

Troubleshooting

  • "No USB Type-C Connectors found with partners attached".
    • Check your cable connection to the Type-C MUTT.
    • Check in Device Manager. Has the Type-C MUTT enumerated?
      • Look for the "SuperMUTT" device with hardware ID USB\VID_045E&PID_078F in Device Manager.
      • If your system is using UCSI, you can take UCSI logs during attach to investigate why the attach has not been reported to the OS.

UCSI Compliance tests

Applicable tests

This category of tests refers to all tests in the HLK whose name begins with "UCSI" and ends with the "[Type-C MUTT]" suffix. The early RS5 pre-release HLK builds will include the original UCSI Compliance Tests as well; these tests have the same names, but without the "[Type-C MUTT]" suffix. Unless you are specifically running the old tests for comparison, you should ignore the old versions of the tests.

UCSI Compliance Tests are meant to test the UCSI-capable Type-C system’s compliance to UCSI Specification.

Broad Categories of UCSI Compliance Tests

  • UCSI Command Interface tests.
    • Tests all of the UCSI commands that are claimed to be supported by the SUT.
  • USB Operation Mode tests.
    • Tests all of the USB Operation Modes that are claimed to be supported by the SUT on the given Connector.
  • USB Operation Role tests.
    • Tests all of the USB Operation Roles and role swaps that are claimed to be supported by the SUT on the given Connector.
  • Power Direction Role Tests.
    • Tests all of the Power Direction Roles that are claimed to be supported by the SUT on the given Connector. Performs role swaps.
  • UCSI Notification tests.
    • Tests all of the UCSI notifications that are claimed to be supported by the System under Test on the given Connector.

Hardware Requirements

  1. One UCSI compliant Windows system.
  2. One USB Type-C MUTT device.

How to identify Connector 1

In the next release of the HLK, the tests can be run against any available connector on the SUT, but you will need to specify the connector that the Type-C MUTT is connected to via the UcsiConnectorNumber parameter. If you do not know the connector numbers on a multi-port system, the UCSI Connector One Identification [Type-C MUTT] test can help you identify which is connector 1. If the SUT has 3 or more connectors, and you don’t know their mapping, you will simply need to experiment by connecting the Type-C MUTT to an unknown port, setting UcsiConnectorNumber to a new value, and running a test to see if the device is found on that connector; repeat with a new connector number until successful. This should just take a few attempts to establish the connector number layout for any new system.

Test Setup

  1. Install the HLK client on the System Under Test (SUT).
  2. Connect the Type-C MUTT to any USB Type-C port on the SUT.
  3. Record the connector number to which the Type-C MUTT is attached. You will supply this number when scheduling the tests.
  4. Install TestUcsi.sys driver on the SUT. On a URS system, it is important that the Type-C MUTT is connected before the test driver is installed, otherwise the USB host stack may not get loaded.
    • In Device Manager, find the device "UCSI USB Connector Manager". This device will have the driver UcmUcsi.sys installed.
    • Right-click the device and select "Update Driver Software"
      • Enter the path to TestUcsi.inf
        • TestUcsi.inf and TestUcsi.sys are located in TestUcsi\<ARCHITECTURE>
      • Follow the prompts to complete the driver installation. This will replace UcmUcsi.sys with TestUcsi.sys.

Test Parameters

Parameter Name - Parameter Description
UcsiConnectorNumber - The Type-C connector number on which the Type-C MUTT device is attached.
WaitTimeInMinutes - For tests that require manual intervention, this is the amount of time in minutes the test will wait to see the event that is expected to occur in response to the requested manual action.
WaitTimeMultiplier - An integral multiplier of wait times required in the test when interacting with the Type-C MUTT. Some systems may take somewhat longer than expected to enumerate the device when it reconnects.

Test Execution

The Type-C MUTT allows the HLK tests to emulate and automate all scenarios in RS5 that previously may have required manual intervention. When scheduling the UCSI tests, only select the ones with the “[Type-C MUTT]” designation. You can select all of them at once, or only run a subset of them if you wish.

Test Cleanup

The UCSI Data and Power role swap tests require UcmUcsi.sys rather than TestUcsi.sys. If you plan to run the UCSI Data Role Swap or UCSI Power Role Swap tests in the HLK after this test, be sure to clean up your test environment after running the UCSI compliance tests. You can do this by replacing TestUcsi.sys with the original UcmUcsi.sys or by reinstalling Windows on both systems.

Troubleshooting

  • Please review the HLK test log to determine why the test failed. In many cases, the test log will state common known causes of a given failure and how it may be resolved.
  • Examine the driver logs per the guidance provided in the following blog post:
  • If the test failure is unclear from both the test and driver logs or you believe it to be incorrect, and you are going to reach out to Microsoft for additional guidance, please prepare the following when reporting the issue to us. This will help us get to the bottom of your bug report as quickly as possible!
    • The HLKX package containing the failed test result. This will contain the failed test log as well as driver and Type-C MUTT firmware logs we may need to review.
    • An explanation of what diagnostic efforts you have already applied and why they were inconclusive:
      • Was the test log inconclusive or confusing? (We'd love to hear feedback in order to make them better!)
      • Do you believe the test result to be incorrect, and if so, why?
    • If you believe a test failure to be in error, do you have passing logs from the same test case in Windows 10 April 2018 Update HLK? If so, please provide them. Or do you have other evidence (e.g. driver traces, PD trace, etc.) that would demonstrate the PPM behaving properly in the scenario under test?

Application Insights – Advisory 06/05

Between 06/01/2018 17:00 UTC and 06/05/2018 08:00 UTC, all customers may have experienced delays in Alert/Information emails in the East US region. The impact was due to an issue in one of our dependent services, which is responsible for sending emails. Our DevOps team identified the issue and deployed a fix to address it. The issue is now mitigated, and the delayed emails may have been sent between 06/05/2018 08:00 UTC and 06/05/2018 18:30 UTC.

We apologize for any inconvenience.

-Sindhu

What is a DTU?


If you are logged into the Unify Portal and navigate to Kusto actions > Performance > Perf Metric, you will be presented with the DTU Timeline, which details DTU consumption.

But what exactly is a DTU?

A DTU, or Database Transaction Unit, is a blended measure of CPU, memory, and disk resources.  This does not apply to on-premises SQL Server, but applies to our PaaS offering of SQL Server called Azure SQL Database, so only Azure-based D365FO customers should be concerned with this.  With Azure SQL Database, Microsoft guarantees a certain level of resources available to your database based on the selected service tier, making performance more predictable.  If your workload exceeds this level, throughput is throttled, which results in slower performance.

The DTU unit of measure is an easy-to-understand metric, especially when comparing between performance levels and service tiers.  Doubling the DTUs translates into doubling the amount of resources available for your Azure SQL Database.  Conversely, halving the DTUs in your service tier translates into halving the amount of resources available.
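
If you would rather see the underlying resource consumption from inside the database than in the portal, Azure SQL Database exposes it through the sys.dm_db_resource_stats DMV; a quick sketch:

SELECT TOP (10) end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;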

Migrating a large collection from on premise TFS to Azure hosted service (VSTS)


 

The TFS Database Import Service provides a way for Team Foundation Server (TFS) customers to complete a high-fidelity migration into Visual Studio Team Services (VSTS). It works by importing a collection in its entirety into a brand new VSTS account.

The Import Service has been used by tons of customers, small to large, to successfully migrate to VSTS. This includes using it internally at Microsoft to move older TFS instances into VSTS, helping us achieve our larger goal of using one engineering system, more formally known as the 1ES effort. As you can imagine, there are lots of exceptionally large collections hanging around at Microsoft. We thought we would share our experiences from a recent migration of a collection that was around 10.5 TB in total size. Yep, that's TB for terabytes 😊.

Please refer to my other blogs for details about the process, tool, common commands etc. here. You can also learn more about the Import Service by visiting https://aka.ms/tfsimport.

Below are some statistics on the size of the collection we imported. It’s truly a big one!

 

Description - Numbers
Full database size - 10578 GB
Database metadata size - 583 GB
Number of users - 2800
Team projects - 5
Work items count - 19.2 million
Test runs count - 1.3 million
Builds count - 28.7 thousand

 

Migration time:

Migrating the collection, including all the data within it, took ~92 hours. Please note that migration time may differ based on the data/content size, content type, network speed, etc.

Performance:

For most companies and teams, performance of the service is a crucial factor in making the change. I am glad to share that in our testing, after migrating this large collection, work items, builds, and other information were accessible in similar or less time than on the on-premises collection.

What if I need help?

Please reach out to the migration team at Microsoft (vstsdataimport@microsoft.com) to discuss and plan your migration, or if you have any queries.

Tips & Tricks:

If your on-premises SQL Server does not have inbound and outbound internet connectivity, you can host the SQL database on an Azure-hosted SQL virtual machine (VM). It's recommended to host the VM in the same region where you are planning to host your VSTS account; the data transfer rate within the same region will be much better.

If you need to transfer the collection backup to the Azure-hosted VM, plan to split the backup into multiple files (I split the backup into 24 files) and use the power of parallelism to transfer them.

Below are details to speed-up the process:

  1. If your collection is not hosted on a server which allows inbound internet traffic, you will have to restore the collection on an internet inbound/outbound enabled server or virtual machine. It is recommended to host the server/virtual machine in the same region where you are planning to host the migrated VSTS account. Please refer to the migration guide for the setup options.

 

  2. Collection database backup: Split the collection backup into multiple files (16 to 24 files) and use the MAXTRANSFERSIZE hint. This approach reduced total backup duration by 75% for our collection database.

 

Example:

BACKUP DATABASE [Tfs_DefaultCollection] TO
DISK = N'H:\MSSQLBAK\Tfs_DefaultCollection_1.bak',
DISK = N'H:\MSSQLBAK\Tfs_DefaultCollection_2.bak',
DISK = N'H:\MSSQLBAK\Tfs_DefaultCollection_3.bak',
DISK = N'H:\MSSQLBAK\Tfs_DefaultCollection_4.bak',
...
DISK = N'H:\MSSQLBAK\Tfs_DefaultCollection_23.bak',
DISK = N'H:\MSSQLBAK\Tfs_DefaultCollection_24.bak'
WITH NOFORMAT, NOINIT, NAME = N'Tfs_DefaultCollection-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION, MAXTRANSFERSIZE = 4194304, STATS = 5
GO
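
On the destination VM, the corresponding striped restore references the same set of files; a hedged sketch that mirrors the backup example above:

RESTORE DATABASE [Tfs_DefaultCollection] FROM
DISK = N'H:\MSSQLBAK\Tfs_DefaultCollection_1.bak',
DISK = N'H:\MSSQLBAK\Tfs_DefaultCollection_2.bak',
-- DISK entries 3 through 23 omitted for brevity
DISK = N'H:\MSSQLBAK\Tfs_DefaultCollection_24.bak'
WITH STATS = 5  -- add WITH MOVE clauses if the data/log paths differ on the VM
GO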

 

  3. Use AzCopy to transfer the backup to Azure blob storage / the VM: You can get optimal performance when copying the database files to Azure blob storage or the VM by using the AzCopy utility.

 

  4. Initiate multiple instances of AzCopy simultaneously: Running multiple instances of AzCopy in parallel (for different file uploads) helps speed up uploading/downloading files to/from blob storage. You will have to use the /Z switch to run parallel instances of AzCopy.

 

Example: AzCopy /Source:H:\MSSQLBAK /Dest:https://<storage account>.blob.core.windows.net/tfsMigration /DestSAS:"<SAS>" /Pattern:"File.bak" /S /Z:<unique folder for azcopy journal file>

 

Summary:

In addition to small collections, you can migrate large collections to VSTS and take advantage of the hosted service.

 

Special thanks to Rogan Ferguson for review & contribution.

Seamless migration of MySQL apps to Azure Database for MySQL with minimum downtime


At the recent Microsoft Build 2018 conference, we announced that one of the key themes in the Data Modernization pillar is to showcase how easy it is to migrate existing apps to the cloud. Migrating your existing infrastructure, platform, services, databases, and applications to the cloud is much easier and more seamless today with the tools and guidance that Microsoft provides to help you migrate and modernize your applications.

To this end, you should definitely log in to the Microsoft Build Live site and take a look at the videos associated with the Playlist: Migrate existing apps to the cloud, which contains a great demo and sessions that discuss migrating applications and databases to the cloud.

One of the tools that Microsoft provides to our customers for migrating different database engines (including SQL Server, Oracle, MySQL, and PostgreSQL) to the cloud is Azure Database Migration Service (DMS). One cool feature that we introduced for MySQL migrations is the continuous sync capability, which limits the amount of downtime incurred by the application. DMS performs an initial load of your on-premises database to Azure Database for MySQL, and afterward continuously syncs any new transactions to Azure while the application remains running.

When the data catches up on the target Azure side, stop the application for a brief moment (the minimum downtime), wait for the last batch of data (from the time you stop the application until it is effectively unavailable to take any new traffic) to catch up on the target, and then simply update your connection string to point to Azure. There you have it: your application is now live on Azure!

I delivered the session Easily migrate MySQL/PostgreSQL apps to Azure managed service, with a demo showing how to migrate MySQL apps to Azure Database for MySQL.

DMS migration of MySQL sources is currently in preview. If you would like to try out the service to migrate your MySQL workloads, please contact the Azure DMS Feedback alias expressing your interest. We would love to have your feedback to help us further improve the service.

Thanks in advance!

Shau Phang
Senior Program Manager
Microsoft Database Migration Team

Various Power BI, PowerApps and Flow Conferences and Events around the world. 


Various Power BI, PowerApps and Flow Conferences and Events around the world.

If you have an event you want added email me: Chass@microsoft.com

 

D365 Saturday ZURICH SWITZERLAND
ZURICH SWITZERLAND
Fri, Jun 8 to Sat, Jun 9
Power of the Cloud – Power Users Conference
Toronto, Canada
Fri, Jun 22
Dynamics Saturday California/ San Francisco
CALIFORNIA/ SAN FRANCISCO
Sat, Jun 30
 Business Insights Summit Preconference Session
Seattle
Sun, Jul 22
Charlotte, North Carolina World Tour – August 27-29, 2018
North Carolina
Mon, Aug 27
Sydney, Australia – World Tour August 27 – 29, 2018
Sydney, Australia – August 27 – 29, 2018
Tue, Aug 28
 Copenhagen, Denmark World Tour- September 10 – 12, 2018
Mon, Sep 10 to Wed, Sep 12
d365 Dublin
http://365saturday.com/dynamics/dublin-2018/
Sat, Sep 15
 Dallas, Texas World Tour - September 17 - 19, 2018
Mon, Sep 17 to Wed, Sep 19
D365 BUENOS AIRES
BUENOS AIRES 

SQL Saturday San Diego
San Diego California, Sep 22

Sat, Sep 22
 Orlando, Florida Microsoft Ignite 2018 September 24–28, 2018
Mon, Sep 24 to Fri, Sep 28
DataMinds Connect in Belgium.
Belgium.
Mon, Oct 15 to Tue, Oct 16
Phoenix Summit Oct 15-18, 2018
Phoenix Summit Oct 15-18, 2018
Mon, Oct 15 to Thu, Oct 18
D365 GERMANY
GERMANY
Sat, Oct 20
d365 Belgium
http://365saturday.com/dynamics/belgium/#
Sat, Oct 27 to Sun, Oct 28
 Seattle, Washington World Tour – October 29 – 31, 2018
Mon, Oct 29 to Wed, Oct 31
Seattle SQL PASS Nov 6-9th
Seattle SQL PASS Nov 6-9th
Tue, Nov 6 to Fri, Nov 9
D365 Paris
Paris
Sat, Nov 10
Dubai, United Arab Emirates World Tour – November 12 – 13, 2018
Dubai, United Arab Emirates World Tour – November 12 – 13, 2018
Mon, Nov 12 to Tue, Nov 13
D365 Singapore
Singapore
Sat, Nov 17
D365 Japan
Japan
Sat, Nov 24
D365 New Zealand
New Zealand
Sat, Dec 1
Tampa SQL Live 1st week in December
Tampa SQL Live 1st week in December
Sun, Dec 2 to Fri, Dec 7

 

Use Azure policy Service to manage Azure Resources and stay compliant with corporate standards


Azure Policy service can be used to implement rules that help organizations stay compliant when deploying and configuring resources in Azure. This sample implements a rule that ensures that no compute resources in Azure, such as virtual machines, are deployed without the mandatory tags included in the provisioning request. The tag names used are the 'Cost Center Name' to attribute the charge to, and the 'Service Name' of the application that will be hosted in the virtual machine.

Using an Azure Policy service policy ensures that these rules are honored irrespective of how the resource is provisioned, be it via an ARM template, the Azure portal, the CLI, or the REST API.

Policy Definition

The JSON document (VMTagsPolicy.json) representing the policy definition is available in the GitHub repository accompanying this post. The screenshot below shows the policy definition.

Refer to the Azure documentation for the steps to define a Policy and assign it - here  and here .

In the policy rule defined above, if either of the tags is not specified in the request, the provisioning request gets denied. The tag values are also validated to ensure that they are in the list of allowed Cost Center and Service Name values that were specified in the policy assignment.

The tag values are parameterized, and allowed values for Cost Center and Service Names bound to a predetermined set of values in the JSON definition.
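
The screenshot and VMTagsPolicy.json in the repository carry the actual definition; as a rough sketch of the general shape of such a tag-enforcement rule (the tag names, parameter name, and exact conditions here are illustrative, not copied from the repository):

{
  "if": {
    "anyOf": [
      { "field": "tags['CostCenter']", "exists": "false" },
      { "field": "tags['ServiceName']", "exists": "false" },
      { "not": { "field": "tags['CostCenter']", "in": "[parameters('allowedCostCenters')]" } }
    ]
  },
  "then": { "effect": "deny" }
}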

Policy Assignment

In this example, the Policy is assigned to a specific Resource group in the current Azure Subscription, so that the Policy gets applied only to this scope. The allowed values for the Cost Center codes in this assignment are  "Cost Center 1" and "Cost Center 2". (See screenshot below)

Validating this Policy by provisioning a VM using different options

1) Using CLI

az vm create --resource-group azpolicyrg --name azpolicyvm1 --image UbuntuLTS --admin-username onevmadmin --admin-password Pass@word123 --debug

 

 

The request above fails since the tags were missing in the request.

The next request below fails since the values set for the tags did not conform to the allowed values specified in the Policy assignment defined in the previous steps.
az vm create --resource-group azpolicyrg --name azpolicyvm1 --image UbuntuLTS --admin-username onevmadmin --admin-password Pass@word123 --tags CostCenter="Cost Center 3" ServiceName="Service 1" --debug

The request below includes all the mandatory tags and the allowed values as set in the Policy definition, hence it succeeds and the VM gets provisioned.
az vm create --resource-group azpolicyrg --name azpolicyvm1 --image UbuntuLTS --admin-username onevmadmin --admin-password Pass@word123 --tags CostCenter="Cost Center 2" ServiceName="Service 1" --debug

2) Using an ARM Template
The ARM template (SimpleVmJson.json) used here is uploaded to the GitHub repository referred to in this article.
Selecting a wrong value for the Cost Center code ('Cost Center 3', selected in the ARM template, is not among the list of allowed values in the policy assignment created in the previous steps) fails the resource provisioning request. See the screenshot below.

3) Using the Azure portal to create a VM will not succeed, since the wizard does not provide an option to specify tags. However, when a user edits the tags on a VM that already exists, the policy validation kicks in and ensures that any changes that violate the policy are disallowed.
In the screenshot below, setting a value different from those in the policy definition, or deleting the 'Cost Center' tag, and selecting 'Save' errors out, citing the policy violation.

While the rule action in this policy definition is set to 'deny' when the validation fails, so the VM provisioning fails, the rule action could instead be set to 'audit' so that the provisioning request succeeds while the violations are written to the audit log and surfaced in the compliance dashboard. An organization could then take corrective action manually, at its convenience.

Scenario 2:
Azure Storage now provides the option to associate a VNet service endpoint with a storage account, which ensures that only services deployed in that subnet have access to the storage account.

The policy definition below implements this rule, whereby only requests to provision a storage account that has a VNet service endpoint configured are permitted; otherwise the action is set to 'deny' the request. See the screenshot below for the policy definition. The policy definition file, StorageSecurityCompliance.json, is available in the GitHub location accompanying this article.

 

 

 

Universal Resource Scheduling – Requirement Calendar


Applies to: Field Service for Dynamics 365 v 6.1 and above, Project Service Automation for Dynamics 365 v 1.1 and above, and Universal Resource Scheduling (URS) solution on Dynamics 365 8.2x and 9.0x

 

In this blog post, we will review the requirement calendar’s role in creating requirement details, along with the time zone that the pop-out schedule board loads in when using the “book” button.

When creating a resource requirement, each requirement record is associated with a calendar. On the requirement form, there is a “modify calendar” option on the ribbon bar, which allows you to modify the calendar for the requirement.

 

 

Image pointing to the modify calendar button on the ribbon


 

 

Image showing modify calendar screen


 

This calendar drives two behaviors in URS:

1.  Creation of requirement details:

 

When creating a new requirement, there are several “allocation methods” you can choose from.

 

 

Image showing allocation method options on the requirement form


 

The calendar has no applicability if I select “none” as the allocation method. However, the allocation methods “Full capacity,” “Percentage capacity,” “Distribute evenly,” and “Front load” all drive logic that contours the requirement duration into many “requirement detail” records. For example, here I will create a requirement and set the allocation method to full capacity from June 1st until June 29th.

 

You will notice if you navigate to the requirement details related sub-grid, there are requirement detail records generated for 8 hours (480 minutes), with a start time of 9:00 AM. There is a record for most days, but you will notice that it skips June 2nd and 3rd but resumes on the 4th.

 

Image showing generated requirement details


 

 

This is because the requirement details are generated using the requirement calendar. Since the requirement calendar is 9 to 5, Monday through Friday, each requirement detail spans 8 hours starting at 9 AM each day, in the time zone set by the requirement calendar. Since June 2nd is a Saturday and June 3rd is a Sunday, there are no requirement details generated for these days since the requirement calendar has both days set as non-work days.

 

Image showing requirement calendar


 

 

At the time this article was published, these requirement details are created automatically when a requirement is created, based on the allocation method and the requirement calendar. After they are generated, you can always change the requirement details by using the “Specify Pattern” experience, or by interacting with the requirement details entity directly; you can’t, however, regenerate the requirement details using the logic as if it were initially created.

 

Now that we explored one way the requirement calendar has an impact, let's explore the other way.

 

2.  Book button pop-out schedule board:

 

When scheduling from a form or view (as opposed to starting from within the schedule board itself), a pop-out schedule board is launched.

 

 

Image of “Book” button on ribbon of requirement on requirement form


 

 

 

Image of “Book” on ribbon of requirement view with one requirement selected


 

 

When clicking this book button, the pop-out schedule board loads with scheduling options. But what time zone is the board displayed in? You were not in context of a schedule board before, so we have the ability to launch the Schedule Board in any time zone. Therefore, we load the schedule board in the time zone of the requirement calendar with the logic that you’re likely to want to view the schedule board in context of your requirement’s time zone, since that’s likely to be your customer’s or resource’s time zone.

 


Image of Pop Out Schedule Board showing the time zone as Eastern Time, Same as Requirement Calendar

 

On a side note, the order in which we set the time zone of the schedule board is:

  • Requirement calendar
  • If there is somehow no requirement calendar, or there is no time zone on the requirement calendar, the schedule board will load in the default schedule board's time zone.

Now that we have explored where the requirement calendar has an impact, let's explore how the requirement calendar is configured, along with its schema.

  • When creating a new requirement, there is a field on the requirement entity (hidden on the form by default) where you can set a work hours template (msdyn_workhourtemplate). If you set a value in this field, the requirement calendar will be based on this work hours template. As a result, the requirement details will be generated according to the template, and the pop-out schedule board will be loaded in the time zone for this template (since those values are set on the requirement calendar). This gives system customizers a way to build business logic where they can set the work hours template field based on whatever logic they choose, through workflows, plugins, etc.

 

 

Image of resource requirement form with work hours template field displayed


 

  • If you do not set the work hours template field on the requirement, the calendar is automatically created as 9 to 5, Monday through Friday, in the time zone defined by the user’s personalization settings.

Image of time zone setting in Dynamics 365 personalization settings


 

  • Once a requirement is created and a calendar is created, the requirement is associated to its unique calendar through an attribute on the requirement entity called “Calendar ID” (msdyn_calendarid). This attribute is a string field and contains the guid of the Calendar instance for that requirement. This field is also not exposed on the form by default. Now that you know how the requirement is related to its calendar, customizers have more power to interact with the requirement calendar.

 

Image of Calendar ID field exposed on the requirement form


 

 

 

Happy scheduling!

Dan Gittler

Principal Program Manager, Dynamics 365 Engineering

 


Microsoft Disaster Response: Another great reason to work for Microsoft


This post is provided by App Dev Manager Rich Maines who shares his experience volunteering with Microsoft Services Disaster Response.


“When a disaster occurs, you’ve entered a “new reality,” says Lewis Curtis, director of Microsoft Services Disaster Response. It won’t be enough to simply restore your systems and applications to where they were before – disasters change everything, irrevocably and permanently.”

Catastrophic events like the San Jose Floods, Hurricane Harvey, Hurricane Irma & Maria, Mexico Earthquake, California Wildfires, Cape Town Water Crisis are recent examples that illustrate the need for rapid, coordinated action during times of crisis. Microsoft Services Disaster Response is an organization within Microsoft, designed to help address both immediate needs and long-term objectives by delivering technologies, Information and Communication Technology (ICT) expertise, partnership resources and volunteer support to help members of the disaster response community boost their operational effectiveness.



I had never even heard of MSDR until last September. As MSDR was mobilizing support for Hurricane Harvey recovery efforts, two additional category 5 storms, Hurricane Irma and Hurricane Maria, devastated Puerto Rico and the surrounding island nations. The immediate flooding, the persistent power outages and the destroyed infrastructure significantly delayed the arrival and dispersal of supplies and aid. It was then that I saw an internal ‘call to action’ soliciting volunteers within Microsoft to assist with various missions related to Hurricane Irma and Maria. Inspired by the heroics of my colleagues, I decided to join “a mission or two” as a Mission Coordinator.

As the missions kept rolling in, what stood out most during mobilization after mobilization was the dedication and commitment of volunteers in helping others. Most of us had never heard of MSDR or Microsoft’s ability to help in a crisis. However, employees quickly adapted, collaborated, learned, and became seasoned veterans to quickly help more communities in their time of need. This meant overcoming unforeseen challenges, improving processes, and mentoring others to help new response teams help more communities affected by new disasters. These simple and selfless acts, sharing knowledge, offering guidance, and giving more are the reasons I absolutely love working for Microsoft.

Lewis Curtis likes to frequently remind us: “If you are doing it alone, you are doing it wrong.” This is true for all of us. Microsoft’s commitment to leveraging our capabilities and assets to address human suffering are the droids you’ve been looking for.

Most of the work we do as ADMs here in Premier Support for Developers is focused on helping customers succeed with Microsoft technology.  As part of Microsoft, sometimes those skills and technologies are called to do even more in the wake of a natural disaster or catastrophic event.  If you share a passion for technology and helping others, Microsoft is a wonderful place to put those skills to work. Join. Our. Team.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

GUID Table for Windows Azure Active Directory Permissions


Introduction

This blog is meant to help users who need the Windows Azure Active Directory (WAAD) permission globally unique identifiers (GUIDs) in order to create AAD applications using the Microsoft Graph API, or who otherwise just need the GUID for a certain WAAD permission. For further information regarding AAD permissions, please refer to this blog post: https://blogs.msdn.microsoft.com/aaddevsup/2018/05/21/finding-the-correct-permissions-for-a-microsoft-or-azure-active-directory-graph-call/

 

Note: these GUIDs are subject to change in the future and may not stay the same.

Table

The Resource App ID for the Windows Azure Active Directory is : 00000002-0000-0000-c000-000000000000

GUID of Permission (Type): Permission
5778995a-e1bf-45b8-affa-663a9f3f4d04 (Role): Read directory data
abefe9df-d5a9-41c6-a60b-27b38eac3efb (Role): Read and write domains
78c8a3c8-a07e-4b9e-af1b-b5ccab50a175 (Role): Read and write directory data
1138cb37-bd11-4084-a2b7-9f71582aeddb (Role): Read and write devices
9728c0c4-a06b-4e0e-8d1b-3d694e8ec207 (Role): Read all hidden memberships
824c81eb-e3f8-4ee6-8f6d-de7f50d565b7 (Role): Manage apps that this app creates or owns
1cda74f2-2616-4834-b122-5cb1b07f8a59 (Role): Read and write all applications
aaff0dfd-0295-48b6-a5cc-9f465bc87928 (Role): Read and write domains
a42657d6-7f20-40e3-b6f0-cee03008a62a (Scope): Access the directory as the signed-in user
5778995a-e1bf-45b8-affa-663a9f3f4d04 (Scope): Read directory data
78c8a3c8-a07e-4b9e-af1b-b5ccab50a175 (Scope): Read and write directory data
970d6fa6-214a-4a9b-8513-08fad511e2fd (Scope): Read and write all groups
6234d376-f627-4f0f-90e0-dff25c5211a3 (Scope): Read all groups
c582532d-9d9e-43bd-a97c-2667a28ce295 (Scope): Read all users' full profiles
cba73afc-7f69-4d86-8450-4978e04ecd1a (Scope): Read all users' basic profiles
311a71cc-e848-46a1-bdf8-97ff7156d8e6 (Scope): Sign in and read user profile
2d05a661-f651-4d57-a595-489c91eda336 (Scope): Read hidden memberships
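
For context, these GUIDs are what go into the resourceAccess entries of an application's requiredResourceAccess section, whether you edit the app manifest or create the application programmatically. A hedged sketch using the 'Sign in and read user profile' delegated permission from the table above:

"requiredResourceAccess": [
  {
    "resourceAppId": "00000002-0000-0000-c000-000000000000",
    "resourceAccess": [
      { "id": "311a71cc-e848-46a1-bdf8-97ff7156d8e6", "type": "Scope" }
    ]
  }
]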

 

Conclusion

If you have any more issues in regards to this, please file a support ticket and one of our support engineers will reach out to you to resolve the issue. Please include a Fiddler trace of a repro of the issue, as well as a summary of the expected behavior versus the current behavior.

Site-to-Site VPN between pfSense Firewall and Azure using BGP


Site-to-Site VPN between pfSense and Azure with BGP to allow dynamic discovery of your networks

This post explains how to set up a VPN connection from an open-source pfSense firewall to Azure. We will use BGP running on top of the IPsec VPN tunnel to enable our local network and Azure to dynamically exchange routes. This removes the burden of having to declare manually on your VPN gateways which subnets you want to advertise to the other end.

The first thing to bear in mind is that you cannot have overlapping IP addresses between the LAN side of the firewall and the VNET address space. My home router sits on 192.168.0.0/24 and the pfSense WAN port is connected to the home router. The firewall has a LAN address space of 192.168.1.0/24 and has a PC connected to its LAN port.

Parameters to fill - Values
My Home Router Public IP - 1.2.3.4
LAN subnet behind pfSense (Local VPN Gateway) - 192.168.1.0/24
Azure VNET Address Space - 10.11.0.0/16
Azure VNET VM Subnet - 10.11.0.0/24
Azure VNET Gateway Subnet - 10.11.3.0/24
Azure VPN Gateway Public IP - 23.97.137.42
Azure VPN Type - Route-Based
Azure VPN BGP ASN - 65515
Azure Gateway Type - VPN
Azure Local Network Gateway Name - LocalVPN-pfSense
Azure Local Network Gateway BGP peer address - 192.168.1.1
Azure Local Network Gateway BGP ASN - 65501
Azure VPN Connection Name - VPN-conn2pfSense
Azure VPN Shared Key - mySuperSecretKey123

We will start by creating a virtual network (again, make sure the address space you enter doesn't overlap with the space on your local network)

image of vnet

Followed by the gateway subnet (I decided to use /24 to keep the same subnetting scheme but the recommendation from Microsoft is to use a /27 or /28 for the gateway subnet)

image_of_gwsubnet

Next, we will create the virtual network gateway. We will choose to create a new public IP address. Also, we will use BGP to exchange routes between Azure and the pfSense firewall, so we need to check the BGP option when creating the gateway. We will use a private BGP ASN of 65515

image_of_vpn-gw

You will find the BGP peer address on your VPN Gateway. This is the local address that BGP will use in your Azure VPN Gateway to initiate a BGP connection to your home gateway

image_of_bgppeer

Now we are going to create the Local Network Gateway. Azure refers to the VPN device that sits in your home network. You will need to indicate the BGP peer address, your local network behind the Firewall (or local VPN gateway) and a Private BGP ASN (I am using 65501)

image_of_local-gw

Once the local gateway is created, we will define a connection to our home VPN gateway. We will use a private shared key to bring the IPsec VPN up. Remember to set BGP to 'enabled' on your connection. This is what it looks like when the connection is up and running (assuming at this point you have done the equivalent on the other end)

image_of_connection

Now, moving to the other end, we will use the web UI on the pfSense firewall to work on the rules and VPN settings. To configure a new tunnel, a new Phase 1 IPsec VPN must be created. Remote Gateway will be the public IP address assigned to my virtual network gateway in Azure. Leave 'auto' as the IKE key exchange version, and select WAN as the interface to run the VPN on. For the authentication part, use the pre-shared key you have defined. Use the encryption algorithm you need, in my case AES (256 bits), plus the DH group and hashing algorithm

image_of_phase1

We will then move to Phase 2. This phase is what builds the actual tunnel, sets the protocol to use, and sets the length of time to keep the tunnel up when there is no traffic. For the remote network, use the VNET address space. The local subnet will be the address space on the LAN side of the pfSense

image_of_phase2

Apply changes and go to IPSEC Status

image_of_ipsec-status

You will need to create a rule to permit IPsec traffic coming through your WAN interface.

I have also opened TCP port 179 with a rule on the IPsec interface to permit incoming BGP connections from Azure.

image_of_ipsec-rule

Now, in order to use BGP on pfSense you will need to install OpenBGPD through the Package Manager. We will use BGP peer groups to define the BGP ASN of the Azure peer

image_of_bgp-group

With BGP, you only need to declare a minimum prefix to a specific BGP peer over the IPsec S2S VPN tunnel. It can be as small as a host prefix (/32) of the BGP peer IP address of your on-premises VPN device. The point of using BGP over VPN is that you can dynamically control which on-premises network prefixes you want to advertise to Azure, and so which ones your Azure virtual network is allowed to access

My BGP settings are the following:

image_of_bgp-settings

BGP neighbor will be the IP address of the Virtual Gateway on Azure, in my case with IP address 10.11.3.254

image_of_bgp-neighbor

You can also visualize the whole BGP raw config in pfSense

image_of_bgp-rawconfig
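
For reference, the corresponding raw OpenBGPD configuration looks roughly like the sketch below, built from the values in the table at the top of this post rather than copied verbatim from my firewall:

AS 65501
fib-update yes
network 192.168.1.0/24

group "Azure" {
    remote-as 65515
    neighbor 10.11.3.254 {
        descr "Azure VPN Gateway"
    }
}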

Finally, you will be able to see the BGP session coming up after a few minutes

image_of_bgp-status1

image_of_bgp-status2

To test this, you can simply ping from a computer on the LAN side of the pfSense (192.168.1.0/24) to a VM in Azure on the VNET address space (10.11.0.0/16), and that should work! 🙂

image_of_ping

06/04: Errata added for [MS-HGSA]: Host Guardian Service: Attestation Protocol

06/04: Errata added for [MS-FAX]: Fax Server and Client Remote Protocol
