
A second update to the problem of email forwarding in Office 365


18 months ago, I wrote the following blog post: Why does my email from Facebook, that I forward from my outlook.com account, get rejected.

6 (ish) months ago, I provided an update at An update on the forwarding email problem in Office 365 where I said that we made a change such that Exchange Transport Rules (ETRs) will work with forwarding; that is, the original message content is retained so that a message signed with DKIM will pass at the forwarded-to destination. However, mailbox forwarding still did not work at that point.

Today, I am announcing another update - Office 365 mailbox forwarding now works! That is, if you go into your Office 365 web portal and navigate to Settings > Mail > Accounts > Forwarding and start forwarding your email, then a message that passes DKIM/DMARC at Office 365 will also pass at the forwarded-to destination.

I tested it out myself: I sent a message from my Gmail account to my Office 365 account, forwarding to my Hotmail account. It passed DKIM/DMARC, whereas a year ago it would have failed. I then sent from my corporate Microsoft account through my personal Office 365 account, forwarding to Gmail. It passed DKIM/DMARC again!

So that part is now working with respect to forwarding.

However, forwarding via inbox rules does not work. That is, messages forwarded through Settings > Mail > Automatic processing > Inbox and sweep rules will not pass DKIM/DMARC at the forwarded-to destination.

So, if you want to forward your email out of Office 365, my recommendation is to use mailbox forwarding, and not inbox rules. You could always forward the message as an attachment, but that makes it look like a manual forward and usually that's not what you want to do.

Oh, one more thing.

You may have noticed that I've said "Office 365" in this blog post when I've talked about forwarding, but not Outlook.com. That's because Outlook.com's mailbox forwarding does not yet work. We still have to work on that.

But this is a good step forward and should help relieve a chunk of issues our users have been experiencing.


Skype for Business Rate My Call with Power BI Analysis


When you are running a Skype for Business Online environment, the Call Quality Dashboard (CQD) lets you gain insight into call quality through metrics and network measurements derived from call metadata.

The context and the actual user experience are collected through Rate My Call (RMC). RMC data is not currently included in any of the existing monitoring reports, nor does it have a dedicated report yet; the data is collected in SQL tables that can be accessed by running SQL queries (built-in reports are a work in progress).

While we wait for the official Rate My Call reports from Microsoft, there is a way to produce our own customized reports with the tools available today. In this post we will create these reports in Power BI and use Azure Cognitive Services to analyze user sentiment.

 

This report shows user sentiment from RMC data, analyzed with Azure Cognitive Services in Power BI.

This report shows the percentage of subnets (resolved from tenant building data) where users complained, broken down by rating.

*Since a VPN subnet in CQD shows as one IP per subnet, it's impossible to group them as one region. In this post we will reverse-resolve them based on the building data we have, so VPN IPs show up under their respective subnet names.

Requirements:

1- RMC is enabled and users are giving feedback

2- A dump of the RMC data (explained below)

3- Power BI Desktop and an active account

4- * (Optional) A Microsoft Azure account. Start a free trial or sign in.

5- * (Optional) An access key for Text Analytics. Sign up, then get your key.

  • * Only required for the Text Analytics report; you can skip these and still use the other reports.

The reports that we are going to produce are the following:

1- Reports on who gave the lowest star ratings

2- Reports on who submitted the most feedback

3- Correlation between feedback, star ratings, and subnets

4- Anything else you can creatively build from the data the RMC provides

 

So let's begin with the RMC data dump. Thanks to Jason Shave, who wrote a script that extracts the RMC data (https://www.powershellgallery.com/packages/CxdCallData/1.1.6.1).

This script will dump all your RMC data into CSV files. I found the best way to utilize it is to do an initial seeding/dump covering everything since the Rate My Call policy was first applied to your users. That will take a long time depending on your data; in my case, 7,000 users over 3 months took about 14 hours to complete.

To avoid that going forward, you can edit the downloaded module to automatically fill in your credentials, run it as a scheduled task every week, and fetch only the last 7 days of data, which takes at most a few hours each week.
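For the initial seeding, the call looks something like this (a sketch; the 90-day window is an assumption, so set it to cover the period since your RMC policy was applied):

# Initial seeding/dump covering roughly 3 months of RMC data
Get-CxdCallData -ReportSavePath c:\SfBReports -NumberOfDaysToSearch 90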

When you install CxdCallData, it deploys the files to the folder C:\Program Files\WindowsPowerShell\Modules\CxdCallData\1.1.6.0. You can then edit the line

$global:credential = Get-Credential -Message "Authenticate to Skype for Business Online"

to :

$O365_pass = cat 'C:\mypasswordfolder\Office365_securepass.txt' | ConvertTo-SecureString
$global:credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "USERNAME@TENANT.onmicrosoft.com",$O365_pass
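This assumes the password has already been saved as an encrypted standard string. A minimal sketch to create that file (run it once as the same user that will run the scheduled task, since the encryption is tied to that user's profile):

# Prompt for the password once and store it encrypted on disk
Read-Host "Office 365 password" -AsSecureString | ConvertFrom-SecureString | Out-File 'C:\mypasswordfolder\Office365_securepass.txt'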

After that, I created a script that builds the list of users based on their RMC policy, collects them in a CSV file, and then calls the module and passes it that file. The script is added to Task Scheduler to run every 7 days:

Import-Module LyncOnlineConnector
# Build the credential from the encrypted password file
$O365_pass = cat 'C:\Folder Path\Office365_securepass.txt' | ConvertTo-SecureString
$myO365cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "USERNAME@TENANT.onmicrosoft.com",$O365_pass
# Date-stamped CSV file that will hold the users with an RMC policy
$filename = "C:\Folder Path\" + (Get-Date -Format d).Replace("/","-") + "users.csv"
$CSSession = New-CsOnlineSession -Credential $myO365cred
Import-PSSession $CSSession -AllowClobber
# Export every user whose client policy matches RateMyCallPolicy
Get-CsOnlineUser | ? {$_.ClientPolicy -match "RateMyCallPolicy"} | Select-Object UserPrincipalName | Export-Csv $filename
Remove-PSSession $CSSession

# Collect the last 7 days of RMC data for those users
Get-CxdCallData -ReportSavePath c:\SfBReports -NumberOfDaysToSearch 7 -CsvFileWithUsers $filename

You can export the finished task from Task Scheduler as XML to reuse it on other machines.
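If you prefer to create the weekly task directly from PowerShell, here is a minimal sketch (the task name, run time, and script path are assumptions):

# Run the collection script (path is hypothetical) every Sunday at 1 AM
$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File "C:\Folder Path\RmcWeeklyDump.ps1"'
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 1am
Register-ScheduledTask -TaskName 'RMC Weekly Dump' -Action $action -Trigger $trigger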

As you can see, it collects all reports into c:\SfBReports. Copy only the SFB-UserFeedback-YYYY-MM-DD HH-MM-SS.CSV files to a separate folder; let's call it c:\SfBReports\RMC.

Now we need to aggregate all the data into one file. We will do this with an Excel query.

  

In the Advanced Editor, paste the following query:

let
    Source = Folder.Files("C:\SfBReports\RMC"),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Content", type binary}}),
    Content = #"Changed Type"[Content],
    #"Converted to Table" = Table.FromList(Content, Splitter.SplitByNothing(), null, null, ExtraValues.Error)
in
    #"Converted to Table"

Then do the following steps:

Select the last step in the Applied Steps list and click the expand button in the column header.

Click OK.

Select the DiagID column, then choose Home > Remove Rows > Remove Duplicates.

Rename the query to RMC, then Close & Load.

You now have a single Excel sheet that concatenates all the RMC user feedback CSV reports from the specified folder. Make sure the folder contains only those reports and nothing else; you can keep adding reports to it and refresh the query to aggregate all files and pick up the new records.
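For reference, the steps above correspond roughly to the following M (a sketch of an equivalent query, not the exact code the UI generates):

let
    Source = Folder.Files("C:\SfBReports\RMC"),
    // parse each CSV file and promote its first row to headers
    Parsed = Table.AddColumn(Source, "Data", each Table.PromoteHeaders(Csv.Document([Content]))),
    // concatenate all the per-file tables into one
    Combined = Table.Combine(Parsed[Data]),
    // drop duplicate feedback records
    Deduped = Table.Distinct(Combined, {"DiagID"})
in
    Deduped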

Now that we have all the results aggregated, we need to add some information that will help us relate the reports to our network.

First we will add the functions that can calculate subnets to the Excel workbook. Start by pressing ALT+F11 to open the VBA editor,

then insert a new module:

Then paste the following functions:

' Copyright 2010-2017 Thomas Rohmer-Kretz

' This program is free software: you can redistribute it and/or modify
' it under the terms of the GNU General Public License as published by
' the Free Software Foundation, either version 3 of the License, or
' (at your option) any later version.

' This program is distributed in the hope that it will be useful,
' but WITHOUT ANY WARRANTY; without even the implied warranty of
' MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
' GNU General Public License for more details.

' You should have received a copy of the GNU General Public License
' along with this program. If not, see <http://www.gnu.org/licenses/>.

' http://trk.free.fr/ipcalc/

' Visual Basic for Excel

'==============================================
' IP v4
'==============================================

'----------------------------------------------
' IpIsValid
'----------------------------------------------
' Returns true if an ip address is formatted exactly as it should be:
' no space, no extra zero, no incorrect value
Function IpIsValid(ByVal ip As String) As Boolean
 IpIsValid = (IpBinToStr(IpStrToBin(ip)) = ip)
End Function

'----------------------------------------------
' IpStrToBin
'----------------------------------------------
' Converts a text IP address to binary
' example:
' IpStrToBin("1.2.3.4") returns 16909060
Function IpStrToBin(ByVal ip As String) As Double
 Dim pos As Integer
 ip = ip + "."
 IpStrToBin = 0
 While ip <> ""
 pos = InStr(ip, ".")
 IpStrToBin = IpStrToBin * 256 + Val(Left(ip, pos - 1))
 ip = Mid(ip, pos + 1)
 Wend
End Function

'----------------------------------------------
' IpBinToStr
'----------------------------------------------
' Converts a binary IP address to text
' example:
' IpBinToStr(16909060) returns "1.2.3.4"
Function IpBinToStr(ByVal ip As Double) As String
 Dim divEnt As Double
 Dim i As Integer
 i = 0
 IpBinToStr = ""
 While i < 4
 If IpBinToStr <> "" Then IpBinToStr = "." + IpBinToStr
 divEnt = Int(ip / 256)
 IpBinToStr = Format(ip - (divEnt * 256)) + IpBinToStr
 ip = divEnt
 i = i + 1
 Wend
End Function

'----------------------------------------------
' IpSubnetToBin
'----------------------------------------------
' Converts a subnet to binary
' This function is similar to IpStrToBin but ignores the host part of the address
' example:
' IpSubnetToBin("1.2.3.4/24") returns 16909056
' IpSubnetToBin("1.2.3.0/24") returns 16909056
Function IpSubnetToBin(ByVal ip As String) As Double
 Dim l As Integer
 Dim pos As Integer
 Dim v As Integer
 l = IpSubnetParse(ip)
 ip = ip + "."
 IpSubnetToBin = 0
 While ip <> ""
 pos = InStr(ip, ".")
 v = Val(Left(ip, pos - 1))
 If (l <= 0) Then
 v = 0
 ElseIf (l < 8) Then
 v = v And ((2 ^ l - 1) * 2 ^ (8 - l))
 End If
 IpSubnetToBin = IpSubnetToBin * 256 + v
 ip = Mid(ip, pos + 1)
 l = l - 8
 Wend
End Function

'----------------------------------------------
' IpAdd
'----------------------------------------------
' example:
' IpAdd("192.168.1.1"; 4) returns "192.168.1.5"
' IpAdd("192.168.1.1"; 256) returns "192.168.2.1"
Function IpAdd(ByVal ip As String, offset As Double) As String
 IpAdd = IpBinToStr(IpStrToBin(ip) + offset)
End Function

'----------------------------------------------
' IpAnd
'----------------------------------------------
' bitwise AND
' example:
' IpAnd("192.168.1.1"; "255.255.255.0") returns "192.168.1.0"
Function IpAnd(ByVal ip1 As String, ByVal ip2 As String) As String
 ' compute bitwise AND from right to left
 Dim result As String
 While ((ip1 <> "") And (ip2 <> ""))
 Call IpBuild(IpParse(ip1) And IpParse(ip2), result)
 Wend
 IpAnd = result
End Function

'----------------------------------------------
' IpOr
'----------------------------------------------
' bitwise OR
' example:
' IpOr("192.168.1.1"; "0.0.0.255") returns "192.168.1.255"
Function IpOr(ByVal ip1 As String, ByVal ip2 As String) As String
 ' compute bitwise OR from right to left
 Dim result As String
 While ((ip1 <> "") And (ip2 <> ""))
 Call IpBuild(IpParse(ip1) Or IpParse(ip2), result)
 Wend
 IpOr = result
End Function

'----------------------------------------------
' IpXor
'----------------------------------------------
' bitwise XOR
' example:
' IpXor("192.168.1.1"; "0.0.0.255") returns "192.168.1.254"
Function IpXor(ByVal ip1 As String, ByVal ip2 As String) As String
 ' compute bitwise XOR from right to left
 Dim result As String
 While ((ip1 <> "") And (ip2 <> ""))
 Call IpBuild(IpParse(ip1) Xor IpParse(ip2), result)
 Wend
 IpXor = result
End Function

'----------------------------------------------
' IpAdd2
'----------------------------------------------
' another implementation of IpAdd which does not use the binary representation
Function IpAdd2(ByVal ip As String, offset As Double) As String
 Dim result As String
 While (ip <> "")
 offset = IpBuild(IpParse(ip) + offset, result)
 Wend
 IpAdd2 = result
End Function

'----------------------------------------------
' IpComp
'----------------------------------------------
' Compares the first 'n' bits of ip1 and ip2
' example:
' IpComp("10.0.0.0", "10.1.0.0", 9) returns TRUE
' IpComp("10.0.0.0", "10.1.0.0", 16) returns FALSE
Function IpComp(ByVal ip1 As String, ByVal ip2 As String, ByVal n As Integer) As Boolean
 Dim pos1 As Integer
 Dim pos2 As Integer
 Dim mask As Integer
 ip1 = ip1 + "."
 ip2 = ip2 + "."
 While (n > 0) And (ip1 <> "") And (ip2 <> "")
 pos1 = InStr(ip1, ".")
 pos2 = InStr(ip2, ".")
 If n >= 8 Then
 If pos1 <> pos2 Then
 IpComp = False
 Exit Function
 End If
 If Left(ip1, pos1) <> Left(ip2, pos2) Then
 IpComp = False
 Exit Function
 End If
 Else
 mask = (2 ^ n - 1) * 2 ^ (8 - n)
 IpComp = ((Val(Left(ip1, pos1 - 1)) And mask) = (Val(Left(ip2, pos2 - 1)) And mask))
 Exit Function
 End If
 n = n - 8
 ip1 = Mid(ip1, pos1 + 1)
 ip2 = Mid(ip2, pos2 + 1)
 Wend
 IpComp = True
End Function

'----------------------------------------------
' IpGetByte
'----------------------------------------------
' get one byte from an ip address given its position
' example:
' IpGetByte("192.168.1.1"; 1) returns 192
Function IpGetByte(ByVal ip As String, pos As Integer) As Integer
 pos = 4 - pos
 For i = 0 To pos
 IpGetByte = IpParse(ip)
 Next
End Function

'----------------------------------------------
' IpSetByte
'----------------------------------------------
' set one byte in an ip address given its position and value
' example:
' IpSetByte("192.168.1.1"; 4; 20) returns "192.168.1.20"
Function IpSetByte(ByVal ip As String, pos As Integer, newvalue As Integer) As String
 Dim result As String
 Dim byteval As Double
 i = 4
 While (ip <> "")
 byteval = IpParse(ip)
 If (i = pos) Then byteval = newvalue
 Call IpBuild(byteval, result)
 i = i - 1
 Wend
 IpSetByte = result
End Function

'----------------------------------------------
' IpMask
'----------------------------------------------
' returns an IP netmask from a subnet
' both notations are accepted
' example:
' IpMask("192.168.1.1/24") returns "255.255.255.0"
' IpMask("192.168.1.1 255.255.255.0") returns "255.255.255.0"
Function IpMask(ByVal ip As String) As String
 IpMask = IpBinToStr(IpMaskBin(ip))
End Function

'----------------------------------------------
' IpWildMask
'----------------------------------------------
' returns an IP Wildcard (inverse) mask from a subnet
' both notations are accepted
' example:
' IpWildMask("192.168.1.1/24") returns "0.0.0.255"
' IpWildMask("192.168.1.1 255.255.255.0") returns "0.0.0.255"
Function IpWildMask(ByVal ip As String) As String
 IpWildMask = IpBinToStr(((2 ^ 32) - 1) - IpMaskBin(ip))
End Function

'----------------------------------------------
' IpInvertMask
'----------------------------------------------
' returns an IP Wildcard (inverse) mask from a subnet mask
' or a subnet mask from a wildcard mask
' example:
' IpInvertMask("255.255.255.0") returns "0.0.0.255"
' IpInvertMask("0.0.0.255") returns "255.255.255.0"
Function IpInvertMask(ByVal mask As String) As String
 IpInvertMask = IpBinToStr(((2 ^ 32) - 1) - IpStrToBin(mask))
End Function

'----------------------------------------------
' IpMaskLen
'----------------------------------------------
' returns prefix length from a mask given by a string notation (xx.xx.xx.xx)
' example:
' IpMaskLen("255.255.255.0") returns 24 which is the number of bits of the subnetwork prefix
Function IpMaskLen(ByVal ipmaskstr As String) As Integer
 Dim notMask As Double
 notMask = 2 ^ 32 - 1 - IpStrToBin(ipmaskstr)
 zeroBits = 0
 Do While notMask <> 0
 notMask = Int(notMask / 2)
 zeroBits = zeroBits + 1
 Loop
 IpMaskLen = 32 - zeroBits
End Function

'----------------------------------------------
' IpWithoutMask
'----------------------------------------------
' removes the netmask notation at the end of the IP
' example:
' IpWithoutMask("192.168.1.1/24") returns "192.168.1.1"
' IpWithoutMask("192.168.1.1 255.255.255.0") returns "192.168.1.1"
Function IpWithoutMask(ByVal ip As String) As String
 Dim p As Integer
 p = InStr(ip, "/")
 If (p = 0) Then
 p = InStr(ip, " ")
 End If
 If (p = 0) Then
 IpWithoutMask = ip
 Else
 IpWithoutMask = Left(ip, p - 1)
 End If
End Function

'----------------------------------------------
' IpSubnetLen
'----------------------------------------------
' get the mask len from a subnet
' example:
' IpSubnetLen("192.168.1.1/24") returns 24
' IpSubnetLen("192.168.1.1 255.255.255.0") returns 24
Function IpSubnetLen(ByVal ip As String) As Integer
 Dim p As Integer
 p = InStr(ip, "/")
 If (p = 0) Then
 p = InStr(ip, " ")
 If (p = 0) Then
 IpSubnetLen = 32
 Else
 IpSubnetLen = IpMaskLen(Mid(ip, p + 1))
 End If
 Else
 IpSubnetLen = Val(Mid(ip, p + 1))
 End If
End Function

'----------------------------------------------
' IpSubnetParse
'----------------------------------------------
' Get the mask len from a subnet and remove the mask from the address
' The ip parameter is modified and the subnet mask is removed
' example:
' IpSubnetLen("192.168.1.1/24") returns 24 and ip is changed to "192.168.1.1"
' IpSubnetLen("192.168.1.1 255.255.255.0") returns 24 and ip is changed to "192.168.1.1"
Function IpSubnetParse(ByRef ip As String) As Integer
 Dim p As Integer
 p = InStr(ip, "/")
 If (p = 0) Then
 p = InStr(ip, " ")
 If (p = 0) Then
 IpSubnetParse = 32
 Else
 IpSubnetParse = IpMaskLen(Mid(ip, p + 1))
 ip = Left(ip, p - 1)
 End If
 Else
 IpSubnetParse = Val(Mid(ip, p + 1))
 ip = Left(ip, p - 1)
 End If
End Function

'----------------------------------------------
' IpSubnetSize
'----------------------------------------------
' returns the number of addresses in a subnet
' example:
' IpSubnetSize("192.168.1.32/29") returns 8
' IpSubnetSize("192.168.1.0 255.255.255.0") returns 256
Function IpSubnetSize(ByVal subnet As String) As Double
 IpSubnetSize = 2 ^ (32 - IpSubnetLen(subnet))
End Function

'----------------------------------------------
' IpClearHostBits
'----------------------------------------------
' set to zero the bits in the host part of an address
' example:
' IpClearHostBits("192.168.1.1/24") returns "192.168.1.0/24"
' IpClearHostBits("192.168.1.193 255.255.255.128") returns "192.168.1.128 255.255.255.128"
Function IpClearHostBits(ByVal net As String) As String
 Dim ip As String
 ip = IpWithoutMask(net)
 IpClearHostBits = IpAnd(ip, IpMask(net)) + Mid(net, Len(ip) + 1)
End Function

'----------------------------------------------
' IpIsInSubnet
'----------------------------------------------
' Returns TRUE if "ip" is in "subnet"
' example:
' IpIsInSubnet("192.168.1.35"; "192.168.1.32/29") returns TRUE
' IpIsInSubnet("192.168.1.35"; "192.168.1.32 255.255.255.248") returns TRUE
' IpIsInSubnet("192.168.1.41"; "192.168.1.32/29") returns FALSE
Function IpIsInSubnet(ByVal ip As String, ByVal subnet As String) As Boolean
 Dim l As Integer
 l = IpSubnetParse(subnet)
 IpIsInSubnet = IpComp(ip, subnet, l)
End Function

'----------------------------------------------
' IpSubnetMatch
'----------------------------------------------
' Tries to match an IP address or a subnet against a list of subnets in the
' left-most column of table_array and returns the row number
' 'ip' is the value to search for in the subnets in the first column of
' the table_array
' 'table_array' is one or more columns of data
' 'fast' indicates the search mode : BestMatch or Fast mode
' fast = 0 (default value)
' This will work on any subnet list. If the search value matches more
' than one subnet, the smallest subnet will be returned (best match)
' fast = 1
' The subnet list MUST be sorted in ascending order and MUST NOT contain
' overlapping subnets. This mode performs a dichotomic search and runs
' much faster with large subnet lists.
' The function returns 0 if the IP address is not matched.
Function IpSubnetMatch(ByVal ip As String, table_array As Range, Optional fast As Boolean = False) As Integer
 Dim i As Integer
 IpSubnetMatch = 0
 If fast Then
 Dim a As Integer
 Dim b As Integer
 Dim ip_bin As Double
 a = 1
 b = table_array.Rows.Count
 ip_bin = IpSubnetToBin(ip)
 Do
 i = (a + b + 0.5) / 2
 If ip_bin < IpSubnetToBin(table_array.Cells(i, 1)) Then
 b = i - 1
 Else
 a = i
 End If
 Loop While a < b
 If IpSubnetIsInSubnet(ip, table_array.Cells(a, 1)) Then
 IpSubnetMatch = a
 End If
 Else
 Dim previousMatchLen As Integer
 Dim searchLen As Integer
 Dim subnet As String
 Dim subnetLen As Integer
 searchLen = IpSubnetParse(ip)
 previousMatchLen = 0
 For i = 1 To table_array.Rows.Count
 subnet = table_array.Cells(i, 1)
 subnetLen = IpSubnetParse(subnet)
 If subnetLen > previousMatchLen Then
 If searchLen >= subnetLen Then
 If IpComp(ip, subnet, subnetLen) Then
 previousMatchLen = subnetLen
 IpSubnetMatch = i
 End If
 End If
 End If
 Next i
 End If
End Function

'----------------------------------------------
' IpSubnetVLookup
'----------------------------------------------
' Tries to match an IP address or a subnet against a list of subnets in the
' left-most column of table_array and returns the value in the same row based
' on the index_number
' 'ip' is the value to search for in the subnets in the first column of
' the table_array
' 'table_array' is one or more columns of data
' 'index_number' is the column number in table_array from which the matching
' value must be returned. The first column which contains subnets is 1.
' 'fast' indicates the search mode : BestMatch or Fast mode
' fast = 0 (default value)
' This will work on any subnet list. If the search value matches more
' than one subnet, the smallest subnet will be returned (best match)
' fast = 1
' The subnet list MUST be sorted in ascending order and MUST NOT contain
' overlapping subnets. This mode performs a dichotomic search and runs
' much faster with large subnet lists.
' Note: add 0.0.0.0/0 in the array if you want the function to return a
' default value (best match mode only)
Function IpSubnetVLookup(ByVal ip As String, table_array As Range, index_number As Integer, Optional fast As Boolean = False) As String
 Dim i As Integer
 i = IpSubnetMatch(ip, table_array, fast)
 If i = 0 Then
 IpSubnetVLookup = "Not Found"
 Else
 IpSubnetVLookup = table_array.Cells(i, index_number)
 End If
End Function

'----------------------------------------------
' IpSubnetVLookupAreas
'----------------------------------------------
' Same as IpSubnetVLookup except that table_array parameter can be a
' named area containing multiple tables. Use it if you want to search in
' more than one table.
' Doesn't have the 'fast' option.
Function IpSubnetVLookupAreas(ByVal ip As String, table_array As Range, index_number As Integer) As String
 Dim previousMatch As String
 previousMatch = "0.0.0.0/0"
 IpSubnetVLookupAreas = "Not Found"
 For a = 1 To table_array.Areas.Count
 For i = 1 To table_array.Areas(a).Rows.Count
 Dim subnet As String
 subnet = table_array.Areas(a).Cells(i, 1)
 If IpIsInSubnet(ip, subnet) And (IpSubnetLen(subnet) > IpSubnetLen(previousMatch)) Then
 previousMatch = subnet
 IpSubnetVLookupAreas = table_array.Areas(a).Cells(i, index_number)
 End If
 Next i
 Next a
End Function

'----------------------------------------------
' IpSubnetIsInSubnet
'----------------------------------------------
' Returns TRUE if "subnet1" is in "subnet2"
' example:
' IpSubnetIsInSubnet("192.168.1.35/30"; "192.168.1.32/29") returns TRUE
' IpSubnetIsInSubnet("192.168.1.41/30"; "192.168.1.32/29") returns FALSE
' IpSubnetIsInSubnet("192.168.1.35/28"; "192.168.1.32/29") returns FALSE
' IpSubnetIsInSubnet("192.168.0.128 255.255.255.128"; "192.168.0.0 255.255.255.0") returns TRUE
Function IpSubnetIsInSubnet(ByVal subnet1 As String, ByVal subnet2 As String) As Boolean
 Dim l1 As Integer
 Dim l2 As Integer
 l1 = IpSubnetParse(subnet1)
 l2 = IpSubnetParse(subnet2)
 If l1 < l2 Then
 IpSubnetIsInSubnet = False
 Else
 IpSubnetIsInSubnet = IpComp(subnet1, subnet2, l2)
 End If
End Function

'----------------------------------------------
' IpFindOverlappingSubnets
'----------------------------------------------
' this function must be used in an array formula
' it will find in the list of subnets which subnets overlap
' 'SubnetsArray' is single column array containing a list of subnets, the
' list may be sorted or not
' the return value is also a array of the same size
' if the subnet on line x is included in a larger subnet from another line,
' this function returns an array in which line x contains the value of the
' larger subnet
' if the subnet on line x is distinct from any other subnet in the array,
' then this function returns on line x an empty cell
' if there are no overlapping subnets in the input array, the returned array
' is empty
Function IpFindOverlappingSubnets(subnets_array As Range) As Variant
 Dim result_array() As Variant
 ReDim result_array(1 To subnets_array.Rows.Count, 1 To 1)
 For i = 1 To subnets_array.Rows.Count
 result_array(i, 1) = ""
 For j = 1 To subnets_array.Rows.Count
 If (i <> j) And IpSubnetIsInSubnet(subnets_array.Cells(i, 1), subnets_array.Cells(j, 1)) Then
 result_array(i, 1) = subnets_array.Cells(j, 1)
 Exit For
 End If
 Next j
 Next i
 IpFindOverlappingSubnets = result_array
End Function

'----------------------------------------------
' IpSortArray
'----------------------------------------------
' this function must be used in an array formula
' 'ip_array' is a single column array containing ip addresses
' the return value is also a array of the same size containing the same
' addresses sorted in ascending or descending order
' 'descending' is an optional parameter, if set to True the addresses are
' sorted in descending order
Function IpSortArray(ip_array As Range, Optional descending As Boolean = False) As Variant
 Dim s As Integer
 Dim t As Integer
 t = 0
 s = ip_array.Rows.Count
 Dim list() As Double
 ReDim list(1 To s)
 ' copy the IP list as binary values
 For i = 1 To s
 If (ip_array.Cells(i, 1) <> 0) Then
 t = t + 1
 list(t) = IpStrToBin(ip_array.Cells(i, 1))
 End If
 Next i
 ' sort the list with bubble sort
 For i = t - 1 To 1 Step -1
 For j = 1 To i
 If ((list(j) > list(j + 1)) Xor descending) Then
 Dim swap As Double
 swap = list(j)
 list(j) = list(j + 1)
 list(j + 1) = swap
 End If
 Next j
 Next i
 ' copy the sorted list as strings
 Dim resultArray() As Variant
 ReDim resultArray(1 To s, 1 To 1)
 For i = 1 To t
 resultArray(i, 1) = IpBinToStr(list(i))
 Next i
 IpSortArray = resultArray
End Function

'----------------------------------------------
' IpSubnetSortArray
'----------------------------------------------
' this function must be used in an array formula
' 'ip_array' is a single column array containing ip subnets in "prefix/len"
' or "prefix mask" notation
' the return value is also an array of the same size containing the same
' subnets sorted in ascending or descending order
' 'descending' is an optional parameter, if set to True the subnets are
' sorted in descending order
Function IpSubnetSortArray(ip_array As Range, Optional descending As Boolean = False) As Variant
 Dim s As Integer
 Dim t As Integer
 t = 0
 s = ip_array.Rows.Count
 Dim list() As String
 ReDim list(1 To s)
 ' copy the IP list as binary values
 For i = 1 To s
 If (ip_array.Cells(i, 1) <> 0) Then
 t = t + 1
 list(t) = ip_array.Cells(i, 1)
 End If
 Next i
 ' sort the list with bubble sort
 For i = t - 1 To 1 Step -1
 For j = 1 To i
 Dim m, n As Double
 m = IpStrToBin(list(j))
 n = IpStrToBin(list(j + 1))
 If (((m > n) Or ((m = n) And (IpMaskBin(list(j)) < IpMaskBin(list(j + 1))))) Xor descending) Then
 Dim swap As String
 swap = list(j)
 list(j) = list(j + 1)
 list(j + 1) = swap
 End If
 Next j
 Next i
 ' copy the sorted list as strings
 Dim resultArray() As Variant
 ReDim resultArray(1 To s, 1 To 1)
 For i = 1 To t
 resultArray(i, 1) = list(i)
 Next i
 IpSubnetSortArray = resultArray
End Function

'----------------------------------------------
' IpParseRoute
'----------------------------------------------
' this function is used by IpSubnetSortJoinArray to extract the subnet
' and next hop in route
' the supported formats are
' 10.0.0.0 255.255.255.0 1.2.3.4
' 10.0.0.0/24 1.2.3.4
' the next hop can be any character sequence, and not only an IP
Function IpParseRoute(ByVal route As String, ByRef nexthop As String)
 slash = InStr(route, "/")
 sp = InStr(route, " ")
 If ((slash = 0) And (sp > 0)) Then
 temp = Mid(route, sp + 1)
 sp = InStr(sp + 1, route, " ")
 End If
 If (sp = 0) Then
 IpParseRoute = route
 nexthop = ""
 Else
 IpParseRoute = Left(route, sp - 1)
 nexthop = Mid(route, sp + 1)
 End If
End Function

'----------------------------------------------
' IpSubnetSortJoinArray
'----------------------------------------------
' this function can sort and summarize subnets or ip routes
' it must be used in an array formula
' 'ip_array' is a single column array containing ip subnets in "prefix/len"
' or "prefix mask" notation
' the return value is also an array of the same size containing the same
' subnets sorted in ascending order
' any consecutive subnets of the same size will be summarized when it is
' possible
' each line may contain any character sequence after the subnet, such as
' a next hop or any parameter of an ip route
' in this case, only subnets with the same parameters will be summarized
Function IpSubnetSortJoinArray(ip_array As Range) As Variant
 Dim s As Integer
 Dim t As Integer
 Dim a As String
 Dim b As String
 Dim nexthop1 As String
 Dim nexthop2 As String
 t = 0
 s = ip_array.Rows.Count
 Dim list() As String
 ReDim list(1 To s)
 ' copy subnet list
 For i = 1 To s
 If (ip_array.Cells(i, 1) <> 0) Then
 t = t + 1
 ' just use the networks as provided:
 'list(t) = ip_array.Cells(i, 1)
 ' or clean up the host part in each subnet:
 a = IpParseRoute(ip_array.Cells(i, 1), nexthop1)
 list(t) = IpClearHostBits(a) + " " + nexthop1
 End If
 Next i
 ' sort the list with bubble sort
 For i = t - 1 To 1 Step -1
 For j = 1 To i
 Dim m, n As Double
 a = IpParseRoute(list(j), nexthop1)
 b = IpParseRoute(list(j + 1), nexthop2)
 m = IpStrToBin(IpWithoutMask(a))
 n = IpStrToBin(IpWithoutMask(b))
 If ((m > n) Or ((m = n) And (IpMaskBin(a) < IpMaskBin(b)))) Then
 Dim swap As String
 swap = list(j)
 list(j) = list(j + 1)
 list(j + 1) = swap
 End If
 Next j
 Next i
 ' try to join subnets
 i = 1
 While (i < t)
 remove_next = False
 a = IpParseRoute(list(i), nexthop1)
 b = IpParseRoute(list(i + 1), nexthop2)
 If (IpSubnetIsInSubnet(a, b) And (nexthop1 = nexthop2)) Then
 list(i) = list(i + 1)
 remove_next = True
 ElseIf (IpSubnetIsInSubnet(b, a) And (nexthop1 = nexthop2)) Then
 remove_next = True
 ElseIf ((IpSubnetLen(a) = IpSubnetLen(b)) And (nexthop1 = nexthop2)) Then
 ' create a subnet with the same notation
 bigsubnet = Replace(IpWithoutMask(a) + "/" + Str(IpSubnetLen(a) - 1), " ", "")
 If (InStr(a, "/") = 0) Then
 bigsubnet = IpWithoutMask(a) & " " & IpMask(bigsubnet)
 Else
 End If
 If (IpSubnetIsInSubnet(b, bigsubnet)) Then
 ' OK these subnets can be joined
 list(i) = bigsubnet & " " & nexthop1
 remove_next = True
 End If
 End If

 If (remove_next) Then
 ' remove list(i+1) and make the list one element shorter
 For j = i + 1 To t - 1
 list(j) = list(j + 1)
 Next j
 t = t - 1
 ' step back and try again because list(i) may be joined with list(i-1)
 If (i > 1) Then i = i - 1
 Else
 i = i + 1
 End If
 Wend
 ' copy the sorted list as strings
 Dim resultArray() As Variant
 ReDim resultArray(1 To s, 1 To 1)
 For i = 1 To t
 resultArray(i, 1) = list(i)
 Next i
 IpSubnetSortJoinArray = resultArray
End Function

'----------------------------------------------
' IpDivideSubnet
'----------------------------------------------
' divide a network in smaller subnets
' "n" is the value that will be added to the subnet length
' "SubnetSeqNbr" is the index of the smaller subnet to return
' example:
' IpDivideSubnet("1.2.3.0/24"; 2; 0) returns "1.2.3.0/26"
' IpDivideSubnet("1.2.3.0/24"; 2; 1) returns "1.2.3.64/26"
Function IpDivideSubnet(ByVal subnet As String, n As Integer, index As Integer)
 Dim ip As String
 Dim slen As Integer
 ip = IpAnd(IpWithoutMask(subnet), IpMask(subnet))
 slen = IpSubnetLen(subnet) + n
 If (slen > 32) Then
 IpDivideSubnet = "ERR subnet lenght > 32"
 Exit Function
 End If
 If (index >= 2 ^ n) Then
 IpDivideSubnet = "ERR index out of range"
 Exit Function
 End If
 ip = IpBinToStr(IpStrToBin(ip) + (2 ^ (32 - slen)) * index)
 IpDivideSubnet = Replace(ip + "/" + Str(slen), " ", "")
End Function

'----------------------------------------------
' IpIsPrivate
'----------------------------------------------
' returns TRUE if "ip" is in one of the private IP address ranges
' example:
' IpIsPrivate("192.168.1.35") returns TRUE
' IpIsPrivate("209.85.148.104") returns FALSE
Function IpIsPrivate(ByVal ip As String) As Boolean
 IpIsPrivate = (IpIsInSubnet(ip, "10.0.0.0/8") Or IpIsInSubnet(ip, "172.16.0.0/12") Or IpIsInSubnet(ip, "192.168.0.0/16"))
End Function

'----------------------------------------------
' IpRangeToCIDR
'----------------------------------------------
' returns a network or a list of networks given the first and the
' last address of an IP range
' if this function is used in a array formula, it may return more
' than one network
' example:
' IpRangeToCIDR("10.0.0.1","10.0.0.254") returns 10.0.0.0/24
' IpRangeToCIDR("10.0.0.1","10.0.1.63") returns the array : 10.0.0.0/24 10.0.1.0/26
' note:
' 10.0.0.0 or 10.0.0.1 as the first address returns the same result
' 10.0.0.254 or 10.0.0.255 (broadcast) as the last address returns the same result
Function IpRangeToCIDR(ByVal firstAddr As String, ByVal lastAddr As String) As Variant
 firstAddr = IpAnd(firstAddr, "255.255.255.254") ' set the last bit to zero
 lastAddr = IpOr(lastAddr, "0.0.0.1") ' set the last bit to one
 Dim list() As String
 n = 0
 Do
 l = 0
 Do ' find the largest network which first address is firstAddr and which last address is not higher than lastAddr
 ' build a network of length l
 ' if it does not comply the above conditions, try with a smaller network
 l = l + 1
 net = firstAddr & "/" & l
 ip1 = IpAnd(firstAddr, IpMask(net)) ' first @ of this network
 ip2 = IpOr(firstAddr, IpWildMask(net)) ' last @ of this network
 net = ip1 & "/" & l ' rebuild the network with the first address
 diff = IpDiff(ip2, lastAddr) ' difference between the last @ of this network and the lastAddr we need to reach
 Loop While (l < 32) And ((ip1 <> firstAddr) Or (diff > 0))

 n = n + 1
 ReDim Preserve list(1 To n)
 list(n) = net
 firstAddr = IpAdd(ip2, 1)
 Loop While (diff < 0) ' if we haven't reached the lastAddr, loop to build another network

 Dim resultArray() As Variant
 ReDim resultArray(1 To n + 1, 1 To 1)
 For i = 1 To n
 resultArray(i, 1) = list(i)
 Next i
 IpRangeToCIDR = resultArray
End Function

'----------------------------------------------
' IpSubtractSubnets
'----------------------------------------------
' Remove subnets from a list of subnets
' this function must be used in an array formula
' 'input_array' is a list of assigned subnets
' 'subtract_array' is a list of used subnets
' the result is a list of unused subnets
Function IpSubtractSubnets(input_array As Range, subtract_array As Range) As Variant
 Dim i As Integer
 Dim j As Integer
 Dim k As Integer
 Dim s As Integer
 s = 0
 Dim list() As String
 ReDim list(1 To input_array.Rows.Count)
 ' copy subnet list
 For i = 1 To input_array.Rows.Count
 If (input_array.Cells(i, 1) <> 0) Then
 s = s + 1
 list(s) = input_array.Cells(i, 1)
 End If
 Next i

For i = 1 To subtract_array.Rows.Count
 ' try to remove each network in subtract_array from the list in input_array
 subtractNet = subtract_array.Cells(i, 1)
 If subtractNet <> 0 Then
 ' try to remove each network in subtract_array from each network in input_array
 j = 1
 Do
 net = list(j)
 ' is the network to remove equal or larger ?
 If IpSubnetIsInSubnet(net, subtractNet) Then ' remove the network from input_array
 For k = j To s - 1
 list(k) = list(k + 1)
 Next k
 s = s - 1
 ' is the network to remove smaller ?
 ElseIf IpSubnetIsInSubnet(subtractNet, net) Then ' split this network in input_array
 ' insert a line in the result array
 s = s + 1
 ReDim Preserve list(1 To s)
 For k = s To j + 2 Step -1
 list(k) = list(k - 1)
 Next k
 ' create 2 smaller subnets
 list(j + 1) = IpDivideSubnet(list(j), 1, 1)
 list(j) = IpDivideSubnet(list(j), 1, 0)
 Else
 ' nothing to do, skip to next network in input_array
 j = j + 1
 End If
 Loop While j <= s
 End If
 Next i

Dim resultArray() As Variant
 ReDim resultArray(1 To s + 1, 1 To 1)
 For i = 1 To s
 resultArray(i, 1) = list(i)
 Next i
 IpSubtractSubnets = resultArray
End Function

'----------------------------------------------
' IpDiff
'----------------------------------------------
' difference between 2 IP addresses
' example:
' IpDiff("192.168.1.7"; "192.168.1.1") returns 6
Function IpDiff(ByVal ip1 As String, ByVal ip2 As String) As Double
 Dim mult As Double
 mult = 1
 IpDiff = 0
 While ((ip1 <> "") Or (ip2 <> ""))
 IpDiff = IpDiff + mult * (IpParse(ip1) - IpParse(ip2))
 mult = mult * 256
 Wend
End Function

'----------------------------------------------
' IpParse
'----------------------------------------------
' Parses an IP address by iteration from right to left
' Removes one byte from the right of "ip" and returns it as an integer
' example:
' if ip="192.168.1.32"
' IpParse(ip) returns 32 and ip="192.168.1" when the function returns
Function IpParse(ByRef ip As String) As Integer
 Dim pos As Integer
 pos = InStrRev(ip, ".")
 If pos = 0 Then
 IpParse = Val(ip)
 ip = ""
 Else
 IpParse = Val(Mid(ip, pos + 1))
 ip = Left(ip, pos - 1)
 End If
End Function

'----------------------------------------------
' IpBuild
'----------------------------------------------
' Builds an IP address by iteration from right to left
' Adds "ip_byte" to the left the "ip"
' If "ip_byte" is greater than 255, only the lower 8 bits are added to "ip"
' and the remaining bits are returned to be used on the next IpBuild call
' example 1:
' if ip="168.1.1"
' IpBuild(192, ip) returns 0 and ip="192.168.1.1"
' example 2:
' if ip="1"
' IpBuild(258, ip) returns 1 and ip="2.1"
Function IpBuild(ip_byte As Double, ByRef ip As String) As Double
 If ip <> "" Then ip = "." + ip
 ip = Format(ip_byte And 255) + ip
 IpBuild = ip_byte \ 256
End Function

'----------------------------------------------
' IpMaskBin
'----------------------------------------------
' returns binary IP mask from an address with / notation (xx.xx.xx.xx/yy)
' example:
' IpMask("192.168.1.1/24") returns 4294967040 which is the binary
' representation of "255.255.255.0"
Function IpMaskBin(ByVal ip As String) As Double
 Dim bits As Integer
 bits = IpSubnetLen(ip)
 IpMaskBin = (2 ^ bits - 1) * 2 ^ (32 - bits)
End Function

Save the macro, close the editor, and then save the workbook in the macro-enabled format (.xlsm).

Now we have the functions that can reverse-look-up an IP to a subnet, but we still need to import the building data into the workbook and complete a few more steps.

Now we will download the building data, open an empty sheet in the same Excel file we have created, and rename the sheet to Subnets.

Then, in the first cell of column A, enter the following formula:

=IF(ISBLANK('C:\tmp\[BuildingDATAFILE.csv]BuildingDATAFILE'!$A2),"",CONCATENATE('C:\tmp\[BuildingDATAFILE.csv]BuildingDATAFILE'!$A:$A,"/",'C:\tmp\[BuildingDATAFILE.csv]BuildingDATAFILE'!$C:$C))

Replace BuildingDATAFILE with your Building data file name and path.

Fill down the rest of the column with this formula:

  • Select and copy the cell containing the formula and press CTRL+SHIFT+DOWN to select the rest of the column
  • Fill down by pressing CTRL+D
  • Use CTRL+UP to return up

 

Now you have the subnet and mask in column A; next we have to sort them.

In the same sheet, select column B,

type the formula =IpSubnetSortArray(A:A) (this keeps the /len mask on each subnet),
then press Ctrl+Shift+Enter instead of Enter, since it is an array formula.

Now we have them in sorted order in column B. Next, in column C, we will add a formula that looks up the subnet name from the building data file.

In column C, type the following formula (IpWithoutMask strips the /len so the value matches the bare network IP in the building file):

=VLOOKUP(IpWithoutMask(B1),'C:\tmp\[BuildingDATAFILE.csv]BuildingDATAFILE'!$A$1:$B$5,2,FALSE)

Now we have the subnets sorted, each with its corresponding name.

Let's go back to the first sheet and add two columns, From Subnet and To Subnet, corresponding to the from and to IP addresses,

then add the respective formulas to reverse-look-up the IPs (the lookup searches the sorted subnets in column B and returns the matching name from column C, hence index 2):

=IpSubnetVLookup([@FromIPAddr],Subnets!$B$1:$C$5,2)

=IpSubnetVLookup([@ToIPAddr],Subnets!$B$1:$C$5,2)

Finally, now that we have all the information we need, we will open Power BI Desktop to import it.

Open a new Query

then click Advanced Editor and enter the following query:

let
    Source = Excel.Workbook(File.Contents("C:\tmp\Book1.xlsm"), null, true),
    Sheet1_Sheet = Source{[Item="Sheet1",Kind="Sheet"]}[Data],
    #"Promoted Headers" = Table.PromoteHeaders(Sheet1_Sheet, [PromoteAllScalars=true]),
    #"Removed Duplicates" = Table.Distinct(#"Promoted Headers", {"DialogId"}),
    #"Removed Blank Rows" = Table.SelectRows(#"Removed Duplicates", each not List.IsEmpty(List.RemoveMatchingItems(Record.FieldValues(_), {"", null})))
in
    #"Removed Blank Rows"

Click Close and Apply

All of the Excel information is now imported into Power BI, and we can start building some interesting reports:

First we will add the From Subnet as a percentage, as well as the To Subnet.

Then we will add the Ratings and the count of Ratings only.

As you can see, if you click on one of these ratings, it shows you the percentage of from and to subnet names that users are reporting Very Bad, Fair, Good, or Excellent from.

You can also add users by rating count, which will show you the users who most often rated Poor or Very Bad, so you can address their issues.

 

Now let's move to the next part, where we analyze the user sentiment. Thanks to the DataChant blog for helping me out with this task

(http://community.powerbi.com/t5/Community-Blog/Text-Analytics-in-Power-BI-Extraction-of-key-phrases-from/ba-p/88487)

I am copying most of his blog post here, as I had to follow the exact same steps with only minor changes.

Before we start Sentiment Analytics 

Prerequisites

To do this tutorial, you need the optional items from the requirements list above: a Microsoft Azure account and an access key for Text Analytics.

 

Preparing the API queries - A parameter and a query function

 

In this section we'll prepare the advanced Power Query code that we'll need for the Key Phrases API.

 

In the Query Editor's Home tab, click Manage Parameters, then select New Parameter.

 

In the Parameters dialog, set APIKey as Name and paste the API Key that you obtained from Microsoft Cognitive Services into the text box Current Value.

When you are done, click OK.
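Equivalently, the parameter can be created as a blank query whose entire M expression is the following (a sketch with a placeholder key); rename that query to APIKey:

"paste-your-key-here" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true]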

 


 

In the Query Editor's Home tab, click the drop down menu below the New Source icon and select Blank Query. Still in the Home tab, click Advanced Editor and paste the following code:

 

(Source) =>
let
    JsonRecords = Text.FromBinary(Json.FromValue(Source)),
    JsonRequest = "{""documents"": " & JsonRecords & "}",
 
    JsonContent = Text.ToBinary(JsonRequest, TextEncoding.Ascii),
    Response =
        Web.Contents("https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases?",
            [
                Headers = [#"Ocp-Apim-Subscription-Key"= APIKey,
                           #"Content-Type"="application/json", Accept="application/json"],
                Content=JsonContent
            ]),
    JsonResponse = Json.Document(Response,1252)
in
    JsonResponse

 

 

Click Done in the Advanced Editor, and rename the new query to GetKeyPhrases.


 

 

Note: Now in Queries pane, you will notice that the new query GetKeyPhrases has an fx icon. The query above is not a normal query. It is a query function that receives a table as an input, converts it into a JSON format, sends it to Microsoft Cognitive Services and returns the JSON response.

 


 

Right-click Sheet1 in the Queries pane on the left, and click Reference.

 

Rename the query Sheet1(2) to Sentiment Results. To rename the query, right click on it and click Rename.

 

Note: In the next step we will select the columns in our data that contain the actual text for analysis and a unique ID for each text. The unique ID is required by Microsoft Cognitive Services API. The unique ID will help us to map the Sentiment scores in the response to the relevant text. In our data, we use the diagIDs as unique IDs.

In the Home tab of the Query Editor, click Choose Columns, unselect all columns, then select DiagID and FeedbackText and click OK.

 

Now, let's rename the columns. DiagID should be renamed to id and FeedbackText should be renamed to text. This step is critical. Don't miss it. The Cognitive Services API requires these names.

 


 

Note: If you need to perform analysis on text in French, Spanish or Portuguese, you can create another step here and define a custom column whose name is language and whose value is "fr", "es" or "pt" (for French, Spanish or Portuguese). Since English is the default language in the API, we skip this step.

 

In the next step, we will remove rows with empty text messages. There is no point in sending such rows for Sentiment Analysis, and the service will return errors if we try to.

 

Click the filter icon on the text column, and then click Remove Empty.

 


 

The column must be named text at this point (we renamed it above). This is crucial: later on we convert the table into a JSON format using the name text.

 

In Add Column tab, click Custom Column.

Set language as New column name, and = "en" as Custom column formula. Then click OK.
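Taken together, the shaping steps above correspond roughly to this M (a sketch; Source stands for the referenced Sheet1 query):

let
    Source = Sheet1,
    // keep only the unique ID and the feedback text
    Chosen = Table.SelectColumns(Source, {"DiagID", "FeedbackText"}),
    // the Cognitive Services API expects the field names "id" and "text"
    Renamed = Table.RenameColumns(Chosen, {{"DiagID", "id"}, {"FeedbackText", "text"}}),
    // rows with empty text would only produce API errors
    NonEmpty = Table.SelectRows(Renamed, each [text] <> null and [text] <> ""),
    // English is assumed; "de", "es" and "ja" are also supported by the Key Phrases API
    WithLanguage = Table.AddColumn(NonEmpty, "language", each "en")
in
    WithLanguage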

 


 

Note: The Text Analytics Key Phrases API supports the following languages: English (en), German (de), Spanish (es) and Japanese (ja). You can use any of these codes if your text is in one of those languages.

 

 

Group data into sub-tables of 1000 messages each

 

Microsoft Cognitive Services allows us to send 1000 different messages in each call to the Key Phrases API. In this part, we'll group the table into sub-tables of 1000 rows each.

 

In the Add Column tab, click Index Column.

 


 

Select the new column, Index, and in Transform tab, click Standard, then select Integer-Divide.

 


 

In Integer-Divide dialog, set 1000 as Value and click OK.

 


 

Now the values in column Index are all 0 in the first 1000 rows, all 1 in the next 1000 rows, and so forth.

Select the column Index, and click Group By in Transform tab.

 


 

In the Group By dialog, ensure that Index is selected in the drop down menu Group By.

Set sub-table as the text in New column name.

Select All Rows as Operation, and click OK.

 


 

After the grouping step, we have a new column sub-table that contains our tables in batches of 1000 rows each (except the last one, which holds the remainder).
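In M, these grouping steps are roughly the following (a sketch; the step names continue from the shaping sketch above and sit inside the same let expression):

    Indexed = Table.AddIndexColumn(WithLanguage, "Index", 0, 1),
    // integer-divide the index by 1000 so every batch of 1000 rows shares one value
    Batched = Table.TransformColumns(Indexed, {{"Index", each Number.IntegerDivide(_, 1000), Int64.Type}}),
    // collect each batch into a nested sub-table
    Grouped = Table.Group(Batched, {"Index"}, {{"sub-table", each _, type table}})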

 

 


 

 

We can delete the column Index, and click Invoke Custom Function in the Add Column tab.

 

In the Invoke Custom Function dialog, set GetKeyPhrases as the Function Query and select sub-table as the column. When you are done, click OK.

 


 

Delete the column sub-table, and click the expand button in the header of the GetKeyPhrases column.

Ensure all columns are selected in the expand pane, uncheck Use original column name as prefix, and click OK.

 


 

Delete the column errors.

 

Click on the expand button in the header of column documents, select all columns and click OK.

 


Click on the expand button in the header of column KeyPhrases.

 


 

Finally, we have a new column with a key phrase for each message id. Note that we will usually have multiple key phrases per message, so our table is now expanded to multiple rows per message id.
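The invocation and expansion steps above look roughly like this in M (a sketch continuing the same let expression; the documents and keyPhrases field names assume the v2.0 response shape):

    Invoked = Table.AddColumn(Grouped, "GetKeyPhrases", each GetKeyPhrases([#"sub-table"])),
    // the response record contains "documents" (the results) and "errors"
    Response = Table.ExpandRecordColumn(Invoked, "GetKeyPhrases", {"documents"}),
    Docs = Table.ExpandListColumn(Response, "documents"),
    Fields = Table.ExpandRecordColumn(Docs, "documents", {"id", "keyPhrases"}),
    // one row per key phrase per message id
    Phrases = Table.ExpandListColumn(Fields, "keyPhrases")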

 

For a final cosmetic touch, let's capitalize each word in column KeyPhrases:

Select the column KeyPhrases, right click on its header, select Transform and then select Capitalize Each Word.

 


 

We have completed the extraction of key messages. It's time to move to the modeling and visualization.

In the Query Editor window, click Close & Apply.

 

Feeding Key Phrases to a WordCloud Custom Visual

 

We are almost done. It's time to move to the visualization part. We'll now learn how to create a simple WordCloud visual with our key phrases.

 

After completing the last two parts, we now have two tables, FeedbackText and KeyPhrases. We can create a relationship between the id columns in both tables (if Power BI Desktop didn't detect it automatically). To do it, go to the Relationships view and drag and drop the column DiagID from FeedbackText onto KeyPhrases.

 


 

Download the WordCloud custom visual (from here) and import it into your report.

 

Click the Report view and add a WordCloud to your report. Drag and drop the column KeyPhrases to Category and Count of KeyPhrases to Value.

 

In the Format tab in Visualizations, turn on Stop Words and turn off Rotate Text.

 

Under section General, turn off Word-breaking.

 


You can now resize the WordCloud visual and see that the size of each key phrase is determined by the number of posts.

 

 

 

Data Lakes Stories of Relevance to Retail



ASOS: http://bit.ly/2ljJSgN

The offline recommendations engine is used to batch-process user
telemetry to build multiple machine learning models to be hosted by the online
recommendations service. User interaction telemetry is stored in Azure Data
Lake Store for long-term storage. Competing versions of the user and product
vector models are generated, using the Apache Spark MLlib machine learning
library in Azure HDInsight using Python LightFM and TensorFlow. These are then
bulk-loaded by Azure Data Factory into Azure Cosmos DB.


Ignition AI: http://bit.ly/2zd3GWg

Turning marketing from an art to a precise science requires
ingesting vast amounts of live information from numerous sources such as social
media, browsing data and previous purchasing patterns and deploying highly
specialised technology. Ignition AI uses Microsoft Azure Data Lakes to manage
the data because it provides a cloud based platform to store, process and
analyse any amount or type of data at incredible speeds.


Acxiom: http://bit.ly/2sdSgkM

Acxiom is using Azure Data Lake and Azure HDInsight to build a
data exchange platform and marketplace where it can share its data with third
parties.


Damco: http://bit.ly/2ze3c4Y

Damco’s disruptive Logistics application makes extensive use of
Microsoft Azure cloud technologies to improve insight into potential supply
chain problems, reduce the amount of manual effort involved when workarounds
for disruptions are necessary, and proactively notify clients. The app uses
Azure Event Hubs and Azure Stream Analytics to combine and analyze external
data from news and weather feeds and internal data from the Damco supply chain
management solution. All of this data is ingested, parsed, processed, and
stored using Azure Data Factory, Azure Data Lake, and Azure SQL Data Warehouse,
and it is visualized so it can be easily turned into actionable business
insights.


Sustainalytics: http://bit.ly/2AjtqjR

Sustainalytics, a global responsible investment research firm,
adopted a broad range of Microsoft Azure services for processing and
integrating a large volume of structured and unstructured, high and low
velocity data as part of its research activity. Sustainalytics built a Data
Lake on top of Azure HDInsight in order to consolidate and streamline
structured and unstructured data from 20 different sources.


Experiencing Data Gaps for Availability Data Type – 11/04 – Investigating

Initial Update: Saturday, 04 November 2017 17:51 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers whose data is sent to the North Europe region may experience availability data gaps. The following data types are affected: Availability.
  • Work Around: Customers should ensure their availability tests are running from at least three locations to ensure redundant coverage through maintenance. Please refer to the following article on how to configure availability web tests: https://azure.microsoft.com/en-us/documentation/articles/app-insights-monitor-web-app-availability/
  • Next Update: Before 11/04 20:00 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Praveen

Deleting an Azure Recovery Services Vault with all Backup Items


When I am prototyping Azure solutions with Azure Resource Manager templates and scripts, I often find myself allocating resources, and when I am done, I delete the entire resource group to clean up. If you have ever created backups in an Azure Recovery Services vault, you will know that you cannot simply remove the vault without first removing any backup items you have in there. The details on how to do that can be found here. It is, however, a bit painful to click your way through deleting a lot of backup items, so I use a PowerShell script. It looks something like this:

First make sure you login:

Login-AzureRmAccount

or

Login-AzureRmAccount -Environment AzureUSGovernment

if you are using Azure Government

Then create a script, e.g. DeleteRecoveryServicesVault.ps1:

param
(
    [Parameter(Mandatory=$true,Position=1)]
    [String]$ResourceGroupName,

    [Parameter(Mandatory=$true,Position=2)]
    [String]$VaultName
)

# Find the vault and make it the current vault context
$rv = Get-AzureRmRecoveryServicesVault -Name $VaultName -ResourceGroupName $ResourceGroupName
Set-AzureRmRecoveryServicesVaultContext -Vault $rv
$rcs = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVM

# Disable protection and delete the recovery points for every backup item
foreach ($c in $rcs) {
    $bi = Get-AzureRmRecoveryServicesBackupItem -Container $c -WorkloadType AzureVM
    Disable-AzureRmRecoveryServicesBackupProtection -Item $bi -RemoveRecoveryPoints -Force
}

# With the vault empty, it can now be removed
Remove-AzureRmRecoveryServicesVault -Vault $rv

And you can remove the vault with:

.\DeleteRecoveryServicesVault.ps1 -ResourceGroupName <RESOURCE GROUP> -VaultName <VAULT NAME>

Of course, you may need to make adjustments depending on what types of workloads you have in the vault. Keep the script handy for when you need to do that again.
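For instance, if the vault also contains Azure SQL backup items, a similar loop might look like this (a sketch; the container and workload type names should be verified against your AzureRM module version):

# Sketch: also remove Azure SQL backup items before deleting the vault
$sqlContainers = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureSQL
foreach ($c in $sqlContainers) {
    $bi = Get-AzureRmRecoveryServicesBackupItem -Container $c -WorkloadType AzureSQLDatabase
    Disable-AzureRmRecoveryServicesBackupProtection -Item $bi -RemoveRecoveryPoints -Force
}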

Be careful. This script will wipe out your recovery services vault. Make sure that is what you want to do.

We’ve moved! Feature updates, release notes, and more


Hello Outlook for Mac users:

 

We wanted to tell you about two new locations for Outlook for Mac content, brought to you directly by our product team:

  1. Outlook blog on Microsoft Tech Community: here you'll find new feature announcements, news, and other insights
  2. Release notes on Office Support: including Insider Fast, Insider Slow, and Production release notes

 

Thanks for following us here on our "Outlook for Mac Insights" MSDN blog.

We'll keep prior posts up for the time being as a reference, and we encourage you to visit (and bookmark!) the pages above for the latest on Outlook for Mac.

 

Thanks,

Jeff, on behalf of the Outlook for Mac team

Last post…


 

Hey folks.

 

After 23 years of fun, I'm leaving Microsoft to try some new things, and therefore this blog will be going into read-only mode.

 

If you want to continue to read about agile development or design, you might be interested in my new blog:

 

EricsCodeShack.com

Cannot validate argument on parameter (BizTalk 2016 deployment, feature pack 1)


When working with the new feature pack capability for automated build and deployment, you can get the following error during the deployment phase:

Error:

2017-11-02T08:25:50.9704669Z ##[error]Cannot validate argument on parameter 'PackagePath'. The " Test-Path -Path $_ -PathType 'Leaf' " validation script for the argument with value "your path" did not return a result of True. Determine why the validation script failed, and then try the command again.

 

As stated in the official BizTalk Server documentation here: https://docs.microsoft.com/en-us/biztalk/core/add-a-biztalk-server-application-to-visual-studio-team-services, a new application project, the BizTalk Server Application Project (.btaproj), should be added to your BizTalk Server solution.

However, when you try to add this new kind of project, it might not be available. This happens when Visual Studio is not targeting the .NET Framework version required by this new BizTalk Server feature (4.6.1 or later). Install the .NET Framework 4.6.1 (or later) developer pack and retarget your solution accordingly.
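
If you want to double-check which .NET Framework version is installed on the machine, one quick way is to read the standard registry location. A Release value of 394254 (or 394271 on operating systems other than the Windows 10 November Update) or higher indicates 4.6.1+:

# Read the installed .NET Framework release number from the registry.
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release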

Now the new project type becomes available, and after you set up all the properties as per the previously referenced documentation, everything should run smoothly.

Creating workshop machines on Azure

During Directions US and Directions EMEA, we had to spin up approx. 2000 Azure VMs for our hands-on labs.

All of these machines were deployed individually from PowerShell (multiple simultaneous jobs, but still one job = one VM) running on my developer machine in Lyngby. The scripts used to create these VMs have been shared with a number of partners with the same need.

The scripts were using an ARM (Azure Resource Manager) template, much like the one you probably know from http://aka.ms/navdeveloperpreview, but I never shared the scripts on my blog because I knew there was a better way. ARM has the ability to loop the creation of resources, so it must be possible to create a template that can spin up multiple VMs, and indeed it is.

http://aka.ms/getnavworkshopvms

If you navigate to http://aka.ms/getnavworkshopvms you will be asked to login to your Azure Subscription (much like http://aka.ms/getnav, explained in this blog post). You will be met by a template, which looks slightly different from the other templates:

Let me explain the fields in this template.

First, the mandatory fields:

  • Resource group is the resource group, which will contain all workshop VMs. The ARM template will create a number of resources in this group and deleting the resource group after usage will remove everything used for this deployment:
    • 1 Storage account
    • 1 Virtual Network
    • 1 Network Security group
    • x Virtual Machines, which each has a network interface and a public IP address.
  • Location is the datacenter in which your Virtual Machines should be created.
  • vm Name is the name prefix; all VMs created in the deployment will get a name starting with this value, followed by a VM number.
  • VM Admin Username is the username which you can use to connect to the VM with remote desktop.
  • Nav Admin Username is the username created as a super user in NAV.
  • adminPassword is the password used as administrator password for VM, NAV and SQL.
  • Nav Docker Image is the name of the docker image you want to use for the workshops
  • Count is the number of workshop VMs you want to create.
  • Offset is the offset of the VM numbers. If vm Name is test, Count is two, and Offset is 10, you will get two VMs with the names test10 and test11.

Then the optional fields:

  • License File Uri is where you can provide a secure URL to the license file you want to use for the workshop machines.
  • Workshop Files Url is where you can provide a Url to a .zip file, which will be downloaded and extracted to c:\workshopfiles on each VM.
  • Final Setup Script Url is where you can provide a Url to a PowerShell script, which can perform final Setup. An example would be https://raw.githubusercontent.com/Microsoft/nav-arm-templates/master/SetupWorkshop.ps1 which installs PDF reader and Visual Studio.
  • Certificate Pfx Url is where you can provide a secure Url to a Certificate Pfx file if you want to use a trusted certificate. If you do not specify a trusted certificate, the workshop VM will be secured with a self-signed certificate, which you can download and install following the instructions on the landing page.
  • Certificate Pfx Password is where you provide the password for the Certificate Pfx file provided above
  • Public Dns Name is where you specify the CNAME record pointing to your VM. Insert a # in the DNS name where you want the VM number inserted (e.g. ws#.navdemo.net would give ws10.navdemo.net to test10, and ws11.navdemo.net to test11 - note that you need to create the CNAME records manually).

Note: You will have to have sufficient quota for creating the workshop machines in the location you are creating them. Navigate to usage + quotas under your subscription to validate. Typically a subscription has 100 standard D family cores, 350 network interfaces, and 60 public IP addresses, meaning that without requesting a quota increase, you can spin up 50 workshop machines in a subscription (50 * D2 = 100 cores). A quota increase is requested in the Azure Portal and can take anywhere from a few minutes to a few days.
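
If you prefer PowerShell over the portal for the quota check, something like this should work (assuming the AzureRM module is installed and you are logged in; adjust the location to match your deployment):

# List compute quota usage for the region you plan to deploy to.
Get-AzureRmVMUsage -Location "West Europe" |
    Format-Table @{Label='Resource'; Expression={$_.Name.LocalizedValue}}, CurrentValue, Limit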

Important Note: Currently (November 2017) I do NOT recommend you to use a trusted certificate when running workshops. Reason for this is, that the Web Client uses STS (Strict Transport Security), which forces all subsequent requests made to the DNS name to become https. This in effect means, that you cannot access the landing page once you have accessed the Web Client (unless you access the landing page by the IP address). I will update this blog post once there is a solution to this problem.

Azure deployment succeeded

Like with the other Azure Templates for generating VMs, Azure Deployment will say succeeded when the Azure part of the deployment is done:

This means that shortly after, you can connect to the landing page of the VMs (click the Virtual Machine and get the URL) and follow the progress of the individual VMs. You probably don't want to do that, though; you probably just want to wait ~1 hour and then test whether all the VMs have been deployed successfully (PowerShell script follows :-))

Azure deployment failed

If you provision 100 Azure Workshop VMs, some will probably fail - and if just one fails, Azure will mark the deployment as failed.

Don't panic - the majority of your VMs are probably fine, and you shouldn't remove the resource group and redeploy just because Azure indicated failure. Just always deploy a few extra spare ones, and then shut down the failed or extra VMs individually if you like.

Whether you are provisioning 10 or 100 VMs, it takes approx. 1 hour, since everything is running simultaneously.

Run this PowerShell snippet to test whether your workshop VMs are ready to use:

$vmname = "fkb"
$domain = "westeurope.cloudapp.azure.com"
$offset = 1
$count = 2

$good = 0
$bad = 0
$offset..($offset+$count-1) | % {
  $status = ""
  try {
    $status = (New-Object System.Net.WebClient).DownloadString("http://$vmname$_.$domain/status.aspx")
  } catch { }
  if ($status.Contains('Ready for connections!') -and $status.Contains('Desktop setup complete!')) {
    $good++
  } else {
    $bad++
    Write-Host "$_ failed"
  }
}
Write-Host "Succeeded: $good"
Write-Host "Failed: $bad"

As you probably can read, the script will inspect the status.aspx page to see whether the Desktop setup and the Container setup both completed successfully.

 

Enjoy

Freddy Kristiansen
Technical Evangelist

In the beginning

In the beginning there was the Metaverse, and the Metaverse was good.

[Service Fabric] Auto-scaling your VMSS

In this article from his blog, Premier Developer consultant Larry Wall discusses building an ARM template that configures auto-scaling on a virtual machine scale set (VMSS).


Like most people, whenever I need to build an ARM template to do something with Service Fabric, I'm browsing around GitHub or wherever else I can find the bits and pieces of JSON that I need.

I was recently working on a project where they needed 3 things:

  1. They wanted their Service Fabric cluster to use managed disks instead of Azure Storage accounts.
  2. They wanted to have auto-scaling set up for their virtual machine scale set (VMSS). In this case, we were using a CPU rule.
  3. They had a stateless service where they wanted to register their service event source to output logging information into the WADETWEventTable.

Continue reading on Larry’s blog here.

Can your desktop app leverage the new UIA Notification event in order to have Narrator say exactly what your customers need?

This post describes some approaches that Windows desktop app devs might consider for having a screen reader make a specific announcement. This includes leveraging the new UI Automation (UIA) Notification event introduced in the Windows 10 Fall Creators Update.

Introduction

By default, screen readers being used by your customers will convey information related to where your customers are currently working. If a screen reader made announcements relating to other things going on in an app, that could introduce distractions which make working in the app very difficult. But periodically questions are raised about how desktop apps can take action which leads to a screen reader making a particular announcement, even though the UI where the screen reader's currently located doesn't convey the information related to the announcement.

So the first thing to consider in this situation, is whether it's really helpful to your customers to have the announcement made. There are certainly critically important scenarios where an announcement must be made, and silence is to be avoided. You should never hear your customers say "Did anything just happen?" when something important has just happened in your app. There are also scenarios where announcements can be an unwelcome distraction, and really don't add any value to your customer experience. For the discussion below, we'll assume that it is helpful to your customer for a particular announcement to be made.

Apologies up-front: When I uploaded this post to the blog site, the images did not get uploaded with the alt text that I'd set on them. So any images are followed by a title. Also, the code snippets below don't look anything like they would in a code editor, so they're pretty difficult to follow. I've not found a way of fixing that with the tools I have available.

 

Are you sure a particular UI design couldn't result in a helpful announcement being made by default?

Before considering how specific action on your app's part might lead to an announcement being made by a screen reader, it's worth considering whether a particular UI design could lead to the announcement being made by default, and which is a good experience for everyone. For example, say some background transaction is in process, and when some unexpected error occurs, today your app shows the text "Your really important transaction did not complete!" somewhere down near the bottom of your app's window. By default, Narrator probably won't announce that important text given that it's interacting with other UI in the app at the time the text appears, despite potentially time-sensitive action needing to be taken by your customer.

But say, instead of the text appearing at some corner of your app, a message box appeared which explained what's going on. Narrator would make your customer aware of the message box immediately, and your customers could access all the information in the message box at that time. What's more, could the use of a message box be a preferred approach for all customers? A customer using magnification would be made aware of the transaction result, where they might not have been if the magnified view didn't happen to include the text that previously appeared down at some lower corner of the app. In fact, the risk of any customers not noticing the time-sensitive information is reduced, regardless of whether they use assistive technology.

 

Figure 1: A message box showing important status information. A screen reader will announce the message box appearing.

 

Now, you may have considered the above, and still feel that the best interaction model does not include a message box. But the important thing is that you've considered it.

 

Avoid well-intentioned actions which might degrade the customer experience

Sometimes you find that you've considered all options available to your desktop app, and you feel there really isn't a practical path to deliver the experience you want for all your customers. But you're determined to deliver the best experience you can, and so start exploring what's technically possible.

This is where we need to be careful that your app doesn't do something because it's technically possible, rather than because it's good for your customers.

 

For example, say you look through the list of UI Automation (UIA) events that can be raised by your app, possibly through an interop call to a native function. You notice the UIA SystemAlert event, and find that your app can raise that event without too much work. What's more, Narrator reacts to the event, and so can make your customer aware of that time-sensitive information. But SystemAlert events are for events that are of a system-alert nature. It's going to be uncommon for an app to really be making an announcement that your customer feels is similar in nature to a system alert, and so that could be really confusing. So avoid raising a type of event which is not a good match for your app's scenario.

Another example of where specific action by your app could be problematic for your customers, relates to attempts to overload some control with information that's really not relevant to it. Say your customer invokes a "Buy" button. Your app begins the purchase transaction, which fails immediately, and you show text at the bottom of the app saying something like "No network connection, try again later". If Narrator is still interacting with the "Buy" button, you might be tempted to set that same status text on a UIA property on the button, perhaps using the UIA HelpText property. You try this out and find that Narrator notices the change in HelpText on the button, and so announces something like "Buy, button, No network connection, try again later". On the surface, that might seem helpful, but consider the following:

1. There's now duplication of information being conveyed by the app. Is it possible that in some scenarios a screen reader will make announcements that include this duplication?

2. The approach assumes the screen reader will still be located at a particular button at the time the HelpText property changed. Often there's no guarantee that this will be the case.

3. Having status information conveyed like this relies on the information never becoming stale while it is still accessible. Your customer would never want to reach the button and hear a status that no longer applies.

 

If state information is conveyed visually on a button, then it's essential that that information is also conveyed through UIA. But I'd not recommend trying to manipulate a control's UIA representation in order to trigger some announcement which is really not a natural part of the control itself.

 

Can a desktop app take a leaf out of the web's playbook?

While this post is focused on the experience at desktop apps, similar discussions apply for web UI around what actions result in a screen reader announcing a change in the UI. For web UI, in some cases screen readers might make announcements due to a web page reloading, or perhaps the page might use LiveRegions, and another approach used with web UI is to move keyboard focus over to text which needs to be announced. But when considering that last approach, it's important that by default, static text shouldn't be inserted into the app's tab order. Whenever your customer tabs or shift+tabs around your app's UI, they expect to reach interactable controls. If they reach static text, they'll wonder why they've been taken there. So by default, don't include static text in the tab order.

And yeah, I know I've said that before in other posts. But given that devs still insert static text into the tab order because they're under the impression that that's required in order for a screen reader to reach the text, I do think it's worth me saying it again:

A screen reader can access static text labels without the labels being in the tab order.

 

And this is where web UI can achieve the desired results through specific use of the "tabindex" attribute. Setting tabindex="-1" makes text programmatically focusable without inserting it into the tab order, so a web page can set focus to the text in order for a screen reader to announce it. If you were to follow a similar approach in a desktop app, you'd want to deliver the same experience. That is, the text would have keyboard focus set on it programmatically by your app when the announcement is to be made, but the text element is still not in the tab order.

 

As far as I know, this approach is not applicable to WinForms apps, as Labels are not designed to get keyboard focus at all, regardless of whether the Label is inserted into the tab order.

 

But WPF app devs may be somewhat interested in this approach in specific circumstances. If you want Narrator to announce the text shown on a TextBlock, all you need to do is set the TextBlock's Focusable property to true, and call its Focus() method. And hey presto, the text on the TextBlock will be spoken. But taking that action leaves the TextBlock in the tab order, and as such, as your customer later tabs around your app, they keep reaching the TextBlock for no apparent reason. (I wouldn't really want to get involved with trying to set the TextBlock's Focusable back to false later in the hope of delivering a smooth experience.) So by default, I do feel this approach can be problematic, and if another more straightforward approach is available, I'd not go with messing with a WPF TextBlock's Focusable property.

So why am I even mentioning the WPF TextBlock IsFocusable thing? Well, I do know of one app that's taken this approach. The app's state would transition such that it briefly presented no focusable controls, and instead presented a static text string conveying the app's state. I forget what the text was, but let's assume it was something like "Please wait…". The app wanted the "Please wait…" string to be announced, and so it made the associated TextBlock focusable, and set focus on it. The fact that the TextBlock was then in the tab order had no negative impact because there was nothing else to tab to. Once the brief waiting period was over, the TextBlock was collapsed, and keyboard focus set on a newly-visible interactable control. The requirements that the app had around what versions of the .NET framework and Windows were available to the app, limited its options around having the text string announced, so this approach seemed the most attractive.

So if this really does seem to have potential to be helpful to your customers, consider the (unlocalized) demo code below. First we have a Button and a collapsed status TextBlock:

 

<Button x:Name="BuyButton" Width="300" Click="Button_Click">Buy</Button>
<TextBlock x:Name="BuyStatus" HorizontalAlignment="Center" Visibility="Collapsed">Please wait...</TextBlock>

 

In the Button's Click handler, we set keyboard focus to the TextBlock, and disable the Button.

 

BuyStatus.Visibility = Visibility.Visible;
BuyStatus.Focusable = true;
BuyStatus.Focus();
BuyButton.IsEnabled = false;

 

If I tab to the Button and invoke it with a press of the Space key, this is Narrator's announcement:

 

Buy, button,

Space

Please wait...,

 

Once the brief waiting period is over, we'd collapse the status text and re-enable whatever other controls are appropriate.

 

So what about LiveRegions?

Ok, so LiveRegions can be very helpful, but whenever anyone mentions them, people say that LiveRegions are overused or inappropriately used, and generally aren't used in ways that are helpful to your customers. So let's just assume that you've considered whether it's appropriate to use a LiveRegion in your scenario, and you've come to the conclusion that a LiveRegion would be a great way to help your customers.

The next consideration is whether it's practical for your desktop app to leverage a LiveRegion. The answer to that pretty much breaks down as follows:

UWP XAML

Natively supported through the AutomationProperties.LiveSetting property and LiveRegionChanged event.

WPF

Support for LiveRegions introduced with accessibility improvements in .NET 4.7.1, through AutomationProperties.LiveSetting property and LiveRegionChanged event.

WinForms

Not natively supported, but you can effectively turn a Label into a LiveRegion yourself, by following the steps described at Let your customers know of important status changes in your WinForms app.

Win32

Supported, but it requires knowledge of the detailed steps described at How to have important changes in your Win32 UI announced by Narrator.

 

Introducing the new UiaRaiseNotificationEvent

There's a common use of LiveRegions in desktop apps today which is contributing to the sentiment that LiveRegions are being used inappropriately. That is, LiveRegions are being used to trigger an announcement by Narrator, when the announcement has no related visuals on the screen. A LiveRegion is intended to mark some area on the screen as being "live", and so when that area changes, a screen reader can examine the updated area, and make a related announcement based on the content of the updated area. If some text has changed in the area, or an image or icon conveying status has changed, then the screen reader can announce that change.

However, some desktop apps wanted to trigger an important announcement by the screen reader, even when there was no such text, image, icon or any other specific element shown on the screen which provided the text associated with the announcement. In that case, the app might still create some UI and mark it as being a LiveRegion, and then take steps for the UI to have no visual representation on the screen. If this is done in a particular way, then it may indeed be possible for it to trigger a screen reader announcement. But it also may leave that element accessible to the screen reader later, still conveying the text associated with the earlier announcement. That can lead to a confusing or misleading experience, and so if this approach of a "hidden" LiveRegion is taken, it must be taken with great care.

 

And now, thanks to the introduction of UiaRaiseNotificationEvent() in the Windows 10 Fall Creators Update, you may have an additional option.

 

The UIA Notification event may provide a way for your app to raise a UIA event which leads to Narrator simply making an announcement based on text you supply with the event. In some scenarios, this could be a straightforward way to dramatically improve the accessibility of your app.

 

Important: If the UiaRaiseNotificationEvent() functionality is available to your app, it's more important than ever to consider the balance between not making your customers aware of important information, and providing so much information that it becomes an unhelpful distraction. If your app starts calling UiaRaiseNotificationEvent() too often, then your customers will soon find the experience irritating. So if you do use this helpful new functionality, use with care.

 

The question of whether it's practical for your desktop app to leverage UiaRaiseNotificationEvent() will depend on what type of app you've built. I expect most Win32 apps won't be leveraging it, because it requires your app to have an IRawElementProviderSimple available, and typically that won't be available unless you've implemented it yourself. If you have already done that, perhaps because you needed to make some custom UI accessible, then great. Just pass in your IRawElementProviderSimple to UiaRaiseNotificationEvent() and away you go. But many Win32 apps won't have done that, and as far as I know, it's not possible to access one provided by the UI framework through a standard Win32 control. If that's the case, then unless you're prepared to create an IRawElementProviderSimple just to pass into UiaRaiseNotificationEvent(), (which I doubt), you'll probably want to consider one of the other approaches described above, (such as using a LiveRegion).

 

Below are some thoughts on how other types of desktop apps can leverage the new UiaRaiseNotificationEvent().

 

UWP XAML

Well, when building a UWP XAML app for Windows 10 Build 16299 or later, this is easy. All your app needs to do is get some AutomationPeer for a UI element, and call its RaiseNotificationEvent(). At the time I write this, I've not found guidance on best practices around what you pass into RaiseNotificationEvent(), but it does seem you have a lot of control over specifying how you'd like Narrator to react.

 

Important: Historically, in some scenarios it could be impractical for you to deliver the experience you were striving for, when Narrator receives multiple UIA events from an app around the same time. For example, depending on the order that say a FocusChanged event and a LiveRegionChanged event arrived at Narrator, one announcement might be interrupted by the other announcement before the first announcement had even started. In practice, sometimes there was nothing you could do to improve the experience. The new UiaRaiseNotificationEvent() gives you much more control over specifying how you'd like a screen reader to react to the event, through the AutomationNotificationProcessing value that you supply. This is a really, really exciting development!

 

Another interesting point here is the event doesn't have to be raised off a TextBlock. If there is a TextBlock that is closely related to the announcement to be made, then it would seem logical to use the TextBlockAutomationPeer associated with the TextBlock with the call to RaiseNotificationEvent(). But if there is no related visible text element, another type of control could be used.

 

The code below shows a Button control being used to raise a notification event.

 

ButtonAutomationPeer peer = (ButtonAutomationPeer)
    FrameworkElementAutomationPeer.FromElement(BuyButton);
if (peer != null)
{
    // Todo: Replace these demo values with something appropriate to your scenarios.
    peer.RaiseNotificationEvent(
        AutomationNotificationKind.ActionAborted,
        AutomationNotificationProcessing.ImportantMostRecent,
        "Attempt to buy something did not complete, due to network connection being lost.",
        "270FA098-C644-40A2-A0BE-A9BEA1222A1E");
}

 

 

Whenever I'm interested in learning about what UIA events are being raised by an app, I always point the AccEvent SDK tool at the app, before trying out Narrator. If I pointed Narrator to the app first, and it didn't make the announcement I expect, then I can't be sure that my app actually raised an event that Narrator could react to. But AccEvent can make this clear to me, and the latest version of AccEvent will report details about a UIA event being raised in response to a call to the AutomationPeer's RaiseNotificationEvent().

 

The following string contains details reported by AccEvent in response to the event being raised by the demo code above. The highlighted text of "Windows.UI.Xaml.dll" in the UIA ProviderDescription property shows that the event was raised through the UWP XAML framework.

 

UIA:NotificationEvent    [NotificationKind_ActionCompleted, NotificationProcessing_ImportantMostRecent, displayString:" Attempt to buy something did not complete, due to network connection being lost.", activityId:"270FA098-C644-40A2-A0BE-A9BEA1222A1E"] Sender: ControlType:UIA_ButtonControlTypeId (0xC350), ProviderDescription:"[pid:3728,providerId:0x0 Main(parent link):Unidentified Provider (unmanaged:Windows.UI.Xaml.dll)]"

 

Having verified that an event is raised, I can then point Narrator to the app, and verify that Narrator makes a related announcement as expected. The following is the announcement made by Narrator as I invoke a button in a demo app which raises the UIA Notification event using the snippet shown above. Note that the event-related part of the announcement doesn't include any details of the element raising the event. Rather it is exactly the notification string supplied to RaiseNotificationEvent().

 

Raise UIA notification event, button, Alt, R, Alt+ R,

Space

Attempt to buy something did not complete, due to network connection being lost.

 

WPF

At the time of writing this, WPF does not natively support the new UIA Notification event. It is, however, relatively straightforward for a C# app to access the functionality through interop.

A side note on LiveRegions: For WPF apps that can't leverage the new support for LiveRegions introduced in .NET 4.7.1, I wouldn't recommend trying to add support for LiveRegions yourself through interop. While raising a LiveRegionChanged event yourself through interop is straightforward, you would also need to expose a specific LiveSetting property through the element, and that's not at all straightforward. But given that all an app leveraging the new UIA Notification event needs to do is raise the event, this is relatively straightforward through interop, and worth considering given the value that it brings to your app.

 

Interop code

Ok, so first add the code required to access the native UiaRaiseNotificationEvent() function.

 

// Requires a reference to the UIAutomationProvider assembly.
using System.Runtime.InteropServices;
using System.Windows.Automation.Provider;

internal class NativeMethods
{
    public enum AutomationNotificationKind
    {
        ItemAdded = 0,
        ItemRemoved = 1,
        ActionCompleted = 2,
        ActionAborted = 3,
        Other = 4
    }

    public enum AutomationNotificationProcessing
    {
        ImportantAll = 0,
        ImportantMostRecent = 1,
        All = 2,
        MostRecent = 3,
        CurrentThenMostRecent = 4
    }

    [DllImport("UIAutomationCore.dll", CharSet = CharSet.Unicode)]
    public static extern int UiaRaiseNotificationEvent(
        IRawElementProviderSimple provider,
        AutomationNotificationKind notificationKind,
        AutomationNotificationProcessing notificationProcessing,
        string notificationText,
        string notificationGuid);

    [DllImport("UIAutomationCore.dll")]
    public static extern bool UiaClientsAreListening();
}

 

Add classes which will be used to raise the Notification event

In order for UiaRaiseNotificationEvent() to be called, you need an IRawElementProviderSimple to be available. You can get this from an AutomationPeer associated with some existing element in your app. The demo code below gets the IRawElementProviderSimple from a TextBlockAutomationPeer associated with a TextBlock. In order to do this, you'd need to create a new class derived from TextBlock.

So create a new class called NotificationTextBlock, and add a public RaiseNotificationEvent() method to it, which goes on to call the associated peer's RaiseNotificationEvent().

 

Important: If the code below is running on a version of Windows prior to the Windows 10 Fall Creators Update, then UiaRaiseNotificationEvent() will not be found, and unless this is accounted for, the app would crash. As such, the attempt to call UiaRaiseNotificationEvent() is wrapped in a try/catch, and if the attempt to call UiaRaiseNotificationEvent() fails due to it not being found, we won't try to call it again.

 

// Requires: using System; using System.Windows.Controls;
// using System.Windows.Automation.Peers; and using System.Windows.Automation.Provider;
internal class NotificationTextBlock : TextBlock
{
    // This control's AutomationPeer is the object that actually raises the UIA Notification event.
    private NotificationTextBlockAutomationPeer _peer;

    // Assume the UIA Notification event is available until we learn otherwise.
    // If we learn that the UIA Notification event is not available, no instance
    // of the NotificationTextBlock should attempt to raise it.
    static private bool _notificationEventAvailable = true;

    public bool NotificationEventAvailable
    {
        get { return _notificationEventAvailable; }
        set { _notificationEventAvailable = value; }
    }

    protected override AutomationPeer OnCreateAutomationPeer()
    {
        this._peer = new NotificationTextBlockAutomationPeer(this);

        return this._peer;
    }

    public void RaiseNotificationEvent(string notificationText, string notificationGuid)
    {
        // Only attempt to raise the event if we already have an AutomationPeer.
        if (this._peer != null)
        {
            this._peer.RaiseNotificationEvent(notificationText, notificationGuid);
        }
    }
}

internal class NotificationTextBlockAutomationPeer : TextBlockAutomationPeer
{
    private NotificationTextBlock _notificationTextBlock;

    // The UIA Notification event requires the IRawElementProviderSimple
    // associated with this AutomationPeer.
    private IRawElementProviderSimple _reps;

    public NotificationTextBlockAutomationPeer(NotificationTextBlock owner) : base(owner)
    {
        this._notificationTextBlock = owner;
    }

    public void RaiseNotificationEvent(string notificationText, string notificationGuid)
    {
        // If we already know that the UIA Notification event is not available, do not
        // attempt to raise it.
        if (this._notificationTextBlock.NotificationEventAvailable)
        {
            // If no UIA clients are listening for events, don't bother raising one.
            if (NativeMethods.UiaClientsAreListening())
            {
                // Get the IRawElementProviderSimple for this AutomationPeer if we don't
                // have it already.
                if (this._reps == null)
                {
                    AutomationPeer peer = FrameworkElementAutomationPeer.FromElement(this._notificationTextBlock);
                    if (peer != null)
                    {
                        this._reps = ProviderFromPeer(peer);
                    }
                }

                if (this._reps != null)
                {
                    try
                    {
                        // Todo: The NotificationKind and NotificationProcessing values shown here
                        // are sample values for this snippet. You should use whatever values are
                        // appropriate for your scenarios.
                        NativeMethods.UiaRaiseNotificationEvent(
                            this._reps,
                            NativeMethods.AutomationNotificationKind.ActionCompleted,
                            NativeMethods.AutomationNotificationProcessing.ImportantMostRecent,
                            notificationText,
                            notificationGuid);
                    }
                    catch (EntryPointNotFoundException)
                    {
                        // The UIA Notification event is not available, so don't attempt
                        // to raise it again.
                        _notificationTextBlock.NotificationEventAvailable = false;
                    }
                }
            }
        }
    }
}

 

Raise an event

Now that we have the NotificationTextBlock available, we can add it to the app's XAML, and at the appropriate place in the code-behind raise the event.

 

<!-- Note: This can be the source of the event without any text being set on it. -->
<local:NotificationTextBlock x:Name="StatusTextBlock" Width="300" />

 

// Raise a UIA Notification event.
StatusTextBlock.RaiseNotificationEvent(
    "Attempt to buy something did not complete, due to network connection being lost.",
    "B1980BCF-014D-4A47-9AB2-F23635B6F7FE"); // Todo: Replace this demo guid.

 

With the above code in place in my WPF app, I can point the AccEvent SDK tool at the app and verify that the UIA Notification event is being raised as expected. The following string is reported by AccEvent in response to the event being raised. The highlighted text of "PresentationCore" in the UIA ProviderDescription property shows that the event was raised through the WPF framework.

 

UIA:NotificationEvent    [NotificationKind_ActionCompleted, NotificationProcessing_ImportantMostRecent, displayString:"Attempt to buy something did not complete, due to network connection being lost.", activityId:"B1980BCF-014D-4A47-9AB2-F23635B6F7FE"] Sender: ControlType:UIA_TextControlTypeId (0xC364), ProviderDescription:"[pid:18948,providerId:0x0 Main(parent link):Unidentified Provider (managed:MS.Internal.Automation.ElementProxy, PresentationCore, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35)]"

 

WinForms

For a WinForms app to leverage the new UIA Notification event, things do get rather more involved. While a WPF app can get hold of an IRawElementProviderSimple provided by the WPF framework without too much effort, a WinForms app must create one itself. While this might be more work than is practical in some cases, I found one approach where I feel the work involved is straightforward enough to be justified by the potential benefits to your customers.

 

Important: As with a WPF app, the approach described below will not work on versions of Windows prior to the Windows 10 Fall Creators Update, and so involves a try/catch to protect against an exception.

 

Interop code

First add the interop code required by the WinForms app.

 

// Requires references to the UIAutomationProvider and UIAutomationTypes assemblies.
using System.Runtime.InteropServices;
using System.Windows.Automation.Provider;

public class NativeMethods
{
    public const int WM_GETOBJECT = 0x003D;
    public const int UiaRootObjectId = -25;

    public const int UIA_ControlTypePropertyId = 30003;
    public const int UIA_AccessKeyPropertyId = 30007;
    public const int UIA_IsKeyboardFocusablePropertyId = 30009;
    public const int UIA_IsPasswordPropertyId = 30019;
    public const int UIA_IsOffscreenPropertyId = 30022;

    public const int UIA_TextControlTypeId = 50020;

    public enum AutomationNotificationKind
    {
        ItemAdded = 0,
        ItemRemoved = 1,
        ActionCompleted = 2,
        ActionAborted = 3,
        Other = 4
    }

    public enum AutomationNotificationProcessing
    {
        ImportantAll = 0,
        ImportantMostRecent = 1,
        All = 2,
        MostRecent = 3,
        CurrentThenMostRecent = 4
    }

    [DllImport("UIAutomationCore.dll", EntryPoint = "UiaRaiseNotificationEvent", CharSet = CharSet.Unicode)]
    public static extern int UiaRaiseNotificationEvent(
        IRawElementProviderSimple provider,
        AutomationNotificationKind notificationKind,
        AutomationNotificationProcessing notificationProcessing,
        string notificationText,
        string notificationGuid);

    [DllImport("UIAutomationCore.dll")]
    public static extern bool UiaClientsAreListening();
}

 

Add a class which will be used to raise the Notification event

Given that no existing IRawElementProviderSimple can be supplied by the UI framework to the app, the code below adds support for it.

 

Important: The implementation of IRawElementProviderSimple here is the minimum required, such that the UIA representation of a new NotificationLabel object is as close as possible to that of a standard Label control. The goal is to expose through UIA an object which is effectively a standard Label control, but which can also raise the new UIA Notification event. As part of developing the NotificationLabel, I repeatedly pointed the Inspect SDK tool at both the NotificationLabel and a standard Label, and updated the NotificationLabel's IRawElementProviderSimple implementation until the UIA representations of the two classes were as similar as possible.

 

// Requires: using System; using System.Windows.Forms; and using System.Windows.Automation.Provider;
// Add support for the UIA IRawElementProviderSimple interface to a standard WinForms Label control.
public class NotificationLabel : Label, IRawElementProviderSimple
{
    // Assume the UIA Notification event is available until we learn otherwise.
    // If we learn that the UIA Notification event is not available, no instance
    // of the NotificationLabel should attempt to raise it.
    static private bool _notificationEventAvailable = true;

    static public bool NotificationEventAvailable
    {
        get { return _notificationEventAvailable; }
        set { _notificationEventAvailable = value; }
    }

    // Override WndProc to provide our own IRawElementProviderSimple provider when queried by UIA.
    protected override void WndProc(ref Message m)
    {
        // Is UIA asking for a IRawElementProviderSimple provider?
        if ((m.Msg == NativeMethods.WM_GETOBJECT) && (m.LParam == (IntPtr)NativeMethods.UiaRootObjectId))
        {
            // Return our custom implementation of IRawElementProviderSimple.
            m.Result = AutomationInteropProvider.ReturnRawElementProvider(
                Handle,
                m.WParam,
                m.LParam,
                (IRawElementProviderSimple)this);

            return;
        }

        base.WndProc(ref m);
    }

    // IRawElementProviderSimple implementation.
    ProviderOptions IRawElementProviderSimple.ProviderOptions
    {
        get
        {
            // Assume the UIA provider is always running in the server process.
            return ProviderOptions.ServerSideProvider | ProviderOptions.UseComThreading;
        }
    }

    IRawElementProviderSimple IRawElementProviderSimple.HostRawElementProvider
    {
        get
        {
            return AutomationInteropProvider.HostProviderFromHandle(this.Handle);
        }
    }

    public object GetPatternProvider(int patternId)
    {
        // The standard WinForms Label control only supports the IsLegacyIAccessible pattern,
        // and this custom control gets that for free.
        return null;
    }

    public object GetPropertyValue(int propertyId)
    {
        // All properties returned here are done so in order to replicate the
        // UIA representation of the standard WinForms Label control.

        // Note that the only difference between the UIA properties of the NotificationLabel and the
        // standard Label is the ProviderDescription. The standard Label's property will include:
        // "Microsoft: MSAA Proxy (unmanaged:uiautomationcore.dll)",
        // whereas the NotificationLabel's will include something related to the app's implementation
        // of IRawElementProviderSimple.

        switch (propertyId)
        {
            case NativeMethods.UIA_ControlTypePropertyId:
            {
                return NativeMethods.UIA_TextControlTypeId;
            }
            case NativeMethods.UIA_AccessKeyPropertyId:
            {
                // This assumes the control has no access key. If it does have an access key,
                // look for an '&' in the control's text, and return a string of the form
                // "Alt+<the access key character>".
                return "";
            }
            case NativeMethods.UIA_IsKeyboardFocusablePropertyId:
            {
                return false;
            }
            case NativeMethods.UIA_IsPasswordPropertyId:
            {
                return false;
            }
            case NativeMethods.UIA_IsOffscreenPropertyId:
            {
                // Assume the control is always visible on the screen.
                return false;
            }
            default:
            {
                return null;
            }
        }
    }
}

 

Raise an event

Now that the NotificationLabel is available, it can be inserted into the app's UI, and the required event raised through code-behind.

 

When the UI is created…

this.labelStatus = new <Your namespace>.NotificationLabel();

 

When the event is to be raised later…

// If we already know that the UIA Notification event is not available, do not attempt to raise it.
if (NotificationLabel.NotificationEventAvailable)
{
    // If no UIA clients are listening for events, don't bother raising one.
    if (NativeMethods.UiaClientsAreListening())
    {
        // Todo: Replace all these demo values passed into UiaRaiseNotificationEvent()
        // with whatever works best for your scenario.
        string notificationString = "Attempt to buy something did not complete, due to network connection being lost.";
        string guidStringDemo = "4F3A7213-6AF5-42D3-8DDD-C50AB83AE782";

        try
        {
            NativeMethods.UiaRaiseNotificationEvent(
                (IRawElementProviderSimple)labelStatus,
                NativeMethods.AutomationNotificationKind.ActionCompleted,
                NativeMethods.AutomationNotificationProcessing.All,
                notificationString,
                guidStringDemo);
        }
        catch (EntryPointNotFoundException)
        {
            // The UIA Notification event is not available, so don't attempt
            // to raise it again.
            NotificationLabel.NotificationEventAvailable = false;
        }
    }
}

 

 

Note: The Notification raised with the above code results in Narrator making an associated announcement, even when the NotificationLabel has a Visible property of false. With a Visible property of false, the element has no visual representation and is not exposed through the UIA tree.

 

With the above code in place in my WinForms app, I can point the AccEvent SDK tool at the app and verify that the UIA Notification event is being raised as expected. The following string contains details reported by AccEvent in response to the event being raised by the demo code above. The highlighted text of "NotificationLabel" in the UIA ProviderDescription property shows that the event was raised through the NotificationLabel class defined above.

 

UIA:NotificationEvent    [NotificationKind_ActionCompleted, NotificationProcessing_All, displayString:"Attempt to buy something did not complete, due to network connection being lost.", activityId:"4F3A7213-6AF5-42D3-8DDD-C50AB83AE782"] Sender: ControlType:UIA_TextControlTypeId (0xC364), ProviderDescription:"[pid:15796,providerId:0x10085E Main:Nested [pid:8732,providerId:0x10085E Main(parent link):Unidentified Provider (managed:UiaRaiseNotificationEvent_WinForms.NotificationLabel, UiaRaiseNotificationEvent_WinForms, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null)]; Hwnd(parent link):Microsoft: HWND Proxy (unmanaged:uiautomationcore.dll)]"

 

Summary

In some scenarios, the question of whether your app makes your customers aware of certain information can be critically important to your customers. So once you're convinced that your customers would benefit from some announcement which isn't being made by default, please do consider options around how it might be practical to have that announcement made given your situation. Exactly what is practical will depend on the type of your desktop app, and the versions of Windows or the .NET Framework that are available to you. But you'd not want to miss an opportunity to help as many of your customers as possible, and now, the UIA Notification event might be an important new option for you.

Guy

Changes to hashing algorithm for self-signed certificate in SQL Server 2017

Starting with SQL Server 2005, a self-signed certificate is created automatically during startup to be used for channel encryption. By default, credentials in the login packet that are transmitted when a client application connects to SQL Server are always encrypted using this certificate if a certificate has not been explicitly provisioned for SSL/TLS. Optionally, the self-signed certificate can also be used to enable channel encryption. SSL/TLS connections that are encrypted using a self-signed certificate do not provide strong security, so it is strongly recommended that a certificate obtained from a trusted certification authority be used.

Until SQL Server 2016, the self-signed certificate was created using the SHA1 algorithm. However, SHA1 and many other older algorithms were deprecated beginning with SQL Server 2016. Refer to this Books Online article for more information.

Beginning with SQL Server 2017, the self-signed certificate uses the SHA256 algorithm, which is more secure than SHA1. Having said that, we still recommend using a certificate obtained from a trusted certification authority for channel encryption.
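
If you want to confirm what your instance loaded at startup, the errorlog mentions the self-generated certificate. A minimal sketch, assuming a default SQL Server 2017 instance and the standard log path:

# Search the current errorlog for the self-generated certificate startup message.
Select-String -Path 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Log\ERRORLOG' `
    -Pattern 'self-generated certificate'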

NAV Development Preview – November Update

Welcome to yet another update for the Developer Preview; we're now introducing the November update. As usual, we've fixed a lot of issues reported by you; for more information, view the list of fixed issues. In addition to that, we're announcing the changes listed below. To jump directly to an updated image, go to the Azure Gallery sign-up at http://aka.ms/navdeveloperpreview.


Application dependency 

It is no longer necessary to specify the application locale in app.json. Your AL package will be compiled against the application that is present on the server that you connect to. This allows you to write a single AL extension for multiple country versions, as long as you don't depend on country-specific code. If you do depend on country-specific code, you should only compile your app against a server set up for that country.

The application property in app.json has changed its format accordingly.

Old syntax:

"application": {
    "version": "11.0.0.0",
    "locale": "W1"
}

New simple syntax:

"application": "11.0.0.0"

This new approach allows you to submit a single app to AppSource that works with multiple country application databases as long as the base application code is compatible with your extension code.

Runtime packages 

For distributing extensions, it is now possible to generate "runtime" packages that do not contain any AL code, but instead contain the compiled files that the server uses.

First, you must have an extension developed and published to an on-premises instance. Then, to generate the package, connect to the server and find the extension by using the following PowerShell command:

  Get-NavAppRuntimePackage 

To publish and install the package, use the following PowerShell commands:

  Publish-NavApp 

  Install-NavApp 

Please note the limitations: this only works for on-premises installations, the debugging experience is very limited since no source code is available, and runtime packages cannot be used for submissions to AppSource.
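
Put together, with hypothetical server instance, app name, and paths, the flow could look like the sketch below. The parameter names follow the NAV administration shell; verify them with Get-Help on your installation:

# Generate a runtime package from an extension that is already published to the instance.
Get-NAVAppRuntimePackage -ServerInstance DynamicsNAV110 -AppName 'My Extension' -Version 1.0.0.0 -Path 'C:\Temp\MyExtension_Runtime.app'

# Publish and install the runtime package on the target on-premises instance.
Publish-NAVApp -ServerInstance DynamicsNAV110 -Path 'C:\Temp\MyExtension_Runtime.app' -SkipVerification
Install-NAVApp -ServerInstance DynamicsNAV110 -Name 'My Extension' -Version 1.0.0.0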

 

Translations     

We have improved the translation feature to include support for XLIFF files. To enable it, open your app.json file and add the following: 

  "features": "TranslationFile" 

Then, invoke the package command (Ctrl+Shift+B) in Visual Studio Code and you will get a /Translations folder with the .xlf files ready to be translated. The generated .xlf file can be used in many of the free or commercial tools for translators. All labels, label properties (Caption, Tooltip, etc.), and report labels will be included in the .xlf file and be ready for translation.

When you have specified the target language and translated your labels, include the renamed .xlf file in the /Translations folder. Make sure that its name is not the same as that of the generated file, as it would be overwritten.

The next time you publish your extension, it will be translated into your language of choice.

There are a few things to note: the ML properties (CaptionML, TooltipML, etc.), the old report label syntax, and TextConst do not get included in the .xlf file and will not be translated. Make sure to update your code from the old ML syntax (=ENU='asd';DEU='qwe') to the new label syntax described below.

The label syntax, in this example for the Caption property, is the following: 

Caption = 'Developer translation for %1',  Comment = '%1 is extension name', locked = false, MaxLength=999; 

The Comment, Locked, and MaxLength attributes are optional, and their order is not enforced.

Use the same syntax for report labels:

labels
{
    LabelName = 'Label Text', Comment='Foo', MaxLength=999, Locked=true;
}

And label data types:

var
    a : Label 'Label Text', Comment='Foo', MaxLength=999, Locked=true; 

Please note that you can use the new translation files approach only for objects from your extension. For translating the base application, you still need to use the .txt files.

For more information, see the documentation.

Running C/SIDE and AL side-by-side

We've implemented support for development using both C/SIDE and AL, as well as the in-client designer, side-by-side. When new objects are added or changed in C/SIDE, these changes must be reflected in the symbol download in Visual Studio Code using the AL Language extension. To enable this, new commands and arguments have been added to finsql.exe. For more information about running C/SIDE and AL side-by-side, see the documentation.
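
For example, generating symbol references after C/SIDE object changes can be scripted from a command prompt in the NAV client folder. The command below follows the side-by-side documentation; the database and server names are placeholders to replace with your own:

finsql.exe Command=generatesymbolreference, Database="Demo Database NAV (11-0)", ServerName=localhost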

As usual we encourage you to let us know how you like working with these additions and keep submitting suggestions and bugs. You can see all the filed bugs on our GitHub issues list (https://github.com/Microsoft/AL/issues). To get an updated image jump directly to the Azure Gallery signup at http://aka.ms/navdeveloperpreview.

For a list of our previous blog posts, see the links at the end of this post.

/NAV

Our earlier blog posts:

NAV Development Tools Preview - October Update

NAV Development Tools Preview - September Update

NAV Development Tools Preview - August Update

NAV Development Tools Preview - July Update

NAV Development Tools Preview - June Update

NAV Development Tools Preview - April Update

NAV Development Tools Preview - March Update

NAV Development Tools Preview - February Update

NAV Development Tools Preview - January Update

Announcing the Preview of Modern Development Tools for Dynamics NAV

 

Exploring Big Data: Course 5 – Delivering a Data Warehouse in the Cloud

Big Data with Sam Lester

 

(This is course #5 of my review of the Microsoft Professional Program in Big Data)

Course #5 of 10 – Delivering a Data Warehouse in the Cloud

Overview: The Big Data course titled “Delivering a Data Warehouse in the Cloud” walks you through the key concepts of a SQL Data Warehouse (DW) in Azure, including the steps to provision a DW, followed by lectures on designing tables and loading data, and concludes with big data integration with Hadoop using PolyBase. During the course, the four lab exercises require you to install numerous software applications used in a data warehouse environment, including SQL Server Management Studio (SSMS), SQL Server Data Tools (SSDT), Visual Studio Community, Visual Studio Code, the Azure Feature Pack for SSIS, and Azure Storage Explorer. The download and installation of these tools is part of the lab exercise, as opposed to the course providing a pre-built VM with the required software. As a result, you can’t complete labs #2-4 without going through this setup. I would have preferred an image with the required software since I’m very familiar with installing and configuring each of these applications. For those who don’t have experience with some of these tools, the course is a great way to walk through installation and basic functionality of data warehouse tools in addition to the Azure DW content.

Time Spent / Level of Effort: I spent about 10 hours total for this course. I watched the videos from part 1 on double speed and then finished the quiz from that section. At that point, I decided to do all four of the labs consecutively. This took me around 2.5-3 hours, but I felt like it was a great use of time to do them back-to-back since I could focus on everything in Azure, including the numerous tools introduced. After completing the labs, I went back to the videos to watch parts 2-4, followed by the quizzes, and then the final exam.

By completing the labs all at one time, I was able to minimize Azure costs by shutting down the VM when finished. Here is the resource usage for the SQL Data Warehouse VM while working through the labs.

Azure Data Warehouse with Sam Lester
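
If you want to script that cost control, pausing the data warehouse between lab sessions can be done from PowerShell. A sketch with placeholder resource names, assuming the AzureRM module:

# Pause the SQL Data Warehouse when a lab session ends...
Suspend-AzureRmSqlDatabase -ResourceGroupName 'BigDataLabs' -ServerName 'mylabserver' -DatabaseName 'LabDW'
# ...and resume it before the next session.
Resume-AzureRmSqlDatabase -ResourceGroupName 'BigDataLabs' -ServerName 'mylabserver' -DatabaseName 'LabDW'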

Course Highlight: It felt like completing the labs took an exceptionally long time due to the required installation steps, but the payoff after finishing the course is that I have a great Azure DW demo environment to continue to use for demos and presentations. I also enjoyed the videos on PolyBase since I haven't had a chance to explore it for a customer-related project to date. To me, this chance to watch videos and build solutions around popular topics (such as PolyBase) while going through the program is a huge benefit that helps me remain relevant in so many interesting areas of the data industry. The other aspect of the labs that I enjoyed was the process of executing the same step through two different tools, for example, uploading data through bcp and Azure Storage Explorer, as well as running TSQL through both Visual Studio and SSMS.

Suggestions: The final exam for this course was harder than any of the previous 15 edX courses I have taken, as it didn't feel like the videos and labs prepared you directly for the questions. I found most of the answers by reading documentation and trying out the scenarios in the lab environment. Don't forget to fill out the final survey / question after completing the course, as this contributes to your score. Also notice that I took course #5 directly after completing course #1, since there is no requirement for the ordering of the classes as long as you stay within the required schedule.

There are also a few small items I encountered in the labs where the documentation is out of date because the product functionality has been updated. One example is populating a table using Azure Data Factory in Lab 3. The default Data Factory version is now V2, but the lab instructions rely on the Copy Data functionality as it appears in version 1 (V1). Creating the Data Factory using version 1 allows you to continue with the lab as documented.

[Image: Azure Data Factory]

There is another small issue when using TSQL with Polybase to load data. The provided TSQL code begins with the line "CREATE MASTER KEY;". Since the database master key already exists, the script fails. You can work around this by removing that single line and continuing with the lab.
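Alternatively, if you'd rather not delete the line from the lab script, you can guard the statement so it only runs when no master key exists yet. This is a minimal sketch, assuming the sys.symmetric_keys catalog view is queryable in the lab database:

-- Create the database master key only if one doesn't already exist
-- ('##MS_DatabaseMasterKey##' is the fixed name SQL Server uses for it).
IF NOT EXISTS (SELECT 1 FROM sys.symmetric_keys WHERE name = '##MS_DatabaseMasterKey##')
    CREATE MASTER KEY;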

Overall, it was a very educational course that covered a lot of material and introduced several software applications used in a data warehouse environment. If you have taken this course in the past or are going through it now, please leave a comment and share your experience.

Thanks,
Sam Lester (MSFT)


Automated deployments Performance Degradation in West Europe – 11/08 – Mitigated


Final Update: Wednesday, November 8th 2017 18:20 UTC

We’ve confirmed that all systems are back to normal as of 11/8/2017 17:36 UTC. Our logs show the incident started on 11/8/2017 12:00 UTC and that during the 5 hours and 36 minutes it took to resolve the issue, customers experienced a delay between when their builds completed and when their releases were triggered. Sorry for any inconvenience this may have caused.

  • Root Cause: The failure was due to a stuck job in one of our backend services.
  • Chance of Recurrence: High
  • Lessons Learned: We are working on minimizing resource-intensive activities in our post-deployment steps, and we are also targeting monitors specifically at detecting post-deployment issues in the future.
  • Incident Timeline: 5 hours & 36 minutes – 11/8/2017 12:00 UTC through 11/8/2017 17:36 UTC

Sincerely,
Randy


Initial Update: Wednesday, November 8th 2017 16:34 UTC

  • We're investigating a delay in builds triggering Release Management releases in West Europe.
  • We have traced this back to a stuck job that processes these triggers.
  • We have rebooted and collected a dump of this job in order to investigate the issue further.
  • Currently there is a backlog of triggers to process, so customers can expect a delay between when their builds complete and when their releases are triggered.
  • We expect to be caught up with the backlog within 1 to 2 hours. We will update this post when complete.

Next Update: Before Wednesday, November 8th 2017 17:05 UTC

Sincerely,
Randy

Cumulative Update 25 for Microsoft Dynamics NAV 2016 has been released


Cumulative Update 25 includes all application and platform hotfixes and regulatory features that have been released for Microsoft Dynamics NAV 2016.

The cumulative update is intended mainly for solutions that are experiencing the problems described in the Knowledge Base article linked to below. However, you are advised to always keep your solution updated with the latest cumulative update. If you are in doubt about whether this cumulative update addresses your specific problem, or if you want to confirm whether any special compatibility, installation, or download issues are associated with this cumulative update, support professionals in Customer Support Services are ready to help you. For more information, see http://support.microsoft.com/contactus/.

The cumulative update includes hotfixes that apply to all countries and hotfixes specific to the following local versions:

Where to find Cumulative Update 25

You can see the changes included in this cumulative update in KB4052194 – Cumulative Update 25 for Microsoft Dynamics NAV 2016.

You can select one of the countries in the list above for a direct download, or you can download the cumulative update from the Microsoft Download Center.

Microsoft Dynamics NAV 2016 is also available in containers. Containers are a way to wrap up an application in its own isolated box; you can pull the cumulative update containers from Docker Hub.
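For example, pulling a container image for this update from the command line might look like the following sketch. The microsoft/dynamics-nav repository exists on Docker Hub, but the exact tag for Cumulative Update 25 shown here is an assumption, so verify the published tags first.

# Illustrative only - confirm the exact CU 25 tag on Docker Hub
docker pull microsoft/dynamics-nav:2016-cu25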

To learn more about other Cumulative Updates already released for Microsoft Dynamics NAV 2016, please see KB 3108728.

Warning

Before you install a cumulative update in a production environment, take the following precautions:

  1. Test the cumulative update in a non-production environment.
  2. Make a backup of the system or computer where the cumulative update is to be installed.

Additional Information

For information about how to install the cumulative update, see How to Install a Microsoft Dynamics NAV 2016 Cumulative Update. Note that if you upgrade to this cumulative update from a version older than Microsoft Dynamics NAV 2016 Cumulative Update 6, you must run the development environment with elevated rights (run as administrator).

Visual Studio Toolbox: Dependency Injection


In this episode, I am joined by Miguel Castro, who explains dependency injection, which makes it easy to resolve classes at any level in any size object hierarchy, all the while making your application much easier to test. Miguel demystifies this important technique and explains why and how you should use it in every project.
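To make the idea concrete, here is a minimal C# sketch of constructor injection; the IMailer and OrderService names are hypothetical and not from the episode:

public interface IMailer
{
    void Send(string to, string body);
}

public class SmtpMailer : IMailer
{
    // Stand-in for a real SMTP call.
    public void Send(string to, string body) =>
        System.Console.WriteLine($"Sending to {to}: {body}");
}

public class OrderService
{
    private readonly IMailer _mailer;

    // The dependency arrives through the constructor instead of being
    // constructed internally, so tests can pass in a fake IMailer.
    public OrderService(IMailer mailer) => _mailer = mailer;

    public void PlaceOrder(string customerEmail) =>
        _mailer.Send(customerEmail, "Your order is confirmed.");
}

public static class Program
{
    public static void Main()
    {
        // Composition root: the single place that picks the concrete types.
        var service = new OrderService(new SmtpMailer());
        service.PlaceOrder("customer@example.com");
    }
}

In a larger application, a DI container would replace the manual wiring in Main, resolving the whole object graph for you.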

Experiencing Data Access Issue in Azure Portal for Availability Data Type – 11/08 – Resolved

Final Update: Wednesday, 08 November 2017 18:14 UTC

We've confirmed that all systems are back to normal with no customer impact as of 11/08, 10:10 UTC. Our logs show the incident started on 11/08, 09:35 UTC and that during the 35 minutes it took to resolve the issue, 10% of customers experienced issues accessing data in the Azure portal and the App Analytics portal.
  • Root Cause: The failure was due to one of our back-end services responsible for data retrieval in the portal.
  • Incident Timeline: 35 minutes - 11/08, 09:35 UTC through 11/08, 10:10 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Sapna


Initial Update: Wednesday, 08 November 2017 17:56 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers may experience data access issues in the Azure portal and the Application Analytics portal. The following data types are affected: Availability.
  • Work Around: None
  • Next Update: Before 11/08 20:00 UTC

We are working hard to resolve this issue and apologize for any inconvenience.


-Sapna

Monitoring Team Foundation Server 2018


Monitoring on-premises Team Foundation Server deployments is an important part of keeping them running smoothly, especially for large enterprise deployments. Good monitoring can help administrators avoid issues before they impact end users, as well as react quickly when user impacts do occur.

TFS has shipped management packs for System Center Operations Manager since way back in 2008. See https://www.microsoft.com/download/details.aspx?id=14720 for that original download, and https://www.microsoft.com/download/details.aspx?id=54791 for the latest TFS 2017 version. These management packs have largely offered the same functionality for all these years.

More recently, we’ve learned a lot about monitoring critical services by continually improving our DevOps practices for Visual Studio Team Services. Some key lessons learned include:

  • Outside-in monitoring is not enough. It is reactive, rather than proactive. And it’s not fine-grained enough to give a good picture of end-user experience.
  • Signal-to-noise ratio is key – noisy or unreliable monitors end up getting disabled or ignored.

In VSTS we have internalized these lessons by supplementing outside-in monitoring with proactive monitoring of infrastructure health (application-tier CPU utilization, for example) and by fine-tuning our alerts to make sure they are actionable.

With all this in mind, we took a step back to think about what we should recommend (and what we should ship) for monitoring Team Foundation Server deployments.

TL;DR – we recommend using System Center with the SQL, IIS, and Windows management packs plus a few custom monitors/alerts configured using built-in System Center Operations Manager capabilities. We believe this configuration will be just as easy to set up and will provide better monitoring than the previous TFS management packs. As a result, we do not plan to ship a TFS management pack for TFS 2018.

If you are currently using one of the TFS management packs, please read the recommendations below for more details. If they do not ring true for you and you believe the TFS management packs provide significant value, please let me know in the comments below and/or reach out to me at aaronha at microsoft dot com.

SQL, IIS, and Windows Management Packs

These three management packs provide a wealth of information about the underlying software on which a TFS deployment relies. Each of them can be easily installed from the management pack catalog.

The SQL management packs (there are multiple, for the various versions of SQL Server) cover a lot of ground, from checking for installation of the latest service packs through CPU utilization and disk space availability. To learn more, download the SQL 2016 management pack guide at https://www.microsoft.com/download/details.aspx?id=53008.

The IIS management packs (again there are multiple, for the various versions of IIS) primarily monitor the availability of your web sites and their associated application pools. They can also be used for performance monitoring scenarios. To learn more, download the IIS 10 management pack guide at https://www.microsoft.com/download/details.aspx?id=54445.

The Windows management packs (again there are multiple, for the various versions of Windows) cover a ton of ground. Monitoring and alerting include disks and disk partitions, processors and CPU utilization, network adapters and bandwidth usage, and memory utilization.

Between these three types of management packs, you can get quite extensive monitoring of your TFS deployments, from the ASP.NET web layer through the SQL backend and all the way down to the underlying OS. Much of the data they provide can be used to fix issues – resource constraints, for example – before they start impacting end users.

Web Application Transaction Monitoring

A simple availability monitor for your TFS deployments can be set up by following the general instructions at https://technet.microsoft.com/library/hh457553.aspx. The simplest approach is to start with a Single URL monitor. My recommendation would be to hit the ProjectCollections REST endpoint, which will interact with the configuration database and return the list of team project collections for the deployment. The full URL will look something like http://mytfs:8080/tfs/_apis/ProjectCollections.
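As a quick sanity check before configuring the monitor in System Center, you can hit the same endpoint yourself with integrated Windows credentials. This is a hedged C# sketch; the URL is a placeholder for your own deployment:

// Issues a GET against the ProjectCollections endpoint with the current
// Windows credentials, mirroring what the SCOM monitor will do.
using System;
using System.Net.Http;

public static class EndpointCheck
{
    public static void Main()
    {
        var handler = new HttpClientHandler { UseDefaultCredentials = true };
        using var client = new HttpClient(handler);

        // Placeholder URL - substitute your own TFS server name.
        var response = client.GetAsync("http://mytfs:8080/tfs/_apis/ProjectCollections").Result;
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}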

Make sure to set up the properties of the web application with the appropriate User Account and Authentication Method. See https://technet.microsoft.com/library/hh457542.aspx for more information here. Typically, Authentication Method should be Negotiate, and the User Account should be a user who has read access to the monitored TFS deployment(s). If you want to get fancy, you can alert on slow performance, response content, and so forth.

Once you’ve set things up, you can configure additional requests to monitor your other TFS deployments, if you have more than one.

Monitoring the TFS Job Agent, and TFS Build Agents

Build resource monitoring was removed from the TFS 2017 management pack, and back then I wrote up a recommended process for monitoring these resources. See https://blogs.msdn.microsoft.com/devops/2017/03/28/monitoring-build-resources-with-the-tfs-2017-management-pack/.

The TFS Job Agent service, which is used to run long-running background tasks, can be monitored using the same approach documented for monitoring XAML build resources – the Windows Service management pack template.
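If you just want to spot-check the job agent outside of System Center, a few lines of C# will do. The service name "TFSJobAgent" is my assumption for a default install, so verify it in services.msc on your application tier:

using System;
using System.ServiceProcess;

public static class JobAgentCheck
{
    public static void Main()
    {
        // "TFSJobAgent" is assumed to be the service name - verify on your server.
        using var sc = new ServiceController("TFSJobAgent");
        Console.WriteLine($"TFS Job Agent status: {sc.Status}");
    }
}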

TFS Management Packs

If you follow the above recommendations, TFS management packs will not provide any significant additional value. The old TFS management packs all had the same basic capabilities:

  1. Outside-in monitoring through pinging of a variety of web service endpoints.
  2. Event log monitoring for a variety of errors.
  3. Windows Service availability monitoring for the TFS job agent process.

The outside-in monitoring in the TFS management packs does similar work to the web application transaction monitor recommended above, but in a noisier way – in virtually all failure scenarios, all seven of the web service endpoints it monitors will start failing at once.

The Event Log monitoring is not covered by the recommendations above, but it is again quite noisy. Dozens of monitors are provided for individual events. Most of these are not actionable, meaning that no reasonable instructions are provided for fixing the underlying issue. For example:

TFS Event 3076 occurred. This is raised by the catalog service when a catalog entry has a missing parent node. This could indicate database inconsistencies. Check the health of the SQL Server configuration database.

If for some reason you did want an additional layer of monitoring around TFS Event Log errors, it is easy enough to use NT Event Log alerts (a built-in System Center Operations Manager capability – see https://technet.microsoft.com/library/ff730470.aspx) to create either a blanket alert for all TFS errors in the event log, or specific alerts for subsets of errors. This is not recommended unless you want to set the priority and severity of these issues to something low and then review them periodically.
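To get a feel for what such a blanket alert would match, the following C# sketch scans the Application log for error entries from TFS-related sources. The "TFS" source filter is an assumption on my part; check which source names your TFS version actually registers:

using System;
using System.Diagnostics;

public static class TfsEventScan
{
    public static void Main()
    {
        // Walk the Application log and print errors whose source looks TFS-related.
        var log = new EventLog("Application");
        foreach (EventLogEntry entry in log.Entries)
        {
            if (entry.EntryType == EventLogEntryType.Error &&
                entry.Source.IndexOf("TFS", StringComparison.OrdinalIgnoreCase) >= 0)
            {
                Console.WriteLine($"{entry.TimeGenerated}: {entry.Source} ({entry.InstanceId})");
            }
        }
    }
}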

Finally, the TFS Job Agent monitoring doesn’t provide any additional value beyond the Windows Service monitor recommended above.

Setup and discovery

Another part of the value of the TFS management packs was meant to be getting TFS monitoring out of the box, without having to do all the manual setup of monitors and alerts described above. Getting the TFS management packs up and running was a rather cumbersome process, however, which you can read all about in Appendix A of the management pack guide available for download at https://www.microsoft.com/download/details.aspx?id=54791. Discovery was not automatic, since each server on which TFS resources reside needed to be configured to allow it to act as a proxy. Given all of this, setting up the individual monitors discussed above should be comparatively straightforward.

I believe these new recommendations should provide monitoring of TFS deployments that is as good or better than the old TFS management packs, and that is just as easy to configure. If you are a current user of one of the existing TFS management packs and this assessment or these new recommendations don’t ring true – please let me know.
