Top 10 resources I use to keep up to date with new innovations on the Azure Platform
Guest post by Rachel Peck, Partner Business Evangelist, Microsoft Australia
On average, new functionality and innovations are released on the Microsoft Azure platform every 36 hours. Trying to keep abreast of all these new innovations is hard, and when you add to that the complexity of everyone’s individual learning preferences, there isn’t a “one size fits all” answer to keeping up to date.
Top 10 resources I use to keep up to date with new innovations on the Azure Platform:
- Microsoft Azure Blog
- My first port of call for all new announcements, product team points of view, insights, and guides. This lands in my inbox daily via RSS.
https://azure.microsoft.com/en-gb/blog/
- Microsoft Azure Services Updates
- One place for daily updates on Azure services. This provides more granular detail on every Azure feature announcement
https://azure.microsoft.com/en-us/updates/
- Microsoft Azure Roadmap
- This product roadmap is the place to find out what’s new and what’s coming next. I know a lot of partners use this before they build out any of their own IP. You can also subscribe to notifications, so you’ll always be in the know.
https://azure.microsoft.com/en-us/roadmap/
- Microsoft Azure Feedback Forum
- Review ideas and feedback from other users of the Azure Platform. The Microsoft Azure Product team use this to prioritise new Azure services and features:
https://feedback.azure.com/forums/34192--general-feedback
- Microsoft Azure YouTube Video Channel
- Short, sharp videos on Azure services, customers, and partners. From quick introductions to detailed “How to” guides. I use this to get the 101 on any of the new Azure services.
https://www.youtube.com/microsoftazure
- Microsoft Azure Podcast
- Short podcasts on Azure, covering a weekly roundup of announcements and then a drill-down into a specific service or feature. The podcast is released weekly, with a mixture of Microsoft, customer, and partner contributors. As an avid runner, I use these to clock up the miles and learn at the same time.
http://azpodcast.azurewebsites.net/
- Microsoft Azure Newsletter
- Stay informed on the latest Azure features, events, and community activities. Issued monthly, the newsletter provides a nice regular digest.
https://azure.microsoft.com/en-us/community/newsletter/
- Endjin Azure Weekly newsletter
- It’s not just Microsoft issuing great content. Azure Weekly hits my inbox on a Monday morning, and it provides a great summary of the week’s top news in the Microsoft Azure ecosystem. Created by Microsoft Gold Cloud Platform partner Endjin (https://endjin.com/), it’s aimed at developers, architects, IT Managers, infrastructure folk, or anyone trying to keep on top of the latest Azure developments.
http://azureweekly.info/
- Microsoft Azure Advisors
- With Azure Advisors, you can learn about new product updates early and influence their design and functionality. This is a private engineering feedback community for organizations that want a direct relationship with the engineering teams behind Azure. It is designed to facilitate the exchange of feedback, ideas, and best practices between engineers.
Microsoft Azure Advisory Council
- Use your FREE Azure time to discover New Azure Services
- All the PowerPoint slides, videos, blog posts, tweets, emails, podcasts, and newsletters aren’t going to show the real value in these services. Sometimes it’s just as easy to discover and trial these services directly from the Azure portal. Here are a couple of ways you can access FREE Azure:
- FREE Trial Azure Credit – $200 of FREE Azure Credit for All
- Visual Studio Subscriptions – Up to $150 of FREE Azure
- Visual Studio Dev Essentials – Up to $25 of FREE Azure per month
- Microsoft IT Pro Cloud Essentials – Up to $25 of FREE Azure per month
- Microsoft Partner Network – Silver or Gold Cloud Platform Competency Partners get a FREE allowance
- Not for Profit organisations – Up to $5000 in FREE credits annually
The Future of Partnering in the Channel | Guest post by Tamara Hodkinson – CompTIA
CompTIA is the largest not-for-profit independent IT Channel Trade Association in the world and has set up an IT channel community here in ANZ. Coming into their second year in ANZ, CompTIA are holding their 7th ANZ Channel Community meeting in Sydney on July 20, to explore the topic of ‘The Future of Partnering in the Channel’.
Karen Drewitt of The Missing Link, a valued Microsoft Gold Partner, is the current Chair of the CompTIA Executive Council here in ANZ, and Emma Tomlin of Microsoft sits on the Executive Council.
See below for what they have to say about why they got involved in CompTIA:
Karen, why did you get involved in CompTIA and what value do you see for people getting involved in the ANZ community?
I love the IT industry and feel very fortunate to be part of a vibrant channel. Involvement in CompTIA offers me the opportunity to give back to the IT community and help drive improvement.
I am very proud to be part of the ANZ community and of the level of involvement and commitment from many of the members. For some members, the value from membership can be through education, networking events, or the incredible number of tools, insights and legal documents that can be accessed. For others it may be the opportunity to get involved in initiatives such as Dream IT or our nominated charity for 2017, Young ICT Explorers. However members choose to get involved, I’m confident they will see value in the benefits CompTIA provides.
Emma, why did you get involved with CompTIA, and since your involvement what have you seen CompTIA bring to the ANZ IT Channel community?
I have worked in the IT industry for 14 years across the UK and Australian markets, and in that time I have seen the industry change beyond recognition. As we truly begin this new digital age, the way our partner community engages with and delivers solutions to end customers will irrevocably change.
As we move forward into the coming years, our industry will continue to be disrupted by the huge leaps forward in the technological change we see in the world today. For me, as an IT professional and someone who is passionate about supporting our partners through this journey, CompTIA is the association that will help guide and support our industry in these ambiguous times and enable our partners to succeed in the future. CompTIA, for me, is the place to help partners on that journey in ANZ.
Come along to the July 20th Community meeting in Sydney to see the community in action and learn what some of our industry leaders believe the future holds for Partnering in the Channel. It is FREE to attend for all IT Channel Professionals and not only offers you a fantastic opportunity to hear from some of our industry’s thought leaders but also to network with peers and colleagues.
Make practical use of NSG Flow Logs for troubleshooting
So, you’ve found the new Azure Network Watcher features.
Cool, now I can start getting some real information about what’s going on in my Azure Network.
The NSG flow logs section of the Network Watcher blade, in the Azure Portal, lets you specify a Storage Account for each NSG (Network Security Group) to output detailed information on all the traffic hitting the NSG.
Note: The Storage Account must be in the same region as the NSG
Awesome, right?
Then you try to use that information and find there is a heap of sub-folders created for each Resource Group and NSG, all with separate files for each hour of logging.
Also, they are all in JSON format and really hard to find anything specific.
I don’t know about you, but I found it really hard to troubleshoot a particular network communication issue with a specific VM and/or Subnet.
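For context, here is a trimmed-down sketch of what one of these log records looks like (the values and names below are illustrative, not taken from a real capture):

```json
{
  "records": [
    {
      "time": "2017-07-06T02:53:02.3410000Z",
      "category": "NetworkSecurityGroupFlowEvent",
      "resourceId": "/SUBSCRIPTIONS/<subscriptionId>/RESOURCEGROUPS/<rg>/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/<nsg>",
      "properties": {
        "Version": 1,
        "flows": [
          {
            "rule": "DefaultRule_DenyAllInBound",
            "flows": [
              {
                "mac": "000D3A123456",
                "flowTuples": [
                  "1499309582,198.51.100.7,10.2.0.4,35370,23,T,I,D"
                ]
              }
            ]
          }
        ]
      }
    }
  ]
}
```

Each flowTuples entry packs the timestamp, source and destination IPs and ports, protocol, direction, and the accept/deny decision into a single comma-separated string, which is exactly what the script below unpacks.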
So, I’ve written the following PowerShell script that pulls all these files from the Storage Account down to a local folder and then builds an array of these ‘Events’.
The properties built out into the array are:
ResourceGroup, NSG, Ref, SrcIP, DstIP, SrcPort, DstPort, Protocol ‘T(TCP)/U(UDP)’, Direction ‘O(Outbound)/I(Inbound)’, Type ‘A(Accept)/D(Denied)’, Time, RuleType, and RuleName.
This is then presented in a GridView that can be sorted and filtered to find what you are looking for.
I’ve also added an option at the end to dump it out to a CSV if you need to send the output to someone else.
#Enter the Storage Account Name where the NSG Flow Logs have been configured to be stored
$StorageAccountName = "jrtnetworkwatcher"

$LocalFolder = "$($env:TEMP)\AzureNSGFlowLogs"
#If you want to override where the local copy of the NSG Flow Log files go to, un-comment the following line and update
#$LocalFolder = "C:\Temp\AzureNSGFlowLogs"

$TimeZoneStr = (Get-TimeZone).Id
#If you want to override the Timezone to something other than the locally detected one, un-comment the following line and update
#$TimeZoneStr = "AUS Eastern Standard Time"

$StorageResource = Find-AzureRmResource -ResourceNameContains $StorageAccountName -ResourceType Microsoft.Storage/storageAccounts
$ResourceGroupName = $StorageResource.ResourceGroupName
$StoKeys = Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName
$StorageAccountKey = $StoKeys[0].Value
$SubscriptionID = (Get-AzureRmContext).Subscription.Id
$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey

#Function to create the sub folders as required
function MakeFolders ( [object]$Path, [string]$root )
{
    if (!(Test-Path -Path $root))
    {
        $rootsplit = $root.Split("\")
        $tPath = ''
        foreach ($tPath in $rootsplit)
        {
            $BuildPath = "$($BuildPath)$($tPath)\"
            $BuildPath
            if (!(Test-Path -Path $BuildPath)) { mkdir $BuildPath }
        }
    }
    $Build = "$($root)\"
    foreach ($fld in $path)
    {
        $Build = "$($Build)$($fld)\"
        #$Build
        mkdir "$($build)" -ErrorAction SilentlyContinue
    }
} #End function MakeFolders

#Function to get the Timezone information
function GetTZ ( [string]$TZ_string )
{
    $r = [regex] "\[([^\[]*)\]"
    $match = $r.match($($TZ_string))
    # If there is a successful match for a Timezone ID
    if ($match.Success)
    {
        $TZId = $match.Groups[1].Value
        # Try and get a valid TimeZone entry for the matched TimeZone Id
        try { $TZ = [System.TimeZoneInfo]::FindSystemTimeZoneById($TZId) }
        # Otherwise assume UTC
        catch { $TZ = [System.TimeZoneInfo]::FindSystemTimeZoneById("UTC") }
    }
    else
    {
        try { $TZ = [System.TimeZoneInfo]::FindSystemTimeZoneById($TZ_string) }
        catch { $TZ = [System.TimeZoneInfo]::FindSystemTimeZoneById("UTC") }
    }
    return $TZ
} #end function GetTZ

#Set the $TZ variable up with the required TimeZone information
$TZ = GetTZ -TZ_string $TimeZoneStr

#Get all the blobs in the insights-logs-networksecuritygroupflowevent container from the specified Storage Account
$blobs = Get-AzureStorageBlob -Container "insights-logs-networksecuritygroupflowevent" -Context $StorageContext

#Build an array of the available selection criteria
$AvailSelection = @()
foreach ($blobpath in $blobs.name)
{
    $PathSplit = $blobpath.Split("/")
    $datestr = Get-Date -Year $PathSplit[9].Substring(2) -Month $PathSplit[10].Substring(2) -Day $PathSplit[11].Substring(2) -Hour $PathSplit[12].Substring(2) -Minute $PathSplit[13].Substring(2) -Second 0 -Millisecond 0 -Format "yyyy-MM-ddTHH:mm:ssZ" #e.g. 2017-07-06T02:53:02.3410000Z
    $blobdate = (Get-Date $datestr).ToUniversalTime()
    #Write-Output "RG: $($PathSplit[4])`tNSG: $($PathSplit[8])`tDate: $($blobdate)"
    $SelectData = New-Object psobject -Property @{
        ResourceGroup = $PathSplit[4]
        NSG           = $PathSplit[8]
        Date          = $blobdate
    }
    $AvailSelection += $SelectData
}

#Prompt user to select the required ResourceGroup(s), NSG(s), and hourly sectioned files
$SelectedResourceGroups = ($AvailSelection | Select-Object -Property "ResourceGroup" -Unique | Out-GridView -Title "Select required Resource Group(s)" -PassThru).ResourceGroup
$selectedNSGs = ($AvailSelection | Where-Object {$_.ResourceGroup -in $SelectedResourceGroups} | Select-Object -Property "NSG" -Unique | Out-GridView -Title "Select required NSG(s)" -PassThru).NSG
$SelectedDates = ($AvailSelection | Where-Object {$_.ResourceGroup -in $SelectedResourceGroups -and $_.NSG -in $selectedNSGs} | Select-Object -Property @{n="System DateTime";e={$_.Date}},@{n="Local DateTime";e={[System.TimeZoneInfo]::ConvertTimeFromUtc((Get-Date $_.Date),$TZ)}} -Unique | Sort-Object Date -Descending | Out-GridView -Title "Select required times (1 hour blocks)" -PassThru).'System DateTime'

#Loop though blobs and download any that meet the specified selection and are newer than those already downloaded
foreach ($blob in $blobs)
{
    $PathSplit = $blob.Name.Split("/")
    $blobdate = Get-Date -Year $PathSplit[9].Substring(2) -Month $PathSplit[10].Substring(2) -Day $PathSplit[11].Substring(2) -Hour $PathSplit[12].Substring(2) -Minute $PathSplit[13].Substring(2) -Second 0 -Millisecond 0
    if ($PathSplit[4] -in $SelectedResourceGroups -and $PathSplit[8] -in $selectedNSGs -and $blobdate -in $SelectedDates)
    {
        $fld = $blob.name.Replace("/","\")
        $flds = $fld.Split("\") | Where-Object {$_ -ne "PT1H.json"}
        $lcl = "$($LocalFolder)\$($fld)"
        MakeFolders -Path $flds -root $LocalFolder
        if (Test-Path -Path $lcl)
        {
            $lclfile = Get-ItemProperty -Path $lcl
            $lcldate = (Get-Date $lclfile.LastWriteTimeUtc)
        }
        else
        {
            $lcldate = Get-Date "1 Jan 1970"
        }
        $blobdate = $blob.LastModified
        if ($blobdate -gt $lcldate)
        {
            Write-Output "Copied`t$($blob.Name)"
            Get-AzureStorageBlobContent -Container "insights-logs-networksecuritygroupflowevent" -Context $StorageContext -Blob $blob.Name -Destination $lcl -Force
        }
        else
        {
            Write-Output "Leave`t$($blob.Name)"
        }
    }
}

#Get a list of all the files in the specified local directory
$Files = dir -Path "$($LocalFolder)\resourceId=\SUBSCRIPTIONS\$($SubscriptionID)\RESOURCEGROUPS" -Filter "PT1H.json" -Recurse

#Loop through local files and build $Events array up with files that meet the selected criteria.
$Events = @()
foreach ($file in $Files)
{
    $PathSplit = ($file.DirectoryName.Replace("$($LocalFolder)\",'')).Split("\")
    $blobdate = Get-Date -Year $PathSplit[9].Substring(2) -Month $PathSplit[10].Substring(2) -Day $PathSplit[11].Substring(2) -Hour $PathSplit[12].Substring(2) -Minute $PathSplit[13].Substring(2) -Second 0 -Millisecond 0
    $blobResourceGroup = $PathSplit[4]
    $blobNSG = $PathSplit[8]
    if ($blobResourceGroup -in $SelectedResourceGroups -and $blobNSG -in $selectedNSGs -and $blobdate -in $SelectedDates)
    {
        $TestFile = $file.FullName
        $json = Get-Content -Raw -Path $TestFile | ConvertFrom-Json
        foreach ($entry in $json.records)
        {
            $time = (Get-Date $entry.time).ToUniversalTime()
            $time = [System.TimeZoneInfo]::ConvertTimeFromUtc($time, $TZ)
            foreach ($flow in $entry.properties.flows)
            {
                $rules = $flow.rule.Split("_")
                $RuleType = $rules[0]
                $RuleName = $rules[1]
                foreach ($f in $flow.flows)
                {
                    $Header = "Ref","SrcIP","DstIP","SrcPort","DstPort","Protocol","Direction","Type"
                    $o = $f.flowTuples | ConvertFrom-Csv -Delimiter "," -Header $Header
                    $o = $o | Select-Object @{n='ResourceGroup';e={$blobResourceGroup}},@{n='NSG';e={$blobNSG}},Ref,SrcIP,DstIP,SrcPort,DstPort,Protocol,Direction,Type,@{n='Time';e={$time}},@{n="RuleType";e={$RuleType}},@{n="RuleName";e={$RuleName}}
                    $Events += $o
                }
            }
        }
    }
}

#Open $Events in GridView for user to see and filter as required
$Events | Sort-Object Time -Descending | Out-GridView

#Prompt to export to CSV.
$CSVExport = Read-Host "Do you want to export to excel? (Y/N)"
#If CSV required, export and open CSV file
if ($CSVExport.ToUpper() -eq "Y")
{
    $FileName = "$($LocalFolder)\NSGFlowLogs-$(Get-Date -Format "yyyyMMddHHmm").csv"
    $Events | Sort-Object Time -Descending | Export-Csv -Path $FileName -NoTypeInformation
    Start-Process "Explorer" -ArgumentList $FileName
}
Part 5 : SCOM 2012 R2 HealthService Event Reference / ConfigurationManager
ConfigurationManager
EventID=1100
Severity=Error Message=Property reference with id:”%5″ in workflow “%4”, running for instance “%3″ with id:”%2” cannot be resolved Workflow will not be loaded Management group “%1”
EventID=1101
Severity=Error Message=Host reference in workflow “%4”, running for instance “%3″ with id:”%2” cannot be resolved Workflow will not be loaded Management group “%1”
EventID=1102
Severity=Error Message=Rule/Monitor “%4” running for instance “%3″ with id:”%2” cannot be initialized and will not be loaded Management group “%1”
EventID=1103
Severity=Warning Message=Summary: %2 rule(s)/monitor(s) failed and got unloaded, %3 of them reached the failure limit that prevents automatic reload Management group “%1” This is summary only event, please see other events with descriptions of unloaded rule(s)/monitor(s)
EventID=1104
Severity=Error Message=RunAs profile in workflow “%4”, running for instance “%3″ with id:”%2” cannot be resolved Workflow will not be loaded Management group “%1”
EventID=1105
Severity=Error Message=Type mismatch for RunAs profile in workflow “%4”, running for instance “%3″ with id:”%2” Workflow will not be loaded Management group “%1”
EventID=1106
Severity=Error Message=Cannot access plain text RunAs profile in workflow “%4”, running for instance “%3″ with id:”%2” Workflow will not be loaded Management group “%1”
EventID=1108
Severity=Error Message=An Account specified in the Run As Profile “%7” cannot be resolved Specifically, the account is used in the Secure Reference Override “%6” %n%nThis condition may have occurred because the Account is not configured to be distributed to this computer To resolve this problem, you need to open the Run As Profile specified below, locate the Account entry as specified by its SSID, and either choose to distribute the Account to this computer if appropriate, or change the setting in the Profile so that the target object does not use the specified Account %n%nManagement Group: %1 %nRun As Profile: %7 %nSecureReferenceOverride name: %6 %nSecureReferenceOverride ID: %4 %nObject name: %3 %nObject ID: %2 %nAccount SSID: %5
EventID=1109
Severity=Informational Message=All credential references resolved successfully %n%n %n%nManagement Group: %1
EventID=1200
Severity=Informational Message=New Management Pack(s) requested Management group “%1″, configuration id:”%2″
EventID=1201
Severity=Informational Message=New Management Pack with id:”%1″, version:”%2″ received
EventID=1202
Severity=Warning Message=New Management Pack with id:”%1″, version:”%2″ conflicts with cached Management Pack Condition indicates wrong server configuration
EventID=1203
Severity=Warning Message=Management Pack with id:”%1″, version:”%2″ has been changed locally Management Pack cache file will be deleted and re-requested from server
EventID=1204
Severity=Informational Message=Management Pack with id:”%1″, version:”%2” is no longer used by HealthService and will be deleted from cache
EventID=1205
Severity=Informational Message=Configuration reload request received Configuration for management group “%1” will be reloaded
EventID=1206
Severity=Error Message=Rule/Monitor “%2”, running for instance “%3″ with id:”%4” failed, got unloaded and reached the failure limit that prevents automatic reload Management group “%1”
EventID=1207
Severity=Warning Message=Rule/Monitor “%4” running for remote instance “%3″ with id:”%2” will be disabled as it is not remotable Management group “%1″
EventID=1208
Severity=Warning Message=Ignoring file %1 as it is not a Management Pack
EventID=1209
Severity=Error Message=Management Pack with id:”%1″, version:”%2” has been requested “%3” times Management group “%4”
EventID=1210
Severity=Informational Message=New configuration became active Management group “%1″, configuration id:”%2”
EventID=1215
Severity=Informational Message=Suspending monitoring for instance “%3″ with id:”%2” as the instance maintenance mode is ON Management group “%1”
EventID=1216
Severity=Informational Message=Resuming monitoring for instance “%3″ with id:”%2” as the instance maintenance mode is OFF Management group “%1”
EventID=1217
Severity=Informational Message=The Microsoft Monitoring Agent running on computer “%1″ is suspended for the following reason:%n”%2”
EventID=1220
Severity=Error Message=Received configuration cannot be processed Management group “%1” The error is %2(%3)
EventID=1221
Severity=Informational Message=Received configuration successfully processed after failure(s) Management group “%1”
EventID=1230
Severity=Error Message=New configuration cannot be loaded, the error is %2(%3) Management group “%1”
EventID=1231
Severity=Informational Message=New configuration successfully loaded after failure(s) Management group “%1”
EventID=1232
Severity=Warning Message=Could not resolve override “%2” for workflow “%3” because the forbidden “$RunAs” token was found in the override value Management group “%1”
Source : Part 5 : SCOM 2012 R2 HealthService Event Reference / ConfigurationManager
Cengiz KUSKAYA
Part 6 : SCOM 2012 R2 HealthService Event Reference / ExecutionManager
ExecutionManager
EventID=4000
Severity=Error Message=A monitoring host is unresponsive or has crashed. The status code for the host failure was %1
EventID=4001
Severity=Error Message=ESE failure trying to remove module state. The status code for the failure was %1
Source : Part 6 : SCOM 2012 R2 HealthService Event Reference / ExecutionManager
Cengiz KUSKAYA
Part 7 : SCOM 2012 R2 HealthService Event Reference / HealthManager
HealthManager
EventID=5101
Severity=Error Message=Get the snapshot of current health states for selected instance failed with error: %7 %n%nInstance ID: %6 %nManagement Group ID: %5
EventID=5102
Severity=Error Message=Get the state of the monitor failed with error: %8 %n%nMonitor ID: %7 %nInstance ID: %6 %nManagement Group ID: %5
EventID=5200
Severity=Warning Message=One of the state change notification rules failed to process state change request %n%nManagement Group ID: %2 %nData item type: %1
EventID=5201
Severity=Warning Message=Failed to generate string representing post pending data item(s) This data will be lost %n%nManagement Group ID: %2 %nData item type: %1
EventID=5202
Severity=Warning Message=Failed to persist string representing post pending data item(s) This data will be lost %n%nManagement Group ID: %2 %nData item type: %1
EventID=5203
Severity=Warning Message=Failed to retrieve previously persisted data This data may be lost %n%nManagement Group ID: %2 %nData item type: %1
EventID=5204
Severity=Warning Message=Failed to store post pending data item(s), created from previously persisted string, internally This data will be lost %n%nManagement Group ID: %2 %nData item type: %1
EventID=5205
Severity=Warning Message=Failed to store post pending data item(s) internally while acknowledgement for previous post was pending delivery This data will be lost %n%nManagement Group ID: %2 %nData item type: %1
EventID=5206
Severity=Warning Message=In memory container (%1) had to drop data because it reached max limit Possible data loss
EventID=5207
Severity=Warning Message=Failed to store post pending data item(s) internally while no subscribed module was present This data will be lost %n%nManagement Group ID: %2 %nData item type: %1
EventID=5208
Severity=Warning Message=Failed to persist post pending data item(s) while shutting down local health service This data will be lost %n%nManagement Group ID: %2 %nData item type: %1
EventID=5209
Severity=Warning Message=Failed to store at least one posted data item internally It will be lost if acknowledgement is not delivered for original post %n%nManagement Group ID: %2 %nData item type: %1
EventID=5300
Severity=Error Message=Local health service is not healthy Entity state change flow is stalled with pending acknowledgement %n%nManagement Group: %2 %nManagement Group ID: %1
EventID=5301
Severity=Success Message=Entity state change flow in local health service resumed %n%nManagement Group: %2 %nManagement Group ID: %1
EventID=5302
Severity=Error Message=Local health service is not healthy Monitor state change flow is stalled with pending acknowledgement %n%nManagement Group: %2 %nManagement Group ID: %1
EventID=5303
Severity=Success Message=Monitor state change flow in local health service resumed %n%nManagement Group: %2 %nManagement Group ID: %1
EventID=5304
Severity=Error Message=Local health service is not healthy Alert flow is stalled with pending acknowledgement %n%nManagement Group: %2 %nManagement Group ID: %1
EventID=5305
Severity=Success Message=Alert flow in local health service resumed %n%nManagement Group: %2 %nManagement Group ID: %1
EventID=5399
Severity=Warning Message=A rule has generated %6 alerts in the last %7 seconds Usually, when a rule generates this many alerts, it is because the rule definition is misconfigured Please examine the rule for errors In order to avoid excessive load, this rule will be temporarily suspended until %8 %nRule: %2 %nInstance: %3 %nInstance ID: %4 %nManagement Group: %1
EventID=5400
Severity=Warning Message=Failed to replace parameter while creating the alert %n%nAlert: %6 %nWorkflow: %2 %nInstance: %3 %nInstance ID: %4 %nManagement Group: %1 %n%nFailing replacement: %7
EventID=5401
Severity=Warning Message=Failed to replace parameter while creating the alert for monitor state change %n%nWorkflow: %7 %nInstance: %8 %nInstance ID: %5 %nManagement Group: %6 %n%nFailing replacement: %4
EventID=5402
Severity=Warning Message=Parameter replacement during creation of the alert failed causing unexpected suppression used %n%nAlert: %6 %nWorkflow: %2 %nInstance: %3 %nInstance ID: %4 %nManagement Group: %1 %n%nFailing replacement: %7
EventID=5404
Severity=Warning Message=Invalid value for alert priority used with configuration of the rule It was outside of allowed range and had to be adjusted to closest valid value %n%nAlert: %5 %nWorkflow: %2 %nInstance: %3 %nInstance ID: %4 %nManagement Group: %1 %n%nUsed priority value: %6
EventID=5405
Severity=Warning Message=Invalid value for alert severity used with configuration of the rule It was outside of allowed range and had to be adjusted to closest valid value %n%nAlert: %5 %nWorkflow: %2 %nInstance: %3 %nInstance ID: %4 %nManagement Group: %1 %n%nUsed severity value: %6
EventID=5406
Severity=Warning Message=Incorrect value for health state was used while overriding monitor property ‘DefaultState’ It was not recognized and monitor registration will fail until it is corrected %nManagement Group: %5 %nManagement Group ID: %1 %nInstance: %7 %nInstance ID: %2 %nMonitor: %6 %n%nMonitor ID: %3 %n%nRequested State: %4
EventID=5407
Severity=Success Message=Invalid value for health state was used while overriding monitor property ‘DefaultState’ It was outside of allowed range and had been adjusted to use state ‘Success’ %nManagement Group: %5 %nManagement Group ID: %1 %nInstance: %7 %nInstance ID: %2 %nMonitor: %6 %n%nMonitor ID: %3 %n%nRequested State: %4
EventID=5408
Severity=Error Message=Failed to replace parameter while creating the alert It was possibly caused by incorrect XPATH and will result in rule unload %n%nAlert: %6 %nWorkflow: %2 %nInstance: %3 %nInstance ID: %4 %nManagement Group: %1 %n%nFailing replacement: %7
EventID=5409
Severity=Error Message=Failed to replace parameter while creating the alert for monitor state change It was possibly caused by incorrect XPATH and will result in monitor unload %n%nWorkflow: %7 %nInstance: %8 %nInstance ID: %5 %nManagement Group: %6 %n%nFailing replacement: %4
EventID=5500
Severity=Informational Message=Frequent state change requests caused the incoming state change request to be dropped due to it being older than the currently recorded state change for this monitor This could also be due to an invalid configuration for this monitor %n%nAffected monitor: %9 %nInstance: %10 %nInstance ID: %2 %nManagement Group: %8 %n%nRequest generated time: %4 %nRequested state: %6 %n%nRecorded time: %5 %nRecorded state %7
Source : Part 7 : SCOM 2012 R2 HealthService Event Reference / HealthManager
Cengiz KUSKAYA
Part 8 : SCOM 2012 R2 HealthService Event Reference / SecureStorageManager
SecureStorageManager
EventID=7000
Severity=Error Message=The Health Service could not log on the RunAs account %1%2 for management group %5 The error is %3(%4) This will prevent the health service from monitoring or performing actions using this RunAs account
EventID=7001
Severity=Warning Message=The password for RunAs account %1%2 for management group %5 is expiring on %3 If the password is not updated by then, the health service will not be able to monitor or perform actions using this RunAs account There are %4 days left before expiration
EventID=7002
Severity=Error Message=The Health Service could not log on the RunAs account %1%2 for management group %3 because it has not been granted the “%4” right
EventID=7004
Severity=Warning Message=The Health Service received a secure message from management group %1 which was encrypted using the wrong public key This message has been discarded and the public key will be re-published
EventID=7005
Severity=Error Message=The Health Service was unable to publish its public key to management group %1 and will be unable to receive secure messages until this key is published Attempts to publish the key will continue
EventID=7006
Severity=Success Message=The Health Service has published the public key [%2] used to send it secure messages to management group %1 This message only indicates that the key is scheduled for delivery, not that delivery has been confirmed
EventID=7007
Severity=Warning Message=The Health Service has defaulted to using the action account for RunAs Profile %1 because it did not receive any information about this RunAs Profile Depending on your configuration monitoring may not be able to proceed for modules which depend on this RunAs Profile
EventID=7008
Severity=Warning Message=The Health Service has defaulted to using the action account for RunAs Profile Id %1 requested by Rule %2 because it did not receive any information about this RunAs Profile Depending on your configuration, monitoring may not be able to proceed for modules which depend on this RunAs Profile Other rules may be affected and this event will not continue to be published for each failing rule
EventID=7009
Severity=Error Message=The Health Service cannot find the credential for the action account for management group %1 Monitoring will be significantly impacted
EventID=7010
Severity=Warning Message=The Health Service has downloaded new configuration for management group %1, and that configuration has specified a new action account, but the account is not of type ‘Action Account’ or ‘Windows Credential’ This change has been ignored as it would significantly impact monitoring Please fix the Action Account RunAs Profile to be an Action Account or Windows Credential
EventID=7011
Severity=Warning Message=The Health Service has downloaded a new account in management group %1, but the password is blank The Health Service does not support managing Windows credentials with blank passwords The account has not been updated If this account existed previously, it’s old password will be used and logon may fail If this is a new account, the action account will be used instead The name of the account has been withheld for security purposes
EventID=7012
Severity=Warning Message=The health service received a credential from management group %1 to run a task with, but the credential has a blank password and has been rejected for security purposes The account name has been withheld, also for security purposes
EventID=7013
Severity=Warning Message=The keypair that the health service uses to receive secure messages is expiring on %1 The health service will be recycled and the key regenerated
EventID=7014
Severity=Warning Message=The RunAs account %1%2 for management group %5 is expiring on %3 If the account expiration is not extended by then, the health service will not be able to monitor or perform actions using this RunAs account There are %4 days left
EventID=7015
Severity=Error Message=The Health Service cannot verify the future validity of the RunAs account %1%2 for management group %5 The error is %3(%4)
EventID=7016
Severity=Error Message=The Health Service cannot verify the future validity of the RunAs account %1%2 for management group %5 due to an error retrieving information from Active Directory (for Domain Accounts) or the local security authority (for Local Accounts) The error is %3(%4)
EventID=7017
Severity=Warning Message=The health service blocked access to the windows credential %1%2 because it is not authorized on management group %3 You can run the HSLockdown tool to change which credentials are authorized
EventID=7018
Severity=Error Message=The health service blocked access to the non-windows credential with ID %2 because it is not authorized on management group %2 You can run the HSLockdown tool to change which credentials are authorized
EventID=7019
Severity=Success Message=The Health Service has validated all RunAs accounts for management group %1
EventID=7020
Severity=Warning Message=The Health Service has validated all RunAs accounts for management group %1, except those we could not monitor
EventID=7021
Severity=Error Message=The Health Service was unable to validate any user accounts in management group %1
EventID=7022
Severity=Error Message=The Health Service has downloaded secure configuration for management group %1, and processing the configuration failed with error code %2(%3)
EventID=7023
Severity=Success Message=The Health Service has downloaded secure configuration for management group %1 successfully
EventID=7024
Severity=Success Message=The Health Service successfully logged on all accounts for management group %1
EventID=7025
Severity=Success Message=The Health Service has authorized all configured RunAs accounts to execute for management group %1
EventID=7026
Severity=Success Message=The Health Service successfully logged on the RunAs account %1%2 for management group %3
EventID=7027
Severity=Success Message=The Health Service authorized the RunAs account %1%2 to execute for management group %3
EventID=7028
Severity=Success Message=All RunAs accounts for management group %1 have the correct logon type
EventID=7029
Severity=Error Message=The Health Service has detected that the private key for secure data processing has been removed or is invalid The certificate and key will be regenerated
Source : Part 8 : SCOM 2012 R2 HealthService Event Reference / SecureStorageManager
Cengiz KUSKAYA
Part 9 : SCOM 2012 R2 HealthService Event Reference / DiscoveryManager
DiscoveryManager
EventID=10000
Severity=Warning Message=A scheduled discovery task was not started because the previous task for that discovery was still executing %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group name: %1
EventID=10001
Severity=Warning Message=An error occurred starting a discovery task %nError Code: %1 %n%nDiscovery name: %3 %nInstance name: %4 %nManagement group name: %2
EventID=10002
Severity=Warning Message=Unable to write discovery data for discovery The discovery task will be run again at its next scheduled time %nError Code: %1 %n%nDiscovery name: %3 %nInstance name: %4 %nManagement group name: %2
EventID=10003
Severity=Warning Message=Non-empty incremental data submitted and will be dropped %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group name: %1
EventID=10004
Severity=Warning Message=Discovery manager has detected an unsupported topology On demand discovery is being disabled and the health service will be restarted in legacy mode
EventID=10005
Severity=Warning Message=An unexpected error has occurred in a discovery task %nError: %1 %n%nDiscovery name: %3 %nInstance name: %4 %nManagement group name: %2
EventID=10006
Severity=Warning Message=Discovery task has timed out %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group name: %1
EventID=10007
Severity=Warning Message=Discovery task has been canceled unexpectedly %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group name: %1
EventID=10008
Severity=Warning Message=Discovery task has been suspended unexpectedly %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group name: %1
EventID=11350
Severity=Error Message=Discovery failed to initialize due to invalid schedule configuration %n%nInvalid Configuration: %1 %n%nDiscovery name: %3 %nInstance name: %4 %nManagement group: %2
EventID=11351
Severity=Error Message=Discovery failed to initialize due to invalid schedule configuration %n%nRejected Item: %1 %n%nRejected Value: %2 %n%nDiscovery name: %4 %nInstance name: %5 %nManagement group: %3
EventID=11352
Severity=Error Message=Discovery failed to initialize because the recurring interval can not be greater than the full re-sync interval for the schedule %n%nRecurring Interval Length (seconds): %1 %n%nFull Re-Sync Interval Length (seconds): %2 %n%nDiscovery name: %4 %nInstance name: %5 %nManagement group: %3
EventID=11353
Severity=Error Message=Discovery failed to initialize because the maximum number of schedule windows has been reached %n%nNumber of windows: %1 %n%nMaximum number of windows allowed: %2 %n%nDiscovery name: %4 %nInstance name: %5 %nManagement group: %3
EventID=11354
Severity=Error Message=Discovery failed to initialize because there is no schedule window specified %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group: %1
EventID=11355
Severity=Error Message=Discovery failed to initialize because some schedule window has a negative start %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group: %1
EventID=11356
Severity=Error Message=Discovery failed to initialize because some schedule window has a non-positive length of time %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group: %1
EventID=11357
Severity=Error Message=Discovery failed to initialize because some schedule windows overlap with each other %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group: %1
EventID=11358
Severity=Error Message=Discovery failed to initialize because the last and first schedule windows overlap %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group: %1
EventID=11359
Severity=Error Message=Discovery failed to initialize because the first schedule window is after the full interval %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group: %1
EventID=11360
Severity=Error Message=Discovery failed to initialize because the number of excluded dates in the schedule is greater than the maximum allowed %n%nNumber of excluded dates: %1 %n%nMaximum number allowed: %2 %n%nDiscovery name: %4 %nInstance name: %5 %nManagement group: %3
EventID=11361
Severity=Error Message=Discovery failed to initialize because the exclude date interval in the schedule failed to parse using the MM/dd format %n%nDay of the year in error: %1 %n%nDiscovery name: %3 %nInstance name: %4 %nManagement group: %2
EventID=11362
Severity=Error Message=Discovery failed to initialize because the specified schedule is not recognized as a weekly or simple recurring one %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group: %1
EventID=11363
Severity=Error Message=Discovery failed to initialize because the specified schedule interval number is out of range %n%nInterval: %1 %n%nMinimum Unit Number: %2 %n%nMaximum Unit Number: %3 %n%nDiscovery name: %5 %nInstance name: %6 %nManagement group: %4
EventID=11364
Severity=Error Message=Discovery failed to initialize because the specified schedule interval is greater than the maximum allowed %n%nInterval (seconds): %1 %n%nMaximum (seconds): %2 %n%nDiscovery name: %4 %nInstance name: %5 %nManagement group: %3
EventID=11365
Severity=Error Message=Discovery failed to initialize because the specified schedule contains an invalid multiple days interval The start and end mask validation failed Make sure to choose only one day of the week for each, and that they differ from each other %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group: %1
EventID=11366
Severity=Error Message=Discovery failed to initialize because some schedule window has no day of the week set %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group: %1
EventID=11367
Severity=Error Message=Discovery failed to initialize because some “hour in the day” in the schedule could not be parsed on the HH:mm format %n%nHour in the day: %1 %n%nDiscovery name: %3 %nInstance name: %4 %nManagement group: %2
EventID=11368
Severity=Error Message=Discovery failed to initialize because in a single day window, the start hour is greater than the end hour %n%nDiscovery name: %2 %nInstance name: %3 %nManagement group: %1
EventID=11369
Severity=Error Message=Discovery failed to initialize because some value in its schedule configuration is too long %n%nValue: %1 %n%nDiscovery name: %3 %nInstance name: %4 %nManagement group: %2
EventID=11370
Severity=Error Message=Discovery failed to initialize because the specified schedule spread initialization interval number is out of range %n%nInterval: %1 %n%nMinimum Unit Number: %2 %n%nMaximum Unit Number: %3 %n%nDiscovery name: %5 %nInstance name: %6 %nManagement group: %4
EventID=11371
Severity=Error Message=Discovery failed to initialize because the specified schedule spread initialization interval is greater than the maximum allowed %n%nInterval (seconds): %1 %n%nMaximum (seconds): %2 %n%nDiscovery name: %4 %nInstance name: %5 %nManagement group: %3
Source : Part 9 : SCOM 2012 R2 HealthService Event Reference / DiscoveryManager
Cengiz KUSKAYA
Part 10 : SCOM 2012 R2 HealthService Event Reference / DataPublisherManager
DataPublisherManager
EventID=8000
Severity=Warning Message=A subscriber data source in management group %1 has posted items to the workflow, but has not received a response in %5 minutes Data will be queued to disk until a response has been received This indicates a performance or functional problem with the workflow%n Workflow Id : %2%n Instance : %3%n Instance Id : %4
EventID=8001
Severity=Success Message=A subscriber data source in management group %1 has caught up processing queued data%n Workflow Id : %2%n Instance : %3%n Instance Id : %4
EventID=8002
Severity=Warning Message=A subscriber data source in management group %1 has groomed data due to the queue size of %5 MB being exceeded%n Workflow Id : %2%n Instance : %3%n Instance Id : %4
Source : Part 10 : SCOM 2012 R2 HealthService Event Reference / DataPublisherManager
Cengiz KUSKAYA
Part 11 : SCOM 2012 R2 HealthService Event Reference / JobManager
JobManager
EventID=9000
Severity=Error Message=The task status was changed to Failed on restart and was unable to be resumed because the System Center Management service was shut down
EventID=9001
Severity=Warning Message=The task is not active, and the task status was not correctly updated There is no way to determine if the task has run
EventID=9002
Severity=Error Message=The task is not active, and the task status was not correctly updated There is no way to determine if the task has run
EventID=9003
Severity=Error Message=The task attempted to suspend itself but could not because no suspend capability was defined in the task definition
EventID=9004
Severity=Error Message=The task could not be delivered to the targeted object’s System Center Management service
Source : Part 11 : SCOM 2012 R2 HealthService Event Reference / JobManager
Cengiz KUSKAYA
Part 12 : SCOM 2012 R2 HealthService Event Reference / PoolManager
PoolManager
EventID=15000
Severity=Informational Message=The pool member has initialized %n%nManagement Group: %1 %nManagement Group ID: %2 %nPool Name: %3 %nPool ID: %4 %nPool Version: %5 %nNumber of Pool Members: %6 %nNumber of Observer Only Pool Members: %7 %nNumber of Members Added: %8 %nNumber of Members Removed: %9 %nNumber of Instances: %10 %nNumber of Instances Added: %11 %nNumber of Instances Removed: %12
EventID=15001
Severity=Informational Message=More than half of the members of the pool have acknowledged the most recent initialization check request The pool member will send a lease request to acquire ownership of managed objects assigned to the pool %n%nManagement Group: %1 %nManagement Group ID: %2 %nPool Name: %3 %nPool ID: %4 %nPool Version: %5 %nNumber of Pool Members: %6 %nNumber of Observer Only Pool Members: %7 %nNumber of Instances: %8
EventID=15002
Severity=Error Message=The pool member cannot send a lease request to acquire ownership of managed objects assigned to the pool because half or fewer members of the pool acknowledged the most recent initialization check request The pool member will continue to send an initialization check request %n%nManagement Group: %1 %nManagement Group ID: %2 %nPool Name: %3 %nPool ID: %4 %nPool Version: %5 %nNumber of Pool Members: %6 %nNumber of Observer Only Pool Members: %7 %nNumber of Instances: %8
EventID=15003
Severity=Informational Message=The availability of one or more members of the pool has changed The ownership for all managed objects assigned to the pool will be redistributed between available pool members %n%nManagement Group: %1 %nManagement Group ID: %2 %nPool Name: %3 %nPool ID: %4 %nPool Version: %5 %nLocal Pool Member Available: %6 %nNumber of Pool Members: %7 %nNumber of Observer Only Pool Members: %8 %nNumber of Members Available: %9 %nNumber of Instances: %10 %nNumber of Instances Locally Activated: %11 %nNumber of Instances Locally Deactivated: %12
EventID=15004
Severity=Error Message=The pool member no longer owns any managed objects assigned to the pool because half or fewer members of the pool have acknowledged the most recent lease request The pool member has unloaded the workflows for managed objects it previously owned %n%nManagement Group: %1 %nManagement Group ID: %2 %nPool Name: %3 %nPool ID: %4 %nPool Version: %5 %nNumber of Pool Members: %6 %nNumber of Observer Only Pool Members: %7 %nNumber of Instances: %8
Source : Part 12 : SCOM 2012 R2 HealthService Event Reference / PoolManager
Cengiz KUSKAYA
Breaking changes in HealthVault SDK for .NET Standard (prerelease version)
We have a new build of the HealthVault SDK for .NET Standard (prerelease version). The version is 1.66.20706.2-preview.
There are breaking changes related to the Action Plan namespace/methods, and to returning NodaTime types instead of DateTime and DateTimeOffset.
Here are links to download the new SDK for .NET Standard:
- HealthVault .NET Standard SDK
- HealthVault .NET Standard Client SDK
- HealthVault .NET Standard Web SDK
The documentation will be updated to reflect the breaking changes.
Custom Memory Allocation in dxcompiler
This post describes the implementation of the custom memory allocator in dxcompiler. At some point, the information here will likely make it into the repo itself along with other design notes.
The DirectX Shader Compiler is mostly meant for offline usage, that is, compiling during the build process and not while the game or application is running (I’m going to use ‘game’ for the rest of this post because games tend to have some of the most demanding use cases, but everything is applicable to other kinds of programs).
That said, there are some scenarios today where compilation happens online, i.e. while the game is running. For example, you may be running a design version of the game that an artist can modify in real time, or you may be writing an extensible framework where you can’t anticipate all the shaders you might need to compile. Or you might be supporting some sort of scripting for game mods.
When compilation needs to occur while the game is running, it’s important that the compiler not interfere with the rest of the game execution. Games that carefully design their memory usage often need to control where allocations are made to avoid fragmentation or to fit with a given partitioning scheme.
To satisfy these requirements, we’ve modified dxcompiler to support custom memory allocators to allocate and free memory on behalf of an application. You don’t need to supply one, but the compiler is ready to use one if provided. Let’s dive in.
Typically, to create a compiler object, you make a call to DxcCreateInstance.
When you want to provide your own allocator, you can instead make a call to DxcCreateInstance2, and provide an implementation of IMalloc. IMalloc is a COM-style interface that allocates and frees memory. It’s simple to implement and like other COM interfaces, allows the lifetime of the object to be controlled.
The object you requested from DxcCreateInstance2 will be allocated from this allocator and hold a reference to it. Any methods you invoke on your compiler object will use this allocator as well, and any output parameters will be allocated and hold onto this allocator, too. When you release these objects, you’re free to clean up the allocator.
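To make that concrete, here’s a minimal sketch of providing an allocator. This toy IMalloc just forwards to the CRT heap; a game would route these calls into its own memory system. The helper function name is mine, not part of the API:

```cpp
#include <windows.h>
#include <objidl.h>  // IMalloc
#include <dxcapi.h>  // DxcCreateInstance2, CLSID_DxcCompiler
#include <atomic>
#include <cstdlib>
#include <malloc.h>  // _msize

// A toy IMalloc that forwards to the CRT heap. A real engine would route
// Alloc/Realloc/Free into its own memory partitions instead.
class HeapMalloc : public IMalloc {
  std::atomic<ULONG> m_ref{1};
public:
  // IUnknown
  HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void **ppv) override {
    if (riid == __uuidof(IUnknown) || riid == __uuidof(IMalloc)) {
      *ppv = static_cast<IMalloc *>(this);
      AddRef();
      return S_OK;
    }
    *ppv = nullptr;
    return E_NOINTERFACE;
  }
  ULONG STDMETHODCALLTYPE AddRef() override { return ++m_ref; }
  ULONG STDMETHODCALLTYPE Release() override {
    ULONG ref = --m_ref;
    if (ref == 0) delete this;
    return ref;
  }
  // IMalloc
  void *STDMETHODCALLTYPE Alloc(SIZE_T cb) override { return std::malloc(cb); }
  void *STDMETHODCALLTYPE Realloc(void *pv, SIZE_T cb) override { return std::realloc(pv, cb); }
  void STDMETHODCALLTYPE Free(void *pv) override { std::free(pv); }
  SIZE_T STDMETHODCALLTYPE GetSize(void *pv) override { return _msize(pv); }
  int STDMETHODCALLTYPE DidAlloc(void *) override { return -1; } // -1 == "don't know"
  void STDMETHODCALLTYPE HeapMinimize() override {}
};

// Hypothetical helper: create a compiler that allocates from our IMalloc.
HRESULT CreateCompilerWithAllocator(IDxcCompiler **ppCompiler) {
  IMalloc *pMalloc = new HeapMalloc(); // starts with one reference
  // The compiler object is allocated from pMalloc and keeps its own reference.
  HRESULT hr = DxcCreateInstance2(pMalloc, CLSID_DxcCompiler,
                                  __uuidof(IDxcCompiler),
                                  reinterpret_cast<void **>(ppCompiler));
  pMalloc->Release(); // on success the compiler holds the allocator alive
  return hr;
}
```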
This section includes some internal implementation notes that are useful for people working on the compiler itself.
There are a number of IMalloc implementations. One is provided by COM via the CoGetMalloc function. It’s very easy to build one on top of the heap functions as well.
Whenever possible, it’s preferable to be explicit about which IMalloc is being used. Typically these get passed around as arguments, but there are also two important cases: top-level objects (those created by DxcCreateInstance2) which need to store it for further activity, and objects that outlive top-level calls (typically blobs or result objects) that need to hold on to the allocator beyond the lifetime of the call that creates them.
The compiler is based on clang and LLVM, which don’t support custom allocation per se, but instead rely on malloc/free/realloc and operators new and delete. Rather than modifying every bit of code to pass allocators around, we store the active IMalloc in thread-local storage, and provide an implementation of the memory management functions that use it.
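Conceptually, the redirection looks something like the sketch below. The names are illustrative only, not the actual dxcompiler internals, which differ in detail, notably in how they guarantee that memory is always freed by the same allocator that produced it:

```cpp
#include <objidl.h> // IMalloc
#include <cstdlib>
#include <new>

// Illustrative sketch only: the active allocator for the current thread.
static thread_local IMalloc *g_threadMalloc = nullptr;

void *operator new(std::size_t size) {
  // Draw from the thread's IMalloc when one is installed, else the CRT heap.
  void *p = g_threadMalloc ? g_threadMalloc->Alloc(size) : std::malloc(size);
  if (p == nullptr) throw std::bad_alloc();
  return p;
}

void operator delete(void *p) noexcept {
  if (g_threadMalloc)
    g_threadMalloc->Free(p);
  else
    std::free(p);
}
```

A top-level call installs its IMalloc into the thread-local slot on entry and restores the previous value on exit, so everything clang and LLVM allocate on that thread during the call comes from the caller’s allocator.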
IMalloc can fail to allocate memory. clang and LLVM are designed more for console applications where the compiler owns in part or in whole the process under which it runs, and so it lets the operating system reclaim resources as needed. dxcompiler on the other hand is meant to be a library that can be loaded into any process for various scenarios, and so it should handle exceptions carefully, releasing allocated memory and references taken, and properly returning an error code.
Jon Kalb’s website at http://exceptionsafecode.com/ is an excellent resource for handling errors in C++ code.
In microcom.h you will find the following macros and helper functions. Note that ‘TM’ is used to refer to the threadlocal malloc mechanism.
- DXC_MICROCOM_TM_REF_FIELDS: replacement for DXC_MICROCOM_REF_FIELDS, includes a reference count and an owning m_pMalloc.
- DXC_MICROCOM_TM_ADDREF_RELEASE_IMPL: replacement for DXC_MICROCOM_ADDREF_RELEASE_IMPL, includes deallocating with the owning m_pMalloc and setting it up as the current threadlocal allocator when releasing the object.
- DXC_MICROCOM_TM_CTOR: defines an empty constructor and a helper static Alloc() that will take the owner IMalloc and set it up properly.
If you need arguments passed into the object, the inline CreateOnMalloc function can be used instead of the empty constructor; note that the allocator isn’t assigned to the object in that case.
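Put together, a top-level object declared with these macros looks roughly like this (the interface is hypothetical and the details are simplified; see the dxcompiler sources for the real pattern):

```cpp
// Sketch only: IDxcExampleThing is a hypothetical interface.
class DxcExampleThing : public IDxcExampleThing {
private:
  DXC_MICROCOM_TM_REF_FIELDS()          // reference count plus the owning m_pMalloc
public:
  DXC_MICROCOM_TM_ADDREF_RELEASE_IMPL() // Release() deallocates via m_pMalloc
  DXC_MICROCOM_TM_CTOR(DxcExampleThing) // empty ctor + static Alloc(IMalloc*) helper
  // ... QueryInterface and the interface methods go here ...
};

// Creation flows through the generated helper so the object records its owner:
// DxcExampleThing *pThing = DxcExampleThing::Alloc(pMalloc);
```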
Most of the declarations to support these can be found in the Global.h (yes, there’s a Global.h file – don’t ask). There are functions to do library initialization and cleanup, and hooking and cleaning up a threadlocal allocator.
Much of the per-call management is encapsulated in the DxcThreadMalloc RAII object, which can be declared on the stack to set the scope for a given allocator.
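For example, a public method on such an object might be structured like this sketch (simplified; error handling in the real code is more thorough):

```cpp
// Sketch: route all allocations made during this call to m_pMalloc.
HRESULT DxcExampleThing::DoWork() {
  DxcThreadMalloc TM(m_pMalloc); // RAII: installs m_pMalloc as the threadlocal
                                 // allocator, restores the previous one on exit
  try {
    // ... clang/LLVM work; operator new now draws from m_pMalloc ...
    return S_OK;
  } catch (const std::bad_alloc &) {
    return E_OUTOFMEMORY; // allocation failure surfaces as an HRESULT, not a crash
  }
}
```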
To actually opt into the threadlocal management, the DLL needs to both initialize and cleanup the mechanism, as well as make sure that new/delete and others are redirected properly. We don’t include this in any of the libraries we build, to make sure it’s a clear opt-in decision for targets.
Globals that get initialized on-demand (like many ManagedStatic values) are tricky, because they aren’t really associated with the currently-executing allocator. Instead, these should be initialized up-front on DllMain, and be alive through the lifetime of the library.
There are a few more interesting things we can cover, like how we use this as a fault-injection mechanism to make sure recovery is working properly, but there’s plenty to chew on here.
Enjoy!
Send Telemetry to Splunk Enterprise from Azure Resources via Azure Monitor, Part 2
For Part 1 of this blog series, which contains overview material, please click here.
There are several scenarios that must be addressed when thinking about getting telemetry from Azure resources to Splunk. What if Splunk is on premises? What if you’re using Splunk Cloud? You could be using a private network connection to Azure, or not. The Splunk add-on approach isn’t suited to all of these. In this blog and others in this series, I’ll introduce some new architectural elements and go into detail on each.
In this article, “the add-on” refers to this.
These specific scenarios will be dealt with:
- Cloud-based Splunk, using the add-on (this article)
- Cloud-based Splunk, using the HTTP Event Collector (Part 3)
- Premises-based Splunk, using private network (Part 4)
- Premises-based Splunk, via the internet (Part 5)
In this, Part 2, I’ll go into the already familiar Cloud-based Splunk using the add-on.
In this scenario, all resources reside in Azure. The Splunk VM is Splunk Enterprise and may be a cluster rather than a single box. Setting up the VM’s firewall and the network’s Network Security Group to allow outbound traffic on 443 is sufficient.
The simplest implementation has all monitored resources and monitoring resources in the same subscription and region, but this is seldom the case. Much more typical is many subscriptions with assets in multiple regions. While there is normally just one Splunk indexer (whether single VM or cluster), you may ingest all telemetry directly into that instance from multiple regional Event Hubs and REST APIs. Alternatively, you may use a Splunk Heavy Forwarder in the remote regions, each with the add-on loaded and configured to ingest that region’s data. Each of the forwarders sends the aggregated data along to the main indexer. In either case, the data never leaves Microsoft’s network.
The decision to use a Heavy Forwarder in the regions vs sending raw data to the indexer isn’t trivial. Here’s an article by a Splunk consultant on the topic.
Another consideration is the Azure AD tenant. For various reasons, it’s important that the resources being monitored and the monitoring resources (event hubs, service principal, etc.) all be in the same AAD tenant.
Send Telemetry to Splunk Enterprise from Azure Resources via Azure Monitor, Part 3
For Part 1 of this blog series, which contains overview material, please click here.
There are several scenarios that must be addressed when thinking about getting telemetry from Azure resources to Splunk. What if Splunk is on premises? What if you’re using Splunk Cloud? You could be using a private network connection to Azure, or not. The Splunk add-on approach isn’t suited to all of these. In this blog and others in this series, I’ll introduce some new architectural elements and go into detail on each.
In this article, “the add-on” refers to this.
These specific scenarios will be dealt with:
- Cloud-based Splunk, using the add-on (Part 2)
- Cloud-based Splunk, using the HTTP Event Collector (this article)
- Premises-based Splunk, using private network (Part 4)
- Premises-based Splunk, via the internet (Part 5)
In this, Part 3, I’ll go into Cloud-based Splunk using the HTTP Event Collector (HEC).
Splunk HEC is a “fast and efficient way to send data to Splunk Enterprise and Splunk Cloud.” Here is an Introduction to Splunk HTTP Event Collector. For the purposes of this article, think of HEC as an authenticated Splunk endpoint to which we can send events for indexing.
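To make that concrete, here is roughly what an HEC request looks like on the wire. The host, port, token, and sourcetype below are placeholders; the real values come from your HEC configuration:

```
POST /services/collector/event HTTP/1.1
Host: splunk.example.com:8088
Authorization: Splunk 01234567-89ab-cdef-0123-456789abcdef
Content-Type: application/json

{"sourcetype": "azure:monitor", "event": {"category": "Administrative", "operationName": "Microsoft.Compute/virtualMachines/write"}}
```

Splunk answers with a small JSON acknowledgement, so anything that can issue an authenticated HTTPS POST, such as an Azure Function, can deliver events.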
There are a couple of reasons that I know of that you might want to use HEC:
- You use Splunk Cloud. The add-on won’t install there.
- You don’t want to install the add-on into your Splunk Enterprise system for whatever reason.
To satisfy these requirements, we need another mechanism to get the data from Event Hubs and send it along to Splunk. An Azure Function is a great solution for that. The function that has been built for this purpose is here. Use it with the HEC output binding.
Azure Monitor metrics are available via event hub, the same as diagnostic and activity logs. If the volume of messages increases (or decreases), the Azure Function App will autoscale to meet demand.
While I said initially “Cloud-based Splunk”, that’s just the typical scenario. In reality, as long as the HEC endpoint is reachable by the Azure Function, it’s a perfectly good approach. For example, if you have an ExpressRoute connection from Azure to your premises and your Function App leverages an App Service Environment, this connectivity becomes an option. See this article for some of the details.
Send Telemetry to Splunk Enterprise from Azure Resources via Azure Monitor, Part 4
For Part 1 of this blog series, which contains overview material, please click here.
There are several scenarios that must be addressed when thinking about getting telemetry from Azure resources to Splunk. What if Splunk is on premises? What if you’re using Splunk Cloud? You could be using a private network connection to Azure, or not. The Splunk add-on approach isn’t suited to all of these. In this blog and others in this series, I’ll introduce some new architectural elements and go into detail on each.
In this article, “the add-on” refers to this.
These specific scenarios will be dealt with:
- Cloud-based Splunk, using the add-on (Part 2)
- Cloud-based Splunk, using the HTTP Event Collector (Part 3)
- Premises-based Splunk, using private network (this article)
- Premises-based Splunk, via the internet (Part 5)
In this, Part 4, I’ll go into Premises-based Splunk using a private network.
In this configuration, the Splunk Forwarder VM is just Splunk Enterprise with some features switched off. It receives and indexes the incoming events, then passes them along to the centralized indexer (VM or cluster). With ExpressRoute (or a site-to-site VPN) configured to include the VNET where the Splunk VM resides, there is a clear channel to the Splunk Enterprise box on premises. So this configuration is really exactly the same as the one in Part 2 of this series, with the addition of the Splunk instance on premises.
Send Telemetry to Splunk Enterprise from Azure Resources via Azure Monitor, Part 5
For Part 1 of this blog series, which contains overview material, please click here.
There are several scenarios that must be addressed when thinking about getting telemetry from Azure resources to Splunk. What if Splunk is on premises? What if you’re using Splunk Cloud? You could be using a private network connection to Azure, or not. The Splunk add-on approach isn’t suited to all of these. In this blog and others in this series, I’ll introduce some new architectural elements and go into detail on each.
In this article, “the add-on” refers to this.
These specific scenarios will be dealt with:
- Cloud-based Splunk, using the add-on (Part 2)
- Cloud-based Splunk, using the HTTP Event Collector (Part 3)
- Premises-based Splunk, using private network (Part 4)
- Premises-based Splunk, via the internet (this article)
In this, Part 5, I’ll go into Premises-based Splunk via the internet.
In this configuration, Splunk is on premises and behind a proxy server. The proxy server requires a static IP address to allow outbound communications, but Azure services such as Event Hub endpoints and ARM REST API endpoints are addressed by name, not by static IP. For this reason, the usual add-on techniques won’t work: the Splunk box on premises can’t get out to see the Azure-based APIs.
The solution is another Azure service: Azure Relay. With Azure Relay, the “listener” role establishes an “open phone line” in the cloud via a call over port 443. The “sender” role can see that open phone line and establish a channel for communications over it.
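As a rough sketch of the listener role (assuming the Microsoft.Azure.Relay hybrid connections package; the relay namespace, connection name, and SAS key are placeholders, and the real listener in this scenario is the Splunk add-on listed below):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Relay;

class RelayListenerSketch
{
    static async Task Main()
    {
        // SAS credentials for the relay namespace (placeholders).
        var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
            "RootManageSharedAccessKey", "<sas-key>");

        // Opening the listener makes a single outbound call on 443 and keeps
        // the "phone line" open in the cloud.
        var listener = new HybridConnectionListener(
            new Uri("sb://contoso-relay.servicebus.windows.net/splunk-hc"), tokenProvider);
        await listener.OpenAsync();

        // A sender can now dial that line; each accepted connection is a
        // bidirectional stream from which events can be read and indexed.
        HybridConnectionStream stream = await listener.AcceptConnectionAsync();
    }
}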
The components that you need to get going with this are:
- Azure Function for Splunk, located here.
- A Splunk add-on that knows how to work with Azure Relay. That’s here.
In this case, the Azure Function should be configured to run with the Relay output binding. The installation instructions in the README.md cover those details.
Create Bot for Microsoft Graph with DevOps 11: BotBuilder features – Global Message Handlers
Users may want to say “help” in the middle of a dialog. As a developer, you can implement a global message handler to handle these “keywords”. Read the article here for more detail.
Implement cancel operation
Let’s implement one of the most common global handlers: “cancel”.
1. Add a Scorables folder to the O365Bot project, and add CancelScorable.cs. In this class, you specify “cancel” as the keyword and take action whenever the user sends it.
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs.Internals;
using Microsoft.Bot.Builder.Internals.Fibers;
using Microsoft.Bot.Connector;
using Microsoft.Bot.Builder.Scorables.Internals;

namespace O365Bot.Scorables
{
#pragma warning disable 1998
    public class CancelScorable : ScorableBase<IActivity, string, double>
    {
        private readonly IDialogTask task;

        public CancelScorable(IDialogTask task)
        {
            SetField.NotNull(out this.task, nameof(task), task);
        }

        /// <summary>
        /// Compare user input with keyword.
        /// </summary>
        protected override async Task<string> PrepareAsync(IActivity activity, CancellationToken token)
        {
            var message = activity as IMessageActivity;
            if (message != null && !string.IsNullOrWhiteSpace(message.Text))
            {
                if (message.Text.Equals("cancel", StringComparison.InvariantCultureIgnoreCase))
                {
                    return message.Text;
                }
            }
            return null;
        }

        protected override bool HasScore(IActivity item, string state)
        {
            return state != null;
        }

        protected override double GetScore(IActivity item, string state)
        {
            return 1.0;
        }

        /// <summary>
        /// If keyword found, then reset the current dialog.
        /// </summary>
        protected override async Task PostAsync(IActivity item, string state, CancellationToken token)
        {
            this.task.Reset();
        }

        protected override Task DoneAsync(IActivity item, string state, CancellationToken token)
        {
            return Task.CompletedTask;
        }
    }
}
2. Add a GlobalMessageHandlers.cs file in the root and replace the code. This module registers the CancelScorable.
using Autofac;
using Microsoft.Bot.Builder.Dialogs.Internals;
using Microsoft.Bot.Builder.Scorables;
using Microsoft.Bot.Connector;
using O365Bot.Scorables;

namespace O365Bot
{
    public class GlobalMessageHandlers : Module
    {
        protected override void Load(ContainerBuilder builder)
        {
            base.Load(builder);
            builder
                .Register(c => new CancelScorable(c.Resolve<IDialogTask>()))
                .As<IScorable<IActivity, double>>()
                .InstancePerLifetimeScope();
        }
    }
}
3. Replace Global.asax.cs to register the handler on startup. Because the scorables are part of the Bot Builder’s Conversation container (Autofac), use the Update method to insert the registrations directly into it.
using Autofac;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Internals.Fibers;
using O365Bot.Services;
using System.Configuration;
using System.Web.Http;

namespace O365Bot
{
    public class WebApiApplication : System.Web.HttpApplication
    {
        public static IContainer Container;

        protected void Application_Start()
        {
            this.RegisterBotModules();
            GlobalConfiguration.Configure(WebApiConfig.Register);
            AuthBot.Models.AuthSettings.Mode = ConfigurationManager.AppSettings["ActiveDirectory.Mode"];
            AuthBot.Models.AuthSettings.EndpointUrl = ConfigurationManager.AppSettings["ActiveDirectory.EndpointUrl"];
            AuthBot.Models.AuthSettings.Tenant = ConfigurationManager.AppSettings["ActiveDirectory.Tenant"];
            AuthBot.Models.AuthSettings.RedirectUrl = ConfigurationManager.AppSettings["ActiveDirectory.RedirectUrl"];
            AuthBot.Models.AuthSettings.ClientId = ConfigurationManager.AppSettings["ActiveDirectory.ClientId"];
            AuthBot.Models.AuthSettings.ClientSecret = ConfigurationManager.AppSettings["ActiveDirectory.ClientSecret"];

            var builder = new ContainerBuilder();
            builder.RegisterType<GraphService>().As<IEventService>();
            Container = builder.Build();
        }

        private void RegisterBotModules()
        {
            var builder = new ContainerBuilder();
            builder.RegisterModule(new ReflectionSurrogateModule());
            builder.RegisterModule<GlobalMessageHandlers>();
            builder.Update(Conversation.Container);
        }
    }
}
Try with emulator
Run the application and try it with the emulator.
Implement Interruption
What if the user wants to see the events while creating one? You can use the same global message handler technique.
1. Add GetEventsScorable.cs to the Scorables folder and replace the code. This is very similar to the previous one, but it inserts a new dialog when the keyword is detected rather than canceling the current dialog.
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Dialogs.Internals;
using Microsoft.Bot.Builder.Internals.Fibers;
using Microsoft.Bot.Connector;
using Microsoft.Bot.Builder.Scorables.Internals;
using O365Bot.Dialogs;

namespace O365Bot.Scorables
{
#pragma warning disable 1998
    public class GetEventsScorable : ScorableBase<IActivity, string, double>
    {
        private readonly IDialogTask task;

        public GetEventsScorable(IDialogTask task)
        {
            SetField.NotNull(out this.task, nameof(task), task);
        }

        protected override async Task<string> PrepareAsync(IActivity activity, CancellationToken token)
        {
            var message = activity as IMessageActivity;
            if (message != null && !string.IsNullOrWhiteSpace(message.Text))
            {
                if (message.Text.Equals("get events", StringComparison.InvariantCultureIgnoreCase))
                {
                    return message.Text;
                }
            }
            return null;
        }

        protected override bool HasScore(IActivity item, string state)
        {
            return state != null;
        }

        protected override double GetScore(IActivity item, string state)
        {
            return 1.0;
        }

        /// <summary>
        /// If keyword found, then insert a dialog.
        /// </summary>
        protected override async Task PostAsync(IActivity item, string state, CancellationToken token)
        {
            var message = item as IMessageActivity;
            if (message != null)
            {
                var getEventsDialog = new GetEventsDialog();
                var interruption = getEventsDialog.Void<bool, IMessageActivity>();
                await this.task.Forward(interruption, null, message, CancellationToken.None);
                await this.task.PollAsync(token);
            }
        }

        protected override Task DoneAsync(IActivity item, string state, CancellationToken token)
        {
            return Task.CompletedTask;
        }
    }
}
2. Add the following registration in GlobalMessageHandlers.cs.
builder
    .Register(c => new GetEventsScorable(c.Resolve<IDialogTask>()))
    .As<IScorable<IActivity, double>>()
    .InstancePerLifetimeScope();
Try with emulator
Run the application and try it with the emulator.
Update tests
Now that we have implemented new features, let’s update the tests, too.
Unit Test
For the unit tests, add a global message handler registration method and call it from every test. Be careful about which container you register the handler with.
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Tests;
using Microsoft.Bot.Connector;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System;
using System.Threading.Tasks;
using Autofac;
using O365Bot.Dialogs;
using Microsoft.Bot.Builder.Dialogs.Internals;
using Microsoft.Bot.Builder.Base;
using System.Threading;
using System.Collections.Generic;
using Microsoft.QualityTools.Testing.Fakes;
using O365Bot.Services;
using Moq;
using Microsoft.Graph;
using System.Globalization;
using Microsoft.Bot.Builder.Internals.Fibers;

namespace O365Bot.UnitTests
{
    [TestClass]
    public class SampleDialogTest : DialogTestBase
    {
        [TestMethod]
        public async Task ShouldReturnEvents()
        {
            // Instantiate ShimsContext to use Fakes
            using (ShimsContext.Create())
            {
                // Return "dummyToken" when calling GetAccessToken method
                AuthBot.Fakes.ShimContextExtensions.GetAccessTokenIBotContextString = async (a, e) => { return "dummyToken"; };

                var mockEventService = new Mock<IEventService>();
                mockEventService.Setup(x => x.GetEvents()).ReturnsAsync(new List<Event>()
                {
                    new Event
                    {
                        Subject = "dummy event",
                        Start = new DateTimeTimeZone() { DateTime = "2017-05-31 12:00", TimeZone = "Standard Tokyo Time" },
                        End = new DateTimeTimeZone() { DateTime = "2017-05-31 13:00", TimeZone = "Standard Tokyo Time" }
                    }
                });
                var builder = new ContainerBuilder();
                builder.RegisterInstance(mockEventService.Object).As<IEventService>();
                WebApiApplication.Container = builder.Build();

                // Instantiate dialog to test
                IDialog<object> rootDialog = new RootDialog();

                // Create in-memory bot environment
                Func<IDialog<object>> MakeRoot = () => rootDialog;
                using (new FiberTestBase.ResolveMoqAssembly(rootDialog))
                using (var container = Build(Options.MockConnectorFactory | Options.ScopedQueue, rootDialog))
                {
                    // Register global message handler
                    RegisterBotModules(container);

                    // Create a message to send to bot
                    var toBot = DialogTestBase.MakeTestMessage();
                    toBot.From.Id = Guid.NewGuid().ToString();
                    toBot.Text = "get events";

                    // Send message and check the answer.
                    IMessageActivity toUser = await GetResponse(container, MakeRoot, toBot);

                    // Verify the result
                    Assert.IsTrue(toUser.Text.Equals("2017-05-31 12:00-2017-05-31 13:00: dummy event"));
                }
            }
        }

        [TestMethod]
        public async Task ShouldCreateAllDayEvent()
        {
            // Instantiate ShimsContext to use Fakes
            using (ShimsContext.Create())
            {
                // Return "dummyToken" when calling GetAccessToken method
                AuthBot.Fakes.ShimContextExtensions.GetAccessTokenIBotContextString = async (a, e) => { return "dummyToken"; };

                // Mock the service and register
                var mockEventService = new Mock<IEventService>();
                mockEventService.Setup(x => x.CreateEvent(It.IsAny<Event>())).Returns(Task.FromResult(true));
                var builder = new ContainerBuilder();
                builder.RegisterInstance(mockEventService.Object).As<IEventService>();
                WebApiApplication.Container = builder.Build();

                // Instantiate dialog to test
                IDialog<object> rootDialog = new RootDialog();

                // Create in-memory bot environment
                Func<IDialog<object>> MakeRoot = () => rootDialog;
                using (new FiberTestBase.ResolveMoqAssembly(rootDialog))
                using (var container = Build(Options.MockConnectorFactory | Options.ScopedQueue, rootDialog))
                {
                    // Register global message handler
                    RegisterBotModules(container);

                    // Create a message to send to bot
                    var toBot = DialogTestBase.MakeTestMessage();
                    // Specify locale as US English
                    toBot.Locale = "en-US";
                    toBot.From.Id = Guid.NewGuid().ToString();
                    toBot.Text = "add appointment";

                    // Send message and check the answer.
                    var toUser = await GetResponses(container, MakeRoot, toBot);

                    // Verify the result
                    Assert.IsTrue(toUser[0].Text.Equals("Creating an event."));
                    Assert.IsTrue(toUser[1].Text.Equals("What is the title?"));

                    toBot.Text = "Learn BotFramework";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser[0].Text.Equals("What is the detail?"));

                    toBot.Text = "Implement O365Bot";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser[0].Text.Equals("When do you start? Use dd/MM/yyyy HH:mm format."));

                    toBot.Text = "01/07/2017 13:00";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue((toUser[0].Attachments[0].Content as HeroCard).Text.Equals("Is this all day event?"));

                    toBot.Text = "Yes";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser[0].Text.Equals("The event is created."));
                }
            }
        }

        [TestMethod]
        public async Task ShouldCreateEvent()
        {
            // Instantiate ShimsContext to use Fakes
            using (ShimsContext.Create())
            {
                // Return "dummyToken" when calling GetAccessToken method
                AuthBot.Fakes.ShimContextExtensions.GetAccessTokenIBotContextString = async (a, e) => { return "dummyToken"; };

                // Mock the service and register
                var mockEventService = new Mock<IEventService>();
                mockEventService.Setup(x => x.CreateEvent(It.IsAny<Event>())).Returns(Task.FromResult(true));
                var builder = new ContainerBuilder();
                builder.RegisterInstance(mockEventService.Object).As<IEventService>();
                WebApiApplication.Container = builder.Build();

                // Instantiate dialog to test
                IDialog<object> rootDialog = new RootDialog();

                // Create in-memory bot environment
                Func<IDialog<object>> MakeRoot = () => rootDialog;
                using (new FiberTestBase.ResolveMoqAssembly(rootDialog))
                using (var container = Build(Options.MockConnectorFactory | Options.ScopedQueue, rootDialog))
                {
                    // Register global message handler
                    RegisterBotModules(container);

                    // Create a message to send to bot
                    var toBot = DialogTestBase.MakeTestMessage();
                    // Specify locale as US English
                    toBot.Locale = "en-US";
                    toBot.From.Id = Guid.NewGuid().ToString();
                    toBot.Text = "add appointment";

                    // Send message and check the answer.
                    var toUser = await GetResponses(container, MakeRoot, toBot);

                    // Verify the result
                    Assert.IsTrue(toUser[0].Text.Equals("Creating an event."));
                    Assert.IsTrue(toUser[1].Text.Equals("What is the title?"));

                    toBot.Text = "Learn BotFramework";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser[0].Text.Equals("What is the detail?"));

                    toBot.Text = "Implement O365Bot";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser[0].Text.Equals("When do you start? Use dd/MM/yyyy HH:mm format."));

                    toBot.Text = "01/07/2017 13:00";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue((toUser[0].Attachments[0].Content as HeroCard).Text.Equals("Is this all day event?"));

                    toBot.Text = "No";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser[0].Text.Equals("How many hours?"));

                    toBot.Text = "4";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser[0].Text.Equals("The event is created."));
                }
            }
        }

        [TestMethod]
        public async Task ShouldCancelCurrrentDialog()
        {
            // Instantiate ShimsContext to use Fakes
            using (ShimsContext.Create())
            {
                // Return "dummyToken" when calling GetAccessToken method
                AuthBot.Fakes.ShimContextExtensions.GetAccessTokenIBotContextString = async (a, e) => { return "dummyToken"; };

                // Mock the service and register
                var mockEventService = new Mock<IEventService>();
                mockEventService.Setup(x => x.CreateEvent(It.IsAny<Event>())).Returns(Task.FromResult(true));
                var builder = new ContainerBuilder();
                builder.RegisterInstance(mockEventService.Object).As<IEventService>();
                WebApiApplication.Container = builder.Build();

                // Instantiate dialog to test
                IDialog<object> rootDialog = new RootDialog();

                // Create in-memory bot environment
                Func<IDialog<object>> MakeRoot = () => rootDialog;
                using (new FiberTestBase.ResolveMoqAssembly(rootDialog))
                using (var container = Build(Options.MockConnectorFactory | Options.ScopedQueue, rootDialog))
                {
                    // Register global message handler
                    RegisterBotModules(container);

                    // Create a message to send to bot
                    var toBot = DialogTestBase.MakeTestMessage();
                    // Specify locale as US English
                    toBot.Locale = "en-US";
                    toBot.From.Id = Guid.NewGuid().ToString();
                    toBot.Text = "add appointment";

                    // Send message and check the answer.
                    var toUser = await GetResponses(container, MakeRoot, toBot);

                    // Verify the result
                    Assert.IsTrue(toUser[0].Text.Equals("Creating an event."));
                    Assert.IsTrue(toUser[1].Text.Equals("What is the title?"));

                    toBot.Text = "Learn BotFramework";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser[0].Text.Equals("What is the detail?"));

                    toBot.Text = "Cancel";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser.Count.Equals(0));

                    toBot.Text = "add appointment";
                    toUser = await GetResponses(container, MakeRoot, toBot);

                    // Verify the result
                    Assert.IsTrue(toUser[0].Text.Equals("Creating an event."));
                    Assert.IsTrue(toUser[1].Text.Equals("What is the title?"));
                }
            }
        }

        [TestMethod]
        public async Task ShouldInterruptCurrentDialog()
        {
            // Instantiate ShimsContext to use Fakes
            using (ShimsContext.Create())
            {
                // Return "dummyToken" when calling GetAccessToken method
                AuthBot.Fakes.ShimContextExtensions.GetAccessTokenIBotContextString = async (a, e) => { return "dummyToken"; };

                // Mock the service and register
                var mockEventService = new Mock<IEventService>();
                mockEventService.Setup(x => x.CreateEvent(It.IsAny<Event>())).Returns(Task.FromResult(true));
                mockEventService.Setup(x => x.GetEvents()).ReturnsAsync(new List<Event>()
                {
                    new Event
                    {
                        Subject = "dummy event",
                        Start = new DateTimeTimeZone() { DateTime = "2017-05-31 12:00", TimeZone = "Standard Tokyo Time" },
                        End = new DateTimeTimeZone() { DateTime = "2017-05-31 13:00", TimeZone = "Standard Tokyo Time" }
                    }
                });
                var builder = new ContainerBuilder();
                builder.RegisterInstance(mockEventService.Object).As<IEventService>();
                WebApiApplication.Container = builder.Build();

                // Instantiate dialog to test
                IDialog<object> rootDialog = new RootDialog();

                // Create in-memory bot environment
                Func<IDialog<object>> MakeRoot = () => rootDialog;
                using (new FiberTestBase.ResolveMoqAssembly(rootDialog))
                using (var container = Build(Options.MockConnectorFactory | Options.ScopedQueue, rootDialog))
                {
                    // Register global message handler
                    RegisterBotModules(container);

                    // Create a message to send to bot
                    var toBot = DialogTestBase.MakeTestMessage();
                    // Specify locale as US English
                    toBot.Locale = "en-US";
                    toBot.From.Id = Guid.NewGuid().ToString();
                    toBot.Text = "add appointment";

                    // Send message and check the answer.
                    var toUser = await GetResponses(container, MakeRoot, toBot);

                    // Verify the result
                    Assert.IsTrue(toUser[0].Text.Equals("Creating an event."));
                    Assert.IsTrue(toUser[1].Text.Equals("What is the title?"));

                    toBot.Text = "Learn BotFramework";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser[0].Text.Equals("What is the detail?"));

                    toBot.Text = "Get Events";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser[0].Text.Equals("2017-05-31 12:00-2017-05-31 13:00: dummy event"));

                    toBot.Text = "Glbal Message Handler for O365Bot";
                    toUser = await GetResponses(container, MakeRoot, toBot);
                    Assert.IsTrue(toUser[0].Text.Equals("When do you start? Use dd/MM/yyyy HH:mm format."));
                }
            }
        }

        /// <summary>
        /// Send a message to the bot and get the response.
        /// </summary>
        public async Task<IMessageActivity> GetResponse(IContainer container, Func<IDialog<object>> makeRoot, IMessageActivity toBot)
        {
            using (var scope = DialogModule.BeginLifetimeScope(container, toBot))
            {
                DialogModule_MakeRoot.Register(scope, makeRoot);

                // act: sending the message
                using (new LocalizedScope(toBot.Locale))
                {
                    var task = scope.Resolve<IPostToBot>();
                    await task.PostAsync(toBot, CancellationToken.None);
                }
                //await Conversation.SendAsync(toBot, makeRoot, CancellationToken.None);
                return scope.Resolve<Queue<IMessageActivity>>().Dequeue();
            }
        }

        /// <summary>
        /// Send a message to the bot and get all responses.
        /// </summary>
        public async Task<List<IMessageActivity>> GetResponses(IContainer container, Func<IDialog<object>> makeRoot, IMessageActivity toBot)
        {
            using (var scope = DialogModule.BeginLifetimeScope(container, toBot))
            {
                var results = new List<IMessageActivity>();
                DialogModule_MakeRoot.Register(scope, makeRoot);

                // act: sending the message
                using (new LocalizedScope(toBot.Locale))
                {
                    var task = scope.Resolve<IPostToBot>();
                    await task.PostAsync(toBot, CancellationToken.None);
                }
                //await Conversation.SendAsync(toBot, makeRoot, CancellationToken.None);
                var queue = scope.Resolve<Queue<IMessageActivity>>();
                while (queue.Count != 0)
                {
                    results.Add(queue.Dequeue());
                }
                return results;
            }
        }

        /// <summary>
        /// Register the global message handlers.
        /// </summary>
        private void RegisterBotModules(IContainer container)
        {
            var builder = new ContainerBuilder();
            builder.RegisterModule(new ReflectionSurrogateModule());
            builder.RegisterModule<GlobalMessageHandlers>();
            builder.Update(container);
        }
    }
}
Function Test
Simply add the following two tests.
[TestMethod]
public void Function_ShouldCancelCurrrentDialog()
{
    DirectLineHelper helper = new DirectLineHelper(TestContext);
    var toUser = helper.SentMessage("add appointment");

    // Verify the result
    Assert.IsTrue(toUser[0].Text.Equals("Creating an event."));
    Assert.IsTrue(toUser[1].Text.Equals("What is the title?"));

    toUser = helper.SentMessage("Learn BotFramework");
    Assert.IsTrue(toUser[0].Text.Equals("What is the detail?"));

    toUser = helper.SentMessage("Cancel");
    Assert.IsTrue(toUser.Count.Equals(0));

    toUser = helper.SentMessage("add appointment");
    Assert.IsTrue(toUser[0].Text.Equals("Creating an event."));
    Assert.IsTrue(toUser[1].Text.Equals("What is the title?"));
}

[TestMethod]
public void Function_ShouldInterruptCurrentDialog()
{
    DirectLineHelper helper = new DirectLineHelper(TestContext);
    var toUser = helper.SentMessage("add appointment");

    // Verify the result
    Assert.IsTrue(toUser[0].Text.Equals("Creating an event."));
    Assert.IsTrue(toUser[1].Text.Equals("What is the title?"));

    toUser = helper.SentMessage("Learn BotFramework");
    Assert.IsTrue(toUser[0].Text.Equals("What is the detail?"));

    toUser = helper.SentMessage("Get Events");
    Assert.IsTrue(true);

    toUser = helper.SentMessage("Implement O365Bot");
    Assert.IsTrue(toUser[0].Text.Equals("When do you start? Use dd/MM/yyyy HH:mm format."));
}
Check in the code to make sure all tests pass.
Summary
Global message handling is one of the keys to making an intelligent bot. In a real scenario, you may want to handle several keywords per scorable, as sketched below.
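For instance, a hypothetical variation of CancelScorable.PrepareAsync that matches several keywords (my sketch, not code from the sample repo) could look like this:

// Match any of several cancellation keywords instead of just "cancel".
protected override async Task<string> PrepareAsync(IActivity activity, CancellationToken token)
{
    var keywords = new[] { "cancel", "quit", "never mind" };
    var message = activity as IMessageActivity;
    if (message != null && !string.IsNullOrWhiteSpace(message.Text))
    {
        foreach (var keyword in keywords)
        {
            if (message.Text.Equals(keyword, StringComparison.InvariantCultureIgnoreCase))
            {
                return message.Text;
            }
        }
    }
    return null;
}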
GitHub: https://github.com/kenakamu/BotWithDevOps-Blog-sample/tree/master/article11
Ken
KYC series – basic analysis
The first question Contoso wanted answered was: are we making money? Is that a surprise to you? Hopefully not. But we haven’t answered it yet! Primarily because, though we have a strong handle on the sales side of things, the cost data cannot be trusted. There is more business process work to do here before we can answer that question.
We did some basic analysis to gain an understanding of their business. A few plots illustrate what we found.
- Chart 1 – Total amount and total margin move linearly as expected, though, as mentioned later, there are several customers of interest who fall into negative-margin territory. DIG DEEPER.
- Chart 2 – The number of orders doesn’t really have a relationship with total amount. We have customers with more than 400 orders processed, with total amounts spread over a wide range from 20k to 50k. UNCLEAR PICTURE.
- Chart 3 – The percentage of total orders that are discounted falls as one moves toward the territory of large total amounts. Conversely, for orders with an amount less than 10k, the percent of orders discounted could be anywhere from 0 to 100%. UNCLEAR.
- Chart 4 – This clearly shows that customers whose last orders came in quite a while ago do a lot less business with Contoso. Conversely, customers who do a lot of business with Contoso have placed orders quite recently. NO SURPRISES HERE.
- Chart 5 – This one also clearly shows that the discount percent on discounted orders has a lot of variation only when we look at marginal customers. When we look at customers who do significant business with Contoso, the mean discount is quite close to zero. HEALTHY.
- Chart 6 – This one also clearly shows that customers who do a lot less business with Contoso can have large gaps between their orders. Conversely, customers with little gap between orders are more likely to spend heavily with Contoso. NO SURPRISES HERE.
Keep reading.