
Extending Azure DevTest Lab environments in an Azure DevOps CI/CD Pipeline


The ability to use Azure DevTest Labs within a development inner loop has been documented, but this post will look at how DevTest Labs can be used in Azure DevOps build and release pipelines. The basic flow is a build pipeline that builds the application code, creates the base environment in DevTest Labs, updates the environment with custom information, deploys the application to the DevTest Labs environment, and then tests the code. Once the build has completed successfully, the release pipeline uses the build artifacts to deploy to staging or production.

One of the necessary premises is that all the information needed to recreate the “tested” ecosystem is available within the build artifacts, including the configuration of the Azure resources. As Azure resources incur a cost when used, companies tend to want to either control or track the use of these resources. In some situations, the Azure Resource Manager (ARM) templates used to create and configure the resources may be managed by another department, like IT, and stored in a different repository. This leads to an interesting situation where a build will be created and tested, and both the code and the configuration will need to be stored within the build artifacts to properly recreate the system in production.

By using DevTest Labs during the build/test phase, we can add the correct ARM templates and supporting files to the build sources so that, during the release phase, the exact configuration used for testing is deployed to production. The “Create Azure DevTest Labs Environment” task, with the proper configuration, will save the ARM templates within the build artifacts. For this example I’ll be using the code from the Tutorial: Build a .NET Core and SQL Database web app in Azure App Service to deploy and test the web app in Azure.

Code and Configuration in separate repositories

Overall Flow

Setup Azure Resource

There are a couple of items that will need to be created beforehand:

  • Two repositories: the first with the code from the tutorial plus an ARM template that adds two VMs (in an “ARMTemplates/VMInstance” folder); the second with the base ARM template (the existing configuration).
  • A Resource Group for deployment of the production code and configuration.
  • A DevTest Lab (TestLab) will need to be set up with a connection to the configuration repository for the build pipeline. I’ve included the necessary ARM template that creates the Web App and SQL Server to support the Tutorial: Build a .NET Core and SQL Database web app in Azure App Service. The ARM template will need to be checked into the configuration repository as azuredeploy.json, along with a metadata.json, to allow DevTest Labs to recognize and deploy the template.

The DevTest Lab is where the build pipeline will create the environment and deploy the code for testing.

Setup Build pipeline

In Azure DevOps, create a new build pipeline using the code from the Tutorial: Build a .NET Core and SQL Database web app in Azure App Service with the “ASP.NET Core” template, which will populate the necessary tasks to build, test, and publish the code.

Three additional tasks will need to be added to create the environment in DevTest Labs and deploy to that environment.

The first is the “Create Azure DevTest Labs Environment” task, placed before the “Test” task. In the create environment task, use the pulldowns to select the appropriate Azure RM Subscription, Lab Name, Repository Name, and Template Name (which shows the folder name where the environment is stored). I would highly recommend using the pulldowns; if you manually enter the information, you will need the fully qualified Azure resource ID for this task to work, since the task displays the “friendly” names instead of the resource IDs. The environment name is the name displayed within DevTest Labs and should be unique for each build, for example “TestEnv$(Build.BuildId)”. Either the Parameters File or the Parameters section can be used to pass information into the ARM template – see Additional information / Azure Resource Management Parameters for an example. Enable “Create output variables based on the environment template output?” to allow the output to be recognized by the build pipeline. “Create artifact based on the environment template output?” will also need to be enabled, along with the Reference name for the output variables. For this example, the text “BaseEnv” is used.

The second task updates the existing DevTest Labs environment. The create environment task passes “BaseEnv.environmentResourceId” out to the Azure DevOps pipeline as a variable, which is used in this task as the Environment Name. The ARM template for this example has two parameters, “adminUserName” and “adminPassword”, which will need to be passed in via the “Source ARM Template Parameters” field.
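As a sketch, the value of that field can reuse the build variables defined under Additional information below; the parameter names here are taken from the VM template later in this post (“adminUsername”/“adminPassword”) and must match whatever your template actually declares:

-adminUsername '$(AdministratorLogin)' -adminPassword '$(AdministratorPassword)'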

The third task is “Azure App Service Deploy”, added after the two tasks above. The App type will be “Web App” and the App Service name set to $(WebSite) to deploy the app to the app service within the DevTest Labs environment that was created.

Setup Release pipeline

In the release pipeline, the assumption is that the web app already exists, so the two tasks are the “Azure Deployment: Create Or Update Resource Group action” and “Deploy Azure App Service”. The Resource Group task will need: the Azure Subscription where the production resource group is located; the action “Create or update resource group”; the name and location of the Resource Group; the template location set to “linked artifact”; the template found in the published drop artifact; and the Override template parameters for the ARM template. The rest of the options can be left at their defaults. If the ARM template includes linked templates, then a custom resource group deployment will need to be implemented. The second task, “Deploy Azure App Service”, will need the Azure Subscription, the App type set to Web App, and the App Service name, which we’ve set up as $(WebSite); the rest can be left at the defaults.
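As a sketch, the Override template parameters field can reuse the same variable-driven string shown under Additional information / Azure Resource Management Parameters (in a release, $(Build.BuildId) resolves from the linked build artifact):

-hostingPlanName 'hostplan$(Build.BuildId)' -webSiteName '$(WebSite)' -sqlServerName '$(SqlSrvName)' -administratorLogin '$(AdministratorLogin)' -administratorLoginPassword '$(AdministratorPassword)' -databaseName '$(SqlDbName)'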

Test Run

Now that both pipelines are set up, manually queue a build and see it work. The next step is to set the appropriate trigger for the build and connect the build to the release pipeline.

Have a question? Check out the answers or ask a new one at the MSDN forum.

Roger Best, Senior Software Engineer

Roger is part of the Visual Studio and .NET engineering team focused on Visual Studio and Azure customers. He has been at Microsoft for over 20 years, focusing on developer technologies for the past decade or so. In his spare time, he watches too many movies and tries to survive triathlons.

Additional information

Demo Build / Release variables

AdministratorLogin: Administrator Name
AdministratorPassword: Administrator Password – secret type
SqlDbName: SQL database name – lower case only
SqlSrvName: SQL server name
WebSite: App Service name

Azure Resource Management Parameters

-hostingPlanName 'hostplan$(Build.BuildId)' -webSiteName '$(WebSite)' -sqlServerName '$(SqlSrvName)' -administratorLogin '$(AdministratorLogin)' -administratorLoginPassword '$(AdministratorPassword)' -databaseName '$(SqlDbName)'

DevTest Lab Environment metadata information (metadata.json)

{
"itemDisplayName": "NET Core application with SQL Db",
"description": "This template creates an Azure Web App with SQL DB."
}

Configuration repository - Azure ARM Template for Web App with SQL Server (azuredeploy.json)

{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"hostingPlanName": {
"type": "string",
"minLength": 1
},
"webSiteName": {
"type": "string",
"defaultValue": "testwebapp"
},
"sqlServerName": {
"type": "string",
"defaultValue": "testsqlsrv"
},
"skuName": {
"type": "string",
"defaultValue": "F1",
"allowedValues": [
"F1",
"D1",
"B1",
"B2",
"B3",
"S1",
"S2",
"S3",
"P1",
"P2",
"P3",
"P4"
],
"metadata": {
"description": "Describes plan's pricing tier and instance size. Check details at https://azure.microsoft.com/en-us/pricing/details/app-service/"
}
},
"skuCapacity": {
"type": "int",
"defaultValue": 1,
"minValue": 1,
"metadata": {
"description": "Describes plan's instance count"
}
},
"administratorLogin": {
"type": "string"
},
"administratorLoginPassword": {
"type": "securestring"
},
"databaseName": {
"type": "string"
},
"collation": {
"type": "string",
"defaultValue": "SQL_Latin1_General_CP1_CI_AS"
},
"edition": {
"type": "string",
"defaultValue": "Basic",
"allowedValues": [
"Basic",
"Standard",
"Premium"
]
},
"maxSizeBytes": {
"type": "string",
"defaultValue": "1073741824"
},
"requestedServiceObjectiveName": {
"type": "string",
"defaultValue": "Basic",
"allowedValues": [
"Basic",
"S0",
"S1",
"S2",
"P1",
"P2",
"P3"
],
"metadata": {
"description": "Describes the performance level for Edition"
}
},
"_artifactsLocation": {
"type": "string",
"defaultValue": ""
},
"_artifactsLocationSasToken": {
"type": "securestring",
"defaultValue": ""
}
},
"variables": {
},
"resources": [
{
"name": "[parameters('sqlserverName')]",
"type": "Microsoft.Sql/servers",
"location": "[resourceGroup().location]",
"tags": {
"displayName": "SqlServer"
},
"apiVersion": "2014-04-01-preview",
"properties": {
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]"
},
"resources": [
{
"name": "[parameters('databaseName')]",
"type": "databases",
"location": "[resourceGroup().location]",
"tags": {
"displayName": "Database"
},
"apiVersion": "2014-04-01-preview",
"dependsOn": [
"[resourceId('Microsoft.Sql/servers/', parameters('sqlserverName'))]"
],
"properties": {
"edition": "[parameters('edition')]",
"collation": "[parameters('collation')]",
"maxSizeBytes": "[parameters('maxSizeBytes')]",
"requestedServiceObjectiveName": "[parameters('requestedServiceObjectiveName')]"
}
},
{
"type": "firewallrules",
"apiVersion": "2014-04-01-preview",
"dependsOn": [
"[resourceId('Microsoft.Sql/servers/', parameters('sqlserverName'))]"
],
"location": "[resourceGroup().location]",
"name": "AllowAllWindowsAzureIps",
"properties": {
"endIpAddress": "0.0.0.0",
"startIpAddress": "0.0.0.0"
}
}
]
},
{
"apiVersion": "2015-08-01",
"name": "[parameters('hostingPlanName')]",
"type": "Microsoft.Web/serverfarms",
"location": "[resourceGroup().location]",
"tags": {
"displayName": "HostingPlan"
},
"sku": {
"name": "[parameters('skuName')]",
"capacity": "[parameters('skuCapacity')]"
},
"properties": {
"name": "[parameters('hostingPlanName')]"
}
},
{
"apiVersion": "2015-08-01",
"name": "[parameters('webSiteName')]",
"type": "Microsoft.Web/sites",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Web/serverFarms/', parameters('hostingPlanName'))]"
],
"tags": {
"[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "empty",
"displayName": "Website"
},
"properties": {
"name": "[parameters('webSiteName')]",
"serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"
},
"resources": [
{
"apiVersion": "2015-08-01",
"type": "config",
"name": "connectionstrings",
"dependsOn": [
"[resourceId('Microsoft.Web/Sites/', parameters('webSiteName'))]"
],
"properties": {
"DefaultConnection": {
"value": "[concat('Data Source=tcp:', reference(resourceId('Microsoft.Sql/servers/', parameters('sqlserverName'))).fullyQualifiedDomainName, ',1433;Initial Catalog=', parameters('databaseName'), ';User Id=', parameters('administratorLogin'), '@', parameters('sqlserverName'), ';Password=', parameters('administratorLoginPassword'), ';')]",
"type": "SQLServer"
}
}
}
]
},
{
"apiVersion": "2014-04-01",
"name": "[concat(parameters('hostingPlanName'), '-', resourceGroup().name)]",
"type": "Microsoft.Insights/autoscalesettings",
"location": "[resourceGroup().location]",
"tags": {
"[concat('hidden-link:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "Resource",
"displayName": "AutoScaleSettings"
},
"dependsOn": [
"[resourceId('Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]"
],
"properties": {
"profiles": [
{
"name": "Default",
"capacity": {
"minimum": 1,
"maximum": 2,
"default": 1
},
"rules": [
{
"metricTrigger": {
"metricName": "CpuPercentage",
"metricResourceUri": "[concat(resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
"timeGrain": "PT1M",
"statistic": "Average",
"timeWindow": "PT10M",
"timeAggregation": "Average",
"operator": "GreaterThan",
"threshold": 80.0
},
"scaleAction": {
"direction": "Increase",
"type": "ChangeCount",
"value": 1,
"cooldown": "PT10M"
}
},
{
"metricTrigger": {
"metricName": "CpuPercentage",
"metricResourceUri": "[concat(resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
"timeGrain": "PT1M",
"statistic": "Average",
"timeWindow": "PT1H",
"timeAggregation": "Average",
"operator": "LessThan",
"threshold": 60.0
},
"scaleAction": {
"direction": "Decrease",
"type": "ChangeCount",
"value": 1,
"cooldown": "PT1H"
}
}
]
}
],
"enabled": false,
"name": "[concat(parameters('hostingPlanName'), '-', resourceGroup().name)]",
"targetResourceUri": "[concat(resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]"
}
},
{
"apiVersion": "2014-04-01",
"name": "[concat('ServerErrors ', parameters('webSiteName'))]",
"type": "Microsoft.Insights/alertrules",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Web/sites/', parameters('webSiteName'))]"
],
"tags": {
"[concat('hidden-link:', resourceGroup().id, '/providers/Microsoft.Web/sites/', parameters('webSiteName'))]": "Resource",
"displayName": "ServerErrorsAlertRule"
},
"properties": {
"name": "[concat('ServerErrors ', parameters('webSiteName'))]",
"description": "[concat(parameters('webSiteName'), ' has some server errors, status code 5xx.')]",
"isEnabled": false,
"condition": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
"dataSource": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
"resourceUri": "[concat(resourceGroup().id, '/providers/Microsoft.Web/sites/', parameters('webSiteName'))]",
"metricName": "Http5xx"
},
"operator": "GreaterThan",
"threshold": 0.0,
"windowSize": "PT5M"
},
"action": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.RuleEmailAction",
"sendToServiceOwners": true,
"customEmails": []
}
}
},
{
"apiVersion": "2014-04-01",
"name": "[concat('ForbiddenRequests ', parameters('webSiteName'))]",
"type": "Microsoft.Insights/alertrules",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Web/sites/', parameters('webSiteName'))]"
],
"tags": {
"[concat('hidden-link:', resourceGroup().id, '/providers/Microsoft.Web/sites/', parameters('webSiteName'))]": "Resource",
"displayName": "ForbiddenRequestsAlertRule"
},
"properties": {
"name": "[concat('ForbiddenRequests ', parameters('webSiteName'))]",
"description": "[concat(parameters('webSiteName'), ' has some requests that are forbidden, status code 403.')]",
"isEnabled": false,
"condition": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
"dataSource": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
"resourceUri": "[concat(resourceGroup().id, '/providers/Microsoft.Web/sites/', parameters('webSiteName'))]",
"metricName": "Http403"
},
"operator": "GreaterThan",
"threshold": 0,
"windowSize": "PT5M"
},
"action": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.RuleEmailAction",
"sendToServiceOwners": true,
"customEmails": []
}
}
},
{
"apiVersion": "2014-04-01",
"name": "[concat('CPUHigh ', parameters('hostingPlanName'))]",
"type": "Microsoft.Insights/alertrules",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]"
],
"tags": {
"[concat('hidden-link:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "Resource",
"displayName": "CPUHighAlertRule"
},
"properties": {
"name": "[concat('CPUHigh ', parameters('hostingPlanName'))]",
"description": "[concat('The average CPU is high across all the instances of ', parameters('hostingPlanName'))]",
"isEnabled": false,
"condition": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
"dataSource": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
"resourceUri": "[concat(resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
"metricName": "CpuPercentage"
},
"operator": "GreaterThan",
"threshold": 90,
"windowSize": "PT15M"
},
"action": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.RuleEmailAction",
"sendToServiceOwners": true,
"customEmails": []
}
}
},
{
"apiVersion": "2014-04-01",
"name": "[concat('LongHttpQueue ', parameters('hostingPlanName'))]",
"type": "Microsoft.Insights/alertrules",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]"
],
"tags": {
"[concat('hidden-link:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "Resource",
"displayName": "AutoScaleSettings"
},
"properties": {
"name": "[concat('LongHttpQueue ', parameters('hostingPlanName'))]",
"description": "[concat('The HTTP queue for the instances of ', parameters('hostingPlanName'), ' has a large number of pending requests.')]",
"isEnabled": false,
"condition": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
"dataSource": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
"resourceUri": "[concat(resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
"metricName": "HttpQueueLength"
},
"operator": "GreaterThan",
"threshold": 100.0,
"windowSize": "PT5M"
},
"action": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.RuleEmailAction",
"sendToServiceOwners": true,
"customEmails": []
}
}
},
{
"apiVersion": "2014-04-01",
"name": "[parameters('webSiteName')]",
"type": "Microsoft.Insights/components",
"location": "East US",
"dependsOn": [
"[resourceId('Microsoft.Web/sites/', parameters('webSiteName'))]"
],
"tags": {
"[concat('hidden-link:', resourceGroup().id, '/providers/Microsoft.Web/sites/', parameters('webSiteName'))]": "Resource",
"displayName": "AppInsightsComponent"
},
"properties": {
"ApplicationId": "[parameters('webSiteName')]"
}
}
],
"outputs": {
"EnvironmentLocation":{
"type": "string",
"value": "[parameters('_artifactsLocation')]"
},
"EnvironmentSAS":{
"type": "string",
"value": "[parameters('_artifactsLocationSasToken')]"
},
"appServiceName":{
"type": "string",
"value": "[parameters('webSiteName')]"
},
"sqlSrvName":{
"type": "string",
"value": "[parameters('sqlserverName')]"
}
}
}

Code repository - Azure ARM template to add two VMs to environment

{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminUsername": {
"type": "string",
"minLength": 1,
"metadata": {
"description": "Admin username"
}
},
"adminPassword": {
"type": "securestring",
"metadata": {
"description": "Admin password"
}
},
"imageSKU": {
"type": "string",
"defaultValue": "2012-R2-Datacenter",
"allowedValues": [
"2008-R2-SP1",
"2012-Datacenter",
"2012-R2-Datacenter"
],
"metadata": {
"description": "The Windows version for the VM"
}
},
"vmSize": {
"type": "string",
"minLength": 1,
"defaultValue": "Standard_D2_v2",
"allowedValues": [
"Basic_A0",
"Basic_A1",
"Basic_A2",
"Basic_A3",
"Basic_A4",
"Standard_A0",
"Standard_A1",
"Standard_A2",
"Standard_A3",
"Standard_A4",
"Standard_A5",
"Standard_A6",
"Standard_A7",
"Standard_D1_v2",
"Standard_D2_v2",
"Standard_D3_v2",
"Standard_D4_v2",
"Standard_D11_v2",
"Standard_D12_v2",
"Standard_D13_v2",
"Standard_D14_v2"
],
"metadata": {
"description": "Size of the virtual machine, must be available in the virtual machine's location"
}
},
"numberOfInstances": {
"type": "int",
"minValue": 1,
"defaultValue": 2,
"metadata": {
"description": "Number of VM instances to be created behind internal load balancer control"
}
}
},
"variables": {
"vmNamePrefix": "[concat('A', uniqueString(resourceGroup().id))]",
"imagePublisher": "MicrosoftWindowsServer",
"imageOffer": "WindowsServer",
"availabilitySetName": "AvSet",
"vhdStorageType": "Standard_LRS",
"vhdStorageAccountName": "[concat('vhdstorage', uniqueString(resourceGroup().id))]",
"virtualNetworkName": "MyVNet",
"vnetId": "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
"virtualNetworkSubnetName": "BackendSubnet",
"subnetRef": "[concat(variables('vnetId'), '/subnets/', variables('virtualNetworkSubnetName'))]",
"networkInterfaceNamePrefix": "BackendVMNic",
"loadBalancerName": "BackendLB",
"lbId": "[resourceId('Microsoft.Network/loadBalancers', variables('loadBalancerName'))]",
"diagnosticsStorageAccountName": "[variables('vhdStorageAccountName')]"
},
"resources": [
{
"apiVersion": "2016-01-01",
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('vhdStorageAccountName')]",
"location": "[resourceGroup().location]",
"tags": {
"displayName": "StorageAccount"
},
"sku": {
"name": "[variables('vhdStorageType')]"
},
"kind": "Storage"
},
{
"apiVersion": "2015-06-15",
"type": "Microsoft.Compute/availabilitySets",
"name": "[variables('availabilitySetName')]",
"location": "[resourceGroup().location]",
"tags": {
"displayName": "AvailabilitySet"
},
"properties": {}
},
{
"apiVersion": "2016-03-30",
"type": "Microsoft.Network/virtualNetworks",
"name": "[variables('virtualNetworkName')]",
"location": "[resourceGroup().location]",
"tags": {
"displayName": "VirtualNetwork"
},
"properties": {
"addressSpace": {
"addressPrefixes": [
"10.0.0.0/16"
]
},
"subnets": [
{
"name": "[variables('virtualNetworkSubnetName')]",
"properties": {
"addressPrefix": "10.0.2.0/24"
}
}
]
}
},
{
"apiVersion": "2016-03-30",
"type": "Microsoft.Network/networkInterfaces",
"name": "[concat(variables('networkInterfaceNamePrefix'), copyindex())]",
"location": "[resourceGroup().location]",
"tags": {
"displayName": "NetworkInterface"
},
"copy": {
"name": "nicLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[resourceId('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]",
"[resourceId('Microsoft.Network/loadBalancers/', variables('loadBalancerName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"subnet": {
"id": "[variables('subnetRef')]"
},
"loadBalancerBackendAddressPools": [
{
"id": "[concat(variables('lbId'), '/backendAddressPools/BackendPool1')]"
}
]
}
}
]
}
},
{
"apiVersion": "2016-03-30",
"type": "Microsoft.Network/loadBalancers",
"name": "[variables('loadBalancerName')]",
"location": "[resourceGroup().location]",
"tags": {
"displayName": "LoadBalancer"
},
"dependsOn": [
"[variables('vnetId')]"
],
"properties": {
"frontendIPConfigurations": [
{
"properties": {
"subnet": {
"id": "[variables('subnetRef')]"
},
"privateIPAddress": "10.0.2.6",
"privateIPAllocationMethod": "Static"
},
"name": "LoadBalancerFrontend"
}
],
"backendAddressPools": [
{
"name": "BackendPool1"
}
],
"loadBalancingRules": [
{
"properties": {
"frontendIPConfiguration": {
"id": "[concat(resourceId('Microsoft.Network/loadBalancers', variables('loadBalancerName')), '/frontendIPConfigurations/LoadBalancerFrontend')]"
},
"backendAddressPool": {
"id": "[concat(resourceId('Microsoft.Network/loadBalancers', variables('loadBalancerName')), '/backendAddressPools/BackendPool1')]"
},
"probe": {
"id": "[concat(resourceId('Microsoft.Network/loadBalancers', variables('loadBalancerName')), '/probes/lbprobe')]"
},
"protocol": "Tcp",
"frontendPort": 80,
"backendPort": 80,
"idleTimeoutInMinutes": 15
},
"name": "lbrule"
}
],
"probes": [
{
"properties": {
"protocol": "Tcp",
"port": 80,
"intervalInSeconds": 15,
"numberOfProbes": 2
},
"name": "lbprobe"
}
]
}
},
{
"apiVersion": "2015-06-15",
"type": "Microsoft.Compute/virtualMachines",
"name": "[concat(variables('vmNamePrefix'), copyindex())]",
"copy": {
"name": "virtualMachineLoop",
"count": "[parameters('numberOfInstances')]"
},
"location": "[resourceGroup().location]",
"tags": {
"displayName": "VirtualMachines"
},
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts/', variables('vhdStorageAccountName'))]",
"nicLoop",
"[resourceId('Microsoft.Compute/availabilitySets/', variables('availabilitySetName'))]"
],
"properties": {
"availabilitySet": {
"id": "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySetName'))]"
},
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"osProfile": {
"computerName": "[concat(variables('vmNamePrefix'), copyIndex())]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]"
},
"storageProfile": {
"imageReference": {
"publisher": "[variables('imagePublisher')]",
"offer": "[variables('imageOffer')]",
"sku": "[parameters('imageSKU')]",
"version": "latest"
},
"osDisk": {
"name": "osdisk",
"vhd": {
"uri": "[concat(reference(resourceId('Microsoft.Storage/storageAccounts', variables('vhdStorageAccountName')), '2016-01-01').primaryEndpoints.blob, 'vhds/osdisk', copyindex(), '.vhd')]"
},
"caching": "ReadWrite",
"createOption": "FromImage"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('networkInterfaceNamePrefix'), copyindex()))]"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts', variables('vhdStorageAccountName')), '2016-01-01').primaryEndpoints.blob]"
}
}
},
"resources": [
{
"type": "extensions",
"name": "Microsoft.Insights.VMDiagnosticsSettings",
"apiVersion": "2016-03-30",
"location": "[resourceGroup().location]",
"tags": {
"displayName": "AzureDiagnostics"
},
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines/', concat(variables('vmNamePrefix'), copyindex()))]"
],
"properties": {
"publisher": "Microsoft.Azure.Diagnostics",
"type": "IaaSDiagnostics",
"typeHandlerVersion": "1.5",
"autoUpgradeMinorVersion": true,
"settings": {
"WadCfg": {
"DiagnosticMonitorConfiguration": {
"overallQuotaInMB": "4096",
"DiagnosticInfrastructureLogs": {
"scheduledTransferLogLevelFilter": "Error"
},
"WindowsEventLog": {
"scheduledTransferPeriod": "PT1M",
"DataSource": [
{
"name": "Application!*[System[(Level = 1) or (Level = 2)]]"
},
{
"name": "Security!*[System[(Level = 1 or Level = 2)]]"
},
{
"name": "System!*[System[(Level = 1 or Level = 2)]]"
}
]
},
"PerformanceCounters": {
"scheduledTransferPeriod": "PT1M",
"PerformanceCounterConfiguration": [
{
"counterSpecifier": "\Processor(_Total)\% Processor Time",
"sampleRate": "PT15S",
"unit": "Percent",
"annotation": [
{
"displayName": "CPU utilization",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\Processor(_Total)\% Privileged Time",
"sampleRate": "PT15S",
"unit": "Percent",
"annotation": [
{
"displayName": "CPU privileged time",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\Processor(_Total)\% User Time",
"sampleRate": "PT15S",
"unit": "Percent",
"annotation": [
{
"displayName": "CPU user time",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\Processor Information(_Total)\Processor Frequency",
"sampleRate": "PT15S",
"unit": "Count",
"annotation": [
{
"displayName": "CPU frequency",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\System\Processes",
"sampleRate": "PT15S",
"unit": "Count",
"annotation": [
{
"displayName": "Processes",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\Process(_Total)\Thread Count",
"sampleRate": "PT15S",
"unit": "Count",
"annotation": [
{
"displayName": "Threads",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\Process(_Total)\Handle Count",
"sampleRate": "PT15S",
"unit": "Count",
"annotation": [
{
"displayName": "Handles",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\Memory\% Committed Bytes In Use",
"sampleRate": "PT15S",
"unit": "Percent",
"annotation": [
{
"displayName": "Memory usage",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\Memory\Available Bytes",
"sampleRate": "PT15S",
"unit": "Bytes",
"annotation": [
{
"displayName": "Memory available",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\Memory\Committed Bytes",
"sampleRate": "PT15S",
"unit": "Bytes",
"annotation": [
{
"displayName": "Memory committed",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\Memory\Commit Limit",
"sampleRate": "PT15S",
"unit": "Bytes",
"annotation": [
{
"displayName": "Memory commit limit",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\PhysicalDisk(_Total)\% Disk Time",
"sampleRate": "PT15S",
"unit": "Percent",
"annotation": [
{
"displayName": "Disk active time",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\PhysicalDisk(_Total)\% Disk Read Time",
"sampleRate": "PT15S",
"unit": "Percent",
"annotation": [
{
"displayName": "Disk active read time",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\PhysicalDisk(_Total)\% Disk Write Time",
"sampleRate": "PT15S",
"unit": "Percent",
"annotation": [
{
"displayName": "Disk active write time",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\PhysicalDisk(_Total)\Disk Transfers/sec",
"sampleRate": "PT15S",
"unit": "CountPerSecond",
"annotation": [
{
"displayName": "Disk operations",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\PhysicalDisk(_Total)\Disk Reads/sec",
"sampleRate": "PT15S",
"unit": "CountPerSecond",
"annotation": [
{
"displayName": "Disk read operations",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\PhysicalDisk(_Total)\Disk Writes/sec",
"sampleRate": "PT15S",
"unit": "CountPerSecond",
"annotation": [
{
"displayName": "Disk write operations",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\PhysicalDisk(_Total)\Disk Bytes/sec",
"sampleRate": "PT15S",
"unit": "BytesPerSecond",
"annotation": [
{
"displayName": "Disk speed",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\PhysicalDisk(_Total)\Disk Read Bytes/sec",
"sampleRate": "PT15S",
"unit": "BytesPerSecond",
"annotation": [
{
"displayName": "Disk read speed",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\PhysicalDisk(_Total)\Disk Write Bytes/sec",
"sampleRate": "PT15S",
"unit": "BytesPerSecond",
"annotation": [
{
"displayName": "Disk write speed",
"locale": "en-us"
}
]
},
{
"counterSpecifier": "\LogicalDisk(_Total)\% Free Space",
"sampleRate": "PT15S",
"unit": "Percent",
"annotation": [
{
"displayName": "Disk free space (percentage)",
"locale": "en-us"
}
]
}
]
},
"Metrics": {
"resourceId": "[resourceId('Microsoft.Compute/virtualMachines', concat(variables('vmNamePrefix'), copyindex()))]",
"MetricAggregation": [
{
"scheduledTransferPeriod": "PT1H"
},
{
"scheduledTransferPeriod": "PT1M"
}
]
}
}
}
},
"protectedSettings": {
"storageAccountName": "[variables('diagnosticsStorageAccountName')]",
"storageAccountKey": "[listkeys(resourceId('Microsoft.Storage/storageAccounts', variables('diagnosticsStorageAccountName')), '2016-01-01').keys[0].value]"
}
}
}
]
}
]
}


Windows Server 2019 and Containers


After a bit of noise around the October release of Windows 10, the corresponding server release, Windows Server 2019, was also removed from the download sites Microsoft provides. Just last week it was finally re-released and made available on MSDN.

"Whatever. This is IT Pro stuff and I'm a dev…"

Well, it is true that a lot of the new features in Windows Server 2019 are aimed more at infrastructure than development. There are, however, still a few things that might interest you as a coder. For instance, the Docker and container support has been greatly improved.

The low-down on how to get Docker running can be found here:
https://blog.sixeyed.com/getting-started-with-docker-on-windows-server-2019/

Following the steps there you should be able to get to the test page as shown.

You know what else is neat about this integration? Docker integrates with Windows Firewall - this means that when you expose a port in Docker it is automatically opened for you. That I like!

You can find them in the firewall rules list with the "Compartment" prefix.
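For instance, from an elevated PowerShell prompt you could list them with something like the following (a sketch assuming the display names carry that prefix; adjust the filter to what you actually see on your box):

Get-NetFirewallRule -DisplayName "Compartment*" | Format-Table DisplayName, Enabled, Direction, Action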

So, for the sake of argument, let's say you need to build a classic .NET app instead of .NET Core.

Step through the wizard in Visual Studio:


Check Enable Docker Compose support.

The wizard creates a dockerfile for an earlier Windows Server build, so modify the dockerfile to use the newest server bits:

# Base the image on ASP.NET 4.7.2 running on Windows Server Core LTSC 2019
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.7.2-windowsservercore-ltsc2019
ARG source
# Serve the app from the IIS web root
WORKDIR /inetpub/wwwroot
# Copy the published output (defaults to obj/Docker/publish) into the image
COPY ${source:-obj/Docker/publish} .

Bring up the publishing wizard and go for the container registry option (new one or existing):

Pushing the image will take a little bit of time since it's not as space optimized as the smallest .NET Core images.

Since this is a private registry you will need to authenticate on the server. You could install the Azure CLI on the server and use az acr login to integrate with Azure AD, or you can use docker login, provided you enable the Admin user in the Azure Portal for the registry you're using.
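For example, assuming a registry named acrname (substitute your own), either of these would work:

az acr login --name acrname
docker login acrname.azurecr.io

The second command prompts for the Admin user credentials shown in the portal for that registry.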

Once you're logged in you can pull down the image and run it:

docker run --name dockerfull -it -p 4321:80 acrname.azurecr.io/fulldotnet:latest

That will throw you into interactive mode, which is fine for debugging but not usually how you run in production.
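If you would rather have it run in the background, a detached variant looks like this (a new container name and host port are assumed here to avoid clashing with the interactive run):

docker run --name dockerfull2 -d -p 4322:80 acrname.azurecr.io/fulldotnet:latest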

Without further ado it is browsable (the container logs visible in the background):

It should be noted that although the default enables Windows containers, it is also possible to run Linux containers:
https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/linux-containers

Granted, it is not quite as smooth as on Windows 10, but then again, if you're in the process of installing a Windows-based host now, it's not for the lack of viable Linux alternatives.

It may not be the announcement of the year for the general developer, but I find it to be a nice addition to the toolbox.

Using Visual Studio for Cross Platform C++ Development Targeting Windows and Linux


A great strength of C++ is the ability to target multiple platforms without sacrificing performance. If you are using the same codebase for multiple targets, then CMake is the most common solution for building your software. You can use Visual Studio for your C++ cross platform development when using CMake without needing to create or generate Visual Studio projects. Just open the folder with your sources in Visual Studio (File > Open Folder). Visual Studio will recognize that CMake is being used, then use the metadata CMake produces to configure IntelliSense and builds automatically. You can quickly be editing, building, and debugging your code locally on Windows, and then switch your configuration to do the same on Linux, all from within Visual Studio.

Teams working on these types of code bases may have developers with different primary operating systems, e.g. some people are on Linux (and may be using the Visual Studio Code editor) and some are on Windows (probably using the Visual Studio IDE). In an environment like this, the choice of tools may be up to the developers themselves. You can use Visual Studio in such an environment without disturbing your other team members or making changes to your source. If or when additional configuration is needed, it is saved in flat JSON files that can be kept locally or shared in source control with other developers using Visual Studio, without impacting developers who are not using it.

Visual Studio isn’t just for Windows C and C++ development anymore. If you follow the tutorial below on your own machine, you will clone an open source project from GitHub, open it in Visual Studio, and edit, build, and debug it on Windows with no changes to the project. Then you will add a connection to a Linux machine and edit, build, and debug it on that remote machine.

The next section shows you how to set up Visual Studio, followed by a section on how to configure your Linux target, and last the tutorial itself – have fun!

Setting up Visual Studio for Cross Platform C++ Development

First you need to have Visual Studio installed. If you have it installed already, confirm that you have the Desktop development with C++ and Linux development with C++ workloads installed. If you don’t have Visual Studio installed, use this link to install it with the minimal set of components for this tutorial selected. This minimal install is only about 3 GB; depending on your download speed, installation should not take more than 10 minutes.

Once that is done you are ready to go on Windows.

Configuring your Linux machine for cross platform C++ development

Visual Studio does not require a specific distribution of Linux; use any you would like. That can be a physical machine, a VM, the cloud, or even Windows Subsystem for Linux. The tools Visual Studio requires on the Linux machine are: C++ compilers, GDB, ssh, and zip. On Debian-based systems you can install these dependencies as follows:

sudo apt install -y openssh-server build-essential gdb zip

Visual Studio also, of course, requires CMake. However, it needs a recent version of CMake that has server mode enabled (at least 3.8). Our team produces a universal build of CMake that you can install on any Linux distro. We recommend using this build over what may be in your package manager, as it is built from our fork of the CMake source. Using that fork ensures that you have the latest features in case they haven’t made it back upstream. We document how to configure CMake here, and you can get the CMake binaries from here. Go to that page and download the version that matches your system architecture onto your Linux machine, then mark it as executable:

wget <URL of cmake-3.11.18033000-MSVC_2-Linux-x86_64.sh from the releases page>
chmod +x cmake-3.11.18033000-MSVC_2-Linux-x86_64.sh

You can see the options for running it with --help. We recommend using the --prefix option to install into /usr/local, as that is where Visual Studio looks for CMake by default:

sudo ./cmake-3.11.18033000-MSVC_2-Linux-x86_64.sh --skip-license --prefix=/usr/local

Tutorial: Using the Bullet Physics SDK GitHub repo in Visual Studio

Now that you have Visual Studio and a Linux machine ready to go, let’s walk through getting a real open source C++ project working in Visual Studio, targeting Windows and Linux. For this tutorial we are going to use the Bullet Physics SDK on GitHub. This is a library that provides collision detection and physics simulations for a variety of applications, and it includes sample executable programs, so we have something to interact with without having to write additional code. You will not have to modify any of the source or build scripts in the steps that follow.

Note that you can use any Linux distro for this tutorial; however, using Windows Subsystem for Linux is not a good idea here, since the executable we are going to run is graphical, which is not officially supported on WSL.

Step 1 – Clone and open the bullet3 repo

To start, clone the bullet3 repository from GitHub on the machine where you have Visual Studio installed. If you have git installed on your command line, it is as simple as running git clone wherever you would like to keep this repository.

git clone https://github.com/bulletphysics/bullet3.git

Now open the root project folder, bullet3, that was created by cloning the repo, in Visual Studio. Use the menu option File > Open > Folder, which will detect and use the CMakeLists.txt file, or use File > Open > CMake to select the desired CMakeLists.txt file directly.

You can also clone a git repo directly within Visual Studio which will automatically open the folder when you are done.

Visual Studio menu for File > Open > CMake

As soon as you open the folder your folder structure will be visible in the Solution Explorer.

Visual Studio Solution Explorer Folder View

This view shows you exactly what is on disk, not a logical or filtered view. By default, it does not show hidden files. To see them, select the show all files button in the Solution Explorer.

Visual Studio Solution Explorer Show All Files

Step 2 – Use targets view

When you open a folder that uses CMake, Visual Studio will automatically generate the CMake cache. This will take a few moments or longer, depending on the size of your project. The status of this process is placed in the Output window; it is complete when you see the message “Target info extraction done”.

Visual Studio Output window showing output from CMake

After this completes, IntelliSense is configured, the project can build, and you can launch the application and debug it. Visual Studio also now understands the build targets that the CMake project produces. This enables an alternate CMake Targets View that provides a logical view of the solution. Use the Solutions and Folders button in the Solution Explorer to switch to this view.

Solutions and Folders button in the Solution Explorer to show CMake targets view

Here is what that view looks like for the Bullet SDK.

Solution Explorer CMake targets view

This gives us a more intuitive view of what is in this source base. You can see some targets are libraries and others are executables. You can expand these nodes and see the source that comprises them independent of how it is represented on disk.

Step 3 - Set breakpoint, build and run

For this tutorial, use an executable so you get something that can just run and get into the debugger. Select AppBasicExampleGui and expand it. Open the file BasicExample.cpp. This is an example program that demonstrates the Bullet Physics library by rendering a bunch of cubes, arranged as a single block, that fall and smash apart on hitting a surface. Next, set a breakpoint that will be triggered when you click in the running application. That click is handled in a method within a helper class used by this application. To quickly get there, select CommonRigidBodyBase, which the struct BasicExample is derived from, around line 30. Right click and choose Go to Definition. Now you are in the header CommonRigidBodyBase.h. In the browser view above your source you should see that you are in CommonRigidBodyBase. To the right you can select members to examine; drop that selection down and select mouseButtonCallback, which will take you to the definition of that function in the header.

Visual Studio member list toolbar

Place a breakpoint on the first line within this function. This will trigger when you click a mouse button within the window of the application when launched under the Visual Studio debugger.

To launch the application, select the launch dropdown with the play icon that says “Select Startup Item” in the toolbar.

Visual Studio toolbar launch drop down for Select Startup Item

In the dropdown select AppBasicExampleGui.exe. Now press the launch button. This will cause the project to build the application and its necessary dependencies, then launch it with the Visual Studio debugger attached. It will take a few moments while this process starts; then the application will appear.

Visual Studio debugging a Windows application

Move your mouse into the application window and click a button, and the breakpoint will be triggered. This pauses execution of your program, brings Visual Studio back to the foreground, and leaves you at your breakpoint. You can inspect the application variables, objects, threads, and memory, and step through your code interactively using Visual Studio. You can click Continue to let the application resume and exit it normally, or cease execution within Visual Studio using the stop button.

What you have seen so far is that by simply cloning a C++ repo from GitHub, you can open the folder in Visual Studio and get an experience that provides IntelliSense, a file view, a logical view based on the build targets, source navigation, build, and debugging, with no special configuration or Visual Studio-specific project files. If you were to make changes to the source, you could get a diff view against the upstream project, make commits, and push them back without leaving Visual Studio. There’s more though: let’s use this project with Linux.

Step 4 – Add a Linux configuration

So far, you have been using the default x64-Debug configuration for the CMake project. Configurations are how Visual Studio understands what platform target it is going to use for CMake. The default configuration is not represented on disk. When you explicitly add a configuration, a file CMakeSettings.json is created; it has parameters Visual Studio uses to control how CMake is run, including how it is run on a remote target like Linux. To add a new configuration, select the Configuration drop down in the toolbar and select “Manage Configurations…”

The Add Configuration to CMakeSettings dialog will appear.

Add Configuration to CMakeSettings dialog

Here you see Visual Studio has preconfigured options for many of the platforms Visual Studio can be configured to use with CMake. If you want to continue to use the default x64-Debug configuration, that should be the first one you add. You want that for this tutorial so you can switch back and forth between Windows and Linux configurations. Select x64-Debug and click Select. This creates the CMakeSettings.json file with a configuration for “x64-Debug” and switches Visual Studio to use that configuration instead of the default. This happens very quickly, as the provided settings are the same as the default. You will see the configuration drop down no longer says “(default)” as part of the name.

Launch drop down configured for X64-Debug

You can use whatever names you like for your configurations by changing the name parameter in the CMakeSettings.json.
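For reference, a freshly generated CMakeSettings.json looks roughly like the sketch below; this reflects the VS2017-era defaults and is an assumption – the exact fields and paths Visual Studio generates for you may differ:

{
  "configurations": [
    {
      "name": "x64-Debug",
      "generator": "Ninja",
      "configurationType": "Debug",
      "buildRoot": "${env.USERPROFILE}\\CMakeBuilds\\${workspaceHash}\\build\\${name}",
      "cmakeCommandArgs": "",
      "buildCommandArgs": "-v",
      "ctestCommandArgs": ""
    }
  ]
}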

Now that you have a configuration specified, the Manage Configurations option in the configuration dropdown opens the CMakeSettings.json file so you can adjust values there. To add a Linux configuration, right click the CMakeSettings.json file in the Solution Explorer view and select Add Configuration.

CMakeSettings.json context menu for Add Configuration

This provides the same Add Configuration to CMakeSettings dialog you saw before. This time select Linux-Debug, then save the CMakeSettings.json file. Now in the configuration drop down select Linux-Debug.

Launch configuration drop down with X64-Debug and Linux-Debug options

Since this is the first time you are connecting to a Linux system the Connect to Remote System dialog will appear.

Visual Studio Connect to Remote System dialog

 

Provide the connection information to your Linux machine and click Connect. This adds that machine as your default remote machine, which is what the CMakeSettings.json for Linux-Debug is configured to use. It also pulls down the headers from your remote machine, so that you get IntelliSense specific to that machine. Then Visual Studio sends your files to the remote machine and generates the CMake cache there; when that is done, Visual Studio is configured for using the same source base with that remote Linux machine. These steps may take some time, depending on the speed of your network and the power of your remote machine. You will know this is complete when the message “Target info extraction done” appears in the CMake output window.

Step 5 - Set breakpoint, build and run on Linux

Since this is a desktop application you need to provide some additional configuration information to the debug configuration. In the CMake Targets view right click AppBasicExampleGui and choose Debug and Launch settings.

Debug and Launch Settings context menu

This opens the file launch.vs.json, which lives in the hidden .vs subfolder. This file is local to your development environment; you can move it into the root of your project if you wish to check it in and share it with your team. In this file a configuration has been added for AppBasicExampleGui. These default settings work in most cases, but since this is a desktop application, you need to provide some additional information to launch the program in a way that you can see it on the Linux machine. You need to know the value of the environment variable DISPLAY on your Linux machine; run this command to get it:

echo $DISPLAY

In my case this was :1. In the configuration for AppBasicExampleGui there is a parameter array “pipeArgs”, within which is a line “${debuggerCommand}”. This is the command that launches gdb on the remote machine. Visual Studio needs to export the display into this context before that command runs. Do so by modifying that line as follows, using the value of your display:

"export DISPLAY=:1;${debuggerCommand}",

Now, in order to launch and debug the application, choose the “Select Startup Item” dropdown in the toolbar and choose AppBasicExampleGui.

Select Startup Item drop down options

Now press that button or hit F5. This builds the application and its dependencies on the remote Linux machine, then launches it with the Visual Studio debugger attached. On your remote Linux machine you should see an application window appear with the same falling bunch of cubes arranged as a single block.

Linux application launched from Visual Studio

Move your mouse into the application window and click a button; the breakpoint will be triggered. This pauses execution of your program, brings Visual Studio back to the foreground, and leaves you at your breakpoint. You should also see a Linux Console Window appear in Visual Studio. This window provides output from the remote Linux machine, and it can also accept input for stdin. It can, of course, be docked wherever you prefer to see it, and its position will be used again in future sessions.

Visual Studio Linux Console Window

You can inspect the application variables, objects, threads, and memory, and step through your code interactively using Visual Studio, this time on a remote Linux machine instead of your local Windows environment. You can click Continue to let the application resume and exit it normally, or cease execution within Visual Studio using the stop button. All the same things you’d expect if this were running locally.

Look at the Call Stack window and you will see that this time the calls go through x11OpenGLWindow, since Visual Studio has launched the application on Linux.

Call Stack window showing Linux call stack

What you learned and where to learn more

So now you have seen the same code base, cloned directly from GitHub, built, run, and debugged on Windows with no modifications, and then, with some minor configuration settings, built, run, and debugged on a remote Linux machine as well. If you are doing cross platform development, we hope you find a lot to love here. Visual Studio C and C++ development is not just for Windows anymore.

Further articles

Documentation links

This section will be updated in the future with links to new articles on Cross Platform Development with Visual Studio.

Give us feedback

Use this link to download Visual Studio 2017 with everything you need to try the steps in this tutorial, then try it with your projects.

Your feedback is very important to us. We look forward to hearing from you and seeing the things you make.

As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with MSVC or have a suggestion for Visual Studio please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC).

What’s new in Azure DevOps Sprint 143 Update


Sprint 143 Update of Azure DevOps is rolling out to all organizations. In this update, draft pull requests are now available in Azure Repos, which allow you to easily create work-in-progress pull requests that may not be ready to include everyone. We are also releasing new features in Azure Artifacts, including the ability to exclude files in artifact uploads and to get provenance information on packages. Watch the following video to learn more about these features.

Check out the release notes for more details plus more features released this sprint.

Azure Update in November 2018


  • Upcoming: Azure Sphere 18.11 release
  • Azure IoT Edge 1.0.4 release
  • OMS Agent for Linux November release now available
  • Azure Container Instances now available in Canada Central
  • Azure portal November 2018 update
  • Azure API Management update November 8
  • Azure Policy now audits applications installed inside virtual machines
  • General availability: Azure Availability Zones in Southeast Asia
  • General availability: Azure Kubernetes Service in South India
  • Azure CLI support for Azure Database for MySQL read replicas
  • Azure Monitor log support for Azure SQL Data Warehouse
  • Additional workload insights for advanced performance tuning with SQL Data Warehouse
  • Azure Network Watcher enabled by default for subscriptions that contain virtual networks
  • Private preview: Azure Kubernetes Service in Azure China
  • Draft pull requests and new work item text editor: Sprint 143 Update
  • Azure Event Hubs for Apache Kafka is now available
  • New H-series Azure VMs for HPC workloads are in preview
  • Azure Functions 2.0 available in IoT Edge
  • In development: Azure Service Fabric runtime version 6.4 & SDK updates
  • In development: Azure Service Fabric Mesh Fall 2018 refresh
  • Azure Bot Service enforcing transport layer security (TLS) 1.2
  • Azure Cognitive Services new enhancements
  • Azure Cognitive Services Containers are in preview
  • Announcing Bot Framework SDK and Tools 4.1 Release
  • Virtual Assistant solution accelerator is in preview
  • Azure Event Grid, Event Domains are in preview
  • Azure JavaScript libraries preview release
  • Azure HDInsight is now available in China North 2
  • Enhancements to NSG flow logs for Azure Network Watcher
  • Update 18.11 for Azure Sphere in public preview
  • Blob storage Germany resource GUID and name changes
  • Security Center Workload Protection for App Service name changes
  • Azure SignalR Service GUID changes
  • Service Bus name changes
  • Azure Site Recovery supports firewall-enabled storage accounts
  • Japanese Era Update
  • General availability: Zone-redundant SQL databases and elastic pools in additional regions
  • Azure Security Center update - November

Better template support and error detection in C++ Modules with MSVC 2017 version 15.9


Overview

It has been a long time since we last talked about C++ Modules. We feel it is time to revisit what has been happening under the hood of MSVC for modules.

The Visual C++ team has been dedicated to pushing conformance to the standard, with a focus on making the overall compiler implementation more robust and correct through the rejuvenation effort. This rejuvenation effort has given us the ability to substantially improve our modules implementation. We’ve mostly done this work transparently over the past few months, until now. We are proud to say the work has reached a point where talking about it will hopefully give developers even more reasons to use C++ Modules with MSVC!

What is new?

  • Two-phase name lookup is now a requirement to use modules. We now store templates in a form that is far more structured than the previous token-stream model, preserving information such as bound names at the point of declaration.
  • Better constexpr support allows users to write far more complex code in an exported module.
  • Improved diagnostics provide users with more safety and correctness when using modules with MSVC.

Two-Phase Name Lookup Support

This point is best illustrated through example. Let’s take the VS2017 15.7 compiler and build some modules. Given a module m.ixx:

#include <type_traits>
export module m;

export
template <typename T>
struct ptr_holder {
  static_assert(std::is_same_v<T, std::remove_pointer_t<T>>);
};

Now, let’s use the module in a very simple program and try to make the static_assert fail, main.cpp:

import m;

int main() {
  ptr_holder<char*> p;
}

Using the command line, we can build this module and program like so:

cl /experimental:module /std:c++17 /c m.ixx
cl /experimental:module /std:c++17 /module:reference m.ifc main.cpp m.obj

However, you will quickly find this results in a failure:

m.ixx(7): error C2039: 'is_same_v': is not a member of 'std'
predefined C++ types (compiler internal)(238): note: see declaration of 'std'
main.cpp(4): note: see reference to class template instantiation 'ptr_holder' being compiled
m.ixx(7): error C2065: 'is_same_v': undeclared identifier
m.ixx(7): error C2275: 'T': illegal use of this type as an expression
m.ixx(7): error C2039: 'remove_pointer_t': is not a member of 'std'
predefined C++ types (compiler internal)(238): note: see declaration of 'std'
m.ixx(7): error C2061: syntax error: identifier 'remove_pointer_t'
m.ixx(7): error C2238: unexpected token(s) preceding ';'
main.cpp(35): fatal error C1903: unable to recover from previous error(s); stopping compilation
INTERNAL COMPILER ERROR in 'cl.exe'
    Please choose the Technical Support command on the Visual C++
    Help menu, or open the Technical Support help file for more information

It failed to compile, just not in the way we expected: instead of the static_assert firing, the compiler crashed with an internal compiler error. The good news is that 15.9 is here, and it comes with some much-needed improvements! Let's build this module and program with the 15.9 compiler:

main.cpp(4): error C2607: static assertion failed
main.cpp(4): note: see reference to class template instantiation 'ptr_holder' being compiled

This! This is what we are looking for! So what gives here? Why is 15.9 able to handle this scenario while the 15.7 compiler fails in the way that it does? It all comes down to how modules work with two-phase name lookup.

As mentioned in our two-phase name lookup blog, templates were historically stored in the compiler as streams of tokens, which do not preserve information about what identifiers were seen during the parsing of the template declaration.

The 15.7 modules implementation had no awareness of two-phase name lookup, so template code compiled with it suffered from many of the same problems described in that blog, along with the by-design lookup hiding of non-exported module code (in our case, is_same_v was a non-exported declaration).

Now that MSVC supports two-phase name lookup, our modules implementation can handle much more complex, and more correct, code!
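
To make this concrete, here is a minimal sketch of our own (not taken from the release notes) of the pattern that two-phase name lookup enables: an exported template that binds a non-exported helper at the point of its declaration, so importers can instantiate the template even though they cannot name the helper themselves:

// m.ixx (sketch)
export module m;

int helper() { return 42; }                // not exported: importers cannot name it

export
template <typename T>
int call_helper() { return helper(); }     // helper() is bound here, at declaration

// main.cpp (sketch)
import m;

int main() {
  return call_helper<int>();               // OK: instantiation still finds helper()
}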

Better Constexpr Support

Constexpr is now central to C++, and supporting it in combination with new language features can dramatically affect those features' usability. As such, we have made some significant improvements to the constexpr handling in our modules implementation. Once again, let us start with a concrete example, given a module m.ixx:

export module m;

struct internal { int value = 42; };

export {

struct S {
  static constexpr internal value = { };
  union U {
    int a;
    double b;
    constexpr U(int a) : a{ a } { }
    constexpr U(double b) : b{ b } { }
  };
  U u = { 1. };
  U u2 = { 1 };
};
constexpr S s;
constexpr S a[2] = { {}, {.2, 2} };

}

Using the module in a program, main.cpp:

import m;

int main() {
  static_assert(S::value.value == 42);
  static_assert(s.u.b == 1. && s.u2.a == 1);
  static_assert(a[1].u.b == .2 && a[1].u2.a == 2);
  return s.u2.a + a[1].u2.a;
}

You will identify another problem:

main.cpp(5): error C3865: '__thiscall': can only be used on native member functions
main.cpp(5): error C2028: struct/union member must be inside a struct/union
main.cpp(5): fatal error C1903: unable to recover from previous error(s); stopping compilation
Internal Compiler Error in cl.exe.  You will be prompted to send an error report to Microsoft later.
INTERNAL COMPILER ERROR in 'cl.exe'
    Please choose the Technical Support command on the Visual C++
    Help menu, or open the Technical Support help file for more information

Well, those are some cryptic errors... One might discover that once you rewrite "constexpr S s;" as "constexpr S s = { };" the errors go away, only to face a new runtime failure: the return value from main is 0 rather than the expected 3. In general, prior to 15.9, constexpr objects and arrays were a source of numerous bugs in modules. Failures such as the ones mentioned above are completely gone, due in part to our recent rework of the constexpr implementation.

Improved Diagnostics

The MSVC C++ Modules implementation is not just about exporting and importing code correctly; it is also about providing a safe and user-friendly experience around it.

One such feature is the ability for the compiler to diagnose if the module interface unit has been tampered with. Let us see a simple example:

C:> cl /experimental:module /std:c++latest /c m.ixx
m.ixx
C:> echo 1 >> "m.ifc"
C:> cl /experimental:module /std:c++latest main.cpp
main.cpp
main.cpp(1): error C7536: ifc failed integrity checks. Expected SHA2: '66d5c8154df0c71d4cab7665bab4a125c7ce5cb9a401a4d8b461b706ddd771c6'

Here the compiler refuses to use an interface file that has failed a basic integrity check. This protects users from having MSVC process tampered or malicious interface files.

Another usability feature we have added is the capability to warn whenever the compiler flags used to build a module differ from the flags used to import it. Omitting command-line switches on the import side can produce an erroneous scenario:

C:> cl /experimental:module /std:c++17 /MDd /c m.ixx
m.ixx
C:> cl /experimental:module /std:c++14 /MD main.cpp
main.cpp
main.cpp(1): warning C5050: Possible incompatible environment while importing module 'm': _DEBUG is defined in module command line and not in current command line
main.cpp(1): warning C5050: Possible incompatible environment while importing module 'm': mismatched C++ versions. Current "201402" module version "201703"

The compiler is telling us that the macro “_DEBUG” was defined when the module was built; that macro is implicitly defined when using the /MDd switch. The presence of this macro can affect how libraries like the STL behave; it can even affect their binary interface (ABI). Additionally, the standard C++ versions between these two components don’t agree, so a warning is produced to inform the user.

What now (call to action)?

Download Visual Studio 2017 Version 15.9 today and try out C++ Modules with your projects. Export your template metaprogramming libraries and truly hide your implementation details behind non-exported regions! No longer fear exporting constexpr objects from your interface units! Finally, enjoy using modules with better diagnostics support to create a much more user-friendly experience!

What's next...

  • Modules standardization is in progress: Currently, MSVC supports all of the features of the current TS. As the C++ Modules TS evolves, we will continue to update our implementation and feature set to reflect the new proposal. Breaking changes will be documented as per usual via the release notes.
  • Throughput is improving: One consequence of mixing old infrastructure with new is that newer code often depends on older behavior, and MSVC is no exception. We are constantly updating the compiler to be faster and to rely less on outdated routines, and as this happens our modules implementation will speed up as a nice side effect. Stay tuned for a future blog on this.

As always, we welcome your feedback. Feel free to send any comments through e-mail at visualcpp@microsoft.com, through Twitter @visualc, or Facebook at Microsoft Visual Cpp.

If you encounter other problems with MSVC in VS 2017 please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions, let us know through DevComm. Thank you!

INTUNE – Intune and Autopilot Part 4 – Enroll your first device


In the last blog posts, we guided you through all the necessary steps to get your Azure trial tenant up and running and how to prepare your Intune environment further. Now it is time to enroll our first device with Autopilot.

We would recommend using a virtual machine for this first step.

When you follow these steps, the end result will be an Out-of-Box experience (OOBE) customized with our company name and logo, as shown in the image below:

The requirements to enroll a device with Autopilot:

  • Windows 10 Build 1703 Professional, Enterprise or Education
  • Internet Access

 

If your Virtual Machine is located behind a Firewall or Proxy Server, ensure that the following URLs are reachable and ports are open so the device used for Autopilot is able to connect to the required cloud services:

 

URLs:

  • ctldl.windowsupdate.com
  • download.windowsupdate.com

Ports:

  • HTTPS 443
  • HTTP 80

 

If you run into issues while following this guide, please retry all the steps with a virtual machine directly connected to the internet and ensure all the URLs and ports listed in the following article are reachable: https://docs.microsoft.com/en-us/intune/network-bandwidth-use

 

Now, with all preparations taken care of, we log in to our virtual machine and use a PowerShell script provided by Michael Niehaus to harvest the hardware details. The script can be found at: https://www.powershellgallery.com/packages/Get-WindowsAutoPilotInfo

The PowerShell script gathers all the required information and puts it into a CSV file that needs to be uploaded. This script will only run on Windows 10 1703 and higher - it cannot run on any earlier version of Windows or in WindowsPE.

Execute the script in an elevated PowerShell prompt:

Install-Script -Name Get-WindowsAutoPilotInfo

You'll be prompted 3 times. Please answer with Y to continue.

It is also possible to download the script. After we have installed the script, we need to modify the execution policy to be able to run it successfully:

Set-ExecutionPolicy bypass

And finally run it to collect the data:
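
For example (the output path below is only an illustration):

# Collect the hardware hash and write it to a CSV file for upload
Get-WindowsAutoPilotInfo.ps1 -OutputFile C:\HWID\AutoPilotHWID.csv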

With the CSV file prepared, we can now log in to our Azure tenant and upload the file to Intune. Log in to your Azure tenant, navigate to the Windows enrollment page within Intune, and click the "Import" button:

Select the file and upload it by pressing "Import" at the bottom of this page:

The file will now be uploaded. This could take up to 15 minutes.

Once the upload and sync process has finished successfully, we need to assign an Autopilot profile to the newly added device.

As this is our first enrollment we need to create a new Autopilot profile. Please navigate to the deployment profiles within Intune and click the "Create profile" button.

Now we need to provide a name, select "User-Driven" as our deployment method, and select "Azure AD joined" as the join type; those are the required fields. In the OOBE configuration we can configure the behavior we want. Just keep in mind: the more you show, the more user interaction there is:

Once you have configured your Autopilot profile click on "create" and head over to Azure Active Directory to create a new Security Group and make the device member of this group:

Once we have finished setting up the group and made the device a member of it, we can proceed to assign the Autopilot profile to this security group. Head over to the Autopilot Deployment Profiles blade in Intune, select the Autopilot profile we just created, and on the details tab of this profile click on Assignments to add the newly created security group:

Optional: If preferred you can also assign a specific user to that device:

Now we need to wait for the sync in the background to complete. Once that's done, we're ready to deploy.

You can check the Devices tab to see if the profile is showing as Assigned for the device. This might take a bit of time.

Once that's completed we can start our Virtual Machine and enroll it automatically into Azure AD and Intune with Autopilot.

To reset the machine to the OOBE phase, I use Sysprep and take a snapshot afterwards. When the VM starts again, you have to select a region and keyboard layout. If you have done all the preparations correctly, you should end up with the custom login of your Azure tenant. In our example I assigned the device to a specific user, so it prompts me for the password of the user I assigned in the optional step:

After the setup has completed, you can verify the successful Azure AD join of your device, as well as the Intune enrollment, within the Azure dashboard.

Congratulations - you just enrolled your first device with Autopilot! In following blog entries we will take a more detailed look at some Autopilot scenarios, like setting up kiosk & multi-kiosk, firstline workers, shared devices, using dynamic groups, and so on. If you have any questions, just leave them in the comments.

 

Matthias Herfurth, Ingmar Oosterhoff and Johannes Freundorfer

 

.NET Framework November 2018 Preview of Quality Rollup


Today, we are releasing the November 2018 Preview of Quality Rollup.

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Addressed an issue with KB4096417 where we switched to CLR-implemented write-watch for pages. The GC will no longer call VirtualAlloc when running under workstation GC mode. [685611]

SQL

  • Provides an AppContext flag for making the default value of TransparentNetworkIPResolution false in SqlClient connection strings. [690465]

WCF

  • Addressed a System.AccessViolationException caused by accessing a disposed X509Certificate2 instance in a rare race condition; the fix defers the service certificate cleanup to the GC. The impacted scenario is WCF NetTcp bindings using reliable sessions with certificate authentication. [657003]

Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments or use in web searches.

Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework updates are part of the Windows 10 Monthly Rollup. The following table is for Windows 10 and Windows Server 2016+ versions.

Product Version                                   Preview of Quality Rollup KB
Windows 10 1803 (April 2018 Update)               Catalog 4467682
  .NET Framework 3.5, 4.7.2                       4467682
Windows 10 1709 (Fall Creators Update)            Catalog 4467681
  .NET Framework 3.5, 4.7.1, 4.7.2                4467681
Windows 10 1703 (Creators Update)                 Catalog 4467699
  .NET Framework 3.5, 4.7, 4.7.1, 4.7.2           4467699
Windows 10 1607 (Anniversary Update)              Catalog 4467684
  .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2    4467684

The following table is for earlier Windows and Windows Server versions.

Product Version                                        Preview of Quality Rollup KB
Windows 8.1, Windows RT 8.1, Windows Server 2012 R2    Catalog 4467226
  .NET Framework 3.5                                   Catalog 4459935
  .NET Framework 4.5.2                                 Catalog 4459943
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2  Catalog 4467087
Windows Server 2012                                    Catalog 4467225
  .NET Framework 3.5                                   Catalog 4459932
  .NET Framework 4.5.2                                 Catalog 4459944
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2  Catalog 4467086
Windows 7, Windows Server 2008 R2                      Catalog 4467224
  .NET Framework 3.5.1                                 Catalog 4459934
  .NET Framework 4.5.2                                 Catalog 4459945
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2  Catalog 4467088
Windows Server 2008                                    Catalog 4467227
  .NET Framework 2.0, 3.0                              Catalog 4459933
  .NET Framework 4.5.2                                 Catalog 4459945
  .NET Framework 4.6                                   Catalog 4467088

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:


Experiencing Data Access Issue in Azure and OMS portal for Log Analytics – 11/27 – Resolved

Final Update: Tuesday, 27 November 2018 17:38 UTC
We've confirmed that all systems are back to normal with no customer impact as of 11/27, 16:30 UTC. Our logs show the incident started on 11/27, 16:15 UTC and that during the 15 minutes it took to resolve the issue, 5% of customers in the UK South and East US regions might have experienced data access issues in the OMS Portal for Log Analytics.
Root Cause: The failure was due to an issue in one of the dependent services.
Incident Timeline: 15 minutes - 11/27, 16:15 UTC through 11/27, 16:30 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
-Sindhu



Delays using Hosted Ubuntu Pools in West Europe – 11/27 – Mitigated


Final Update: Tuesday, November 27th 2018 23:58 UTC

We’ve confirmed that all systems are back to normal as of 2018/11/27 23:21 UTC. Our logs show the incident started on 2018/11/27 21:39 UTC and that during the 1 hour and 42 minutes it took to resolve the issue, some customers may have seen delays in obtaining available Hosted Ubuntu agents in West Europe.

Sincerely,
Vitaliy


Initial Update: Tuesday, November 27th 2018 22:38 UTC

We're investigating potential customer impact in form of delays obtaining available Hosted Ubuntu Agents in West Europe.

  • Customers might see delays when using Hosted Ubuntu Pools in West Europe.
  • Other regions and hosted pools are not affected.
  • Next Update: Before Tuesday, November 27th 2018 23:10 UTC

Sincerely,
Vitaliy

Issues when syncing to BPM using a custom Azure DevOps (VSTS) process template


When setting up your Azure DevOps (VSTS) project in Lifecycle Services (LCS), if you use an inherited process template with custom fields, you may receive the following warning message: "The selected VSTS project is based on a custom process template. Custom process templates are not supported and may cause errors when synchronizing Business Process Modeler libraries with VSTS. Lifecycle Services supports the standard Agile, CMMI, and Scrum process templates."

This warning appears because changes in a custom process template can cause synchronization issues between Business Process Modeler (BPM) libraries and Azure DevOps. If you follow the best practices noted below, you can safely ignore this warning message and continue using the custom template inherited from the Agile, CMMI, or Scrum process templates.

Best practices for using custom process templates include:

  • Do not delete any work item types or out-of-the-box fields. You can add custom work item types or fields, but do not delete any default work item types or fields inherited from process templates.
  • Do not delete any state of a work item type. You can add additional state to a work item type, but do not delete any default state inherited from a process template.
  • Do not add any required fields to a work item type. That is, if you added custom fields, do not make them mandatory fields.

We are working on removing this warning for connections that do not make any breaking changes.

 

Mapping of requirements

If you use a custom process template, you can map requirements to “Tasks” or “Bugs” only. It's helpful to know that requirements are only stored in Azure DevOps (they are not stored in BPM and then synchronized). This mapping is only a shortcut to allow creation of requirements from the BPM user interface. If you don’t want to use “Tasks” to track requirements and prefer to use “User Story”, for example, you can do this directly in Azure DevOps. For example, you can create the work item in Azure DevOps as a child of the desired Feature or Epic. To follow this best practice, you can add the following tag to your requirement work items in Azure DevOps (LCS:RequirementsGap or LCS:RequirementsFit).

You can re-establish the LCS project settings connection to the Azure DevOps mapping via Restore to default mappings during the setup. This will only restore the default mapping between LCS and Azure DevOps; you will not lose any data in Azure DevOps.

We are working to enable the selection of “User Story”, “Backlog Item”, or “Requirement” when mapping requirements to the respective custom process template (Agile, Scrum, or CMMI), but we do not have a timeline as to when this will be available. This should not block you from proceeding with your implementation.

 

Visual Studio Toolbox: Building Web APIs Part 1


This is the first of a three-part series where I am joined by Chris Woodruff, who shows how to build ASP.NET Core Web APIs. In this episode, Chris shows how to kick off your first project and what ASP.NET Core 2.1 offers to make our developer lives better! He looks at dependency injection, Entity Framework Core, and other .NET Core goodies to make your APIs the best for all platforms.

Episodes:

  • Part 1: Creating a RESTful API with ASP.NET Core 2.1 Web API (this episode)
  • Part 2: Creating the Best Architecture for ASP.NET Core 2.1 Web API
  • Part 3: Testing ASP.NET Core 2.1 Web API Solutions

Experiencing Data Access Issue in Azure and OMS portal for Log Analytics in FairFax region – 11/28 – Resolved

Final Update: Wednesday, 28 November 2018 01:32 UTC
We've confirmed that all systems are back to normal with no customer impact as of 11/28, 01:21 UTC. Our logs show the incident started on 11/27, 22:41 UTC and that during the 2 hours and 42 minutes it took to resolve the issue, all customers experienced data access issues in the OMS and Azure portals in the Fairfax region.
Root Cause: The failure was due to an issue in one of our backend services.
Incident Timeline: 2 Hours & 42 minutes - 11/27, 22:41 UTC through 11/28, 01:21 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
-Sindhu

Initial Update: Wednesday, 28 November 2018 00:13 UTC
We are aware of issues within Log Analytics and are actively investigating. All customers may experience data access issues in OMS and Azure Portal in Fairfax region.
Work Around: None
Next Update: Before 11/28 02:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Sindhu

Blocking malicious versions of event-stream and flatmap-stream packages


On November 26, 2018, the npm package manager released security advisory 737 regarding the flatmap-stream package. It was determined that this package was malicious, and contained harmful code. In addition, the popular event-stream package was modified to make use of the harmful flatmap-stream package.

These malicious packages were apparently attempting to locate bitcoin wallets stored on the computer running the packages and exfiltrate the coins. npm has removed the flatmap-stream package from their registry. Visual Studio Code has also taken steps to block affected extensions.

In response to this incident, we changed Azure DevOps to block the harmful flatmap-stream 0.1.0 package and the versions of event-stream newer than version 3.3.4 which make use of the flatmap-stream package.

We will also be contacting customers whose feeds contain the malicious packages. After deploying the block, you will not be able to download these packages or publish them to Azure DevOps.

Version 3.3.4 of event-stream, and versions prior to that, were not affected by this security advisory and have not been blocked.  We advise users of the event-stream package to ensure that they remain on version 3.3.4.
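
One quick way to check whether your project pulls in an affected version is to list the two packages in your dependency tree, for example:

# Show which installed versions of the two packages your project depends on
npm ls event-stream flatmap-stream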

We will provide an update when the packages have been blocked in Azure DevOps.

UPDATE: We've deployed the block.

Elster new error: "Eine Objektinstanz wurde nicht auf einen Objektverweis festgelegt." (Error 10000)


The minimum version required by the Elster interface has been raised. The file as we currently generate it is no longer accepted, even though the interface itself remains available until January. Unfortunately, it is not enough to simply raise the version of the transfer header: raising the interface version also changed the signature procedure, so the procedure we currently use is no longer supported. (See the description of the Elster interface.)

 

With the introduction of TransferHeader version 11, only one kind of authentication remains. It is performed via the encrypted, compressed, base64-encoded data part and is stored in the TransferHeader (operation: "send-Auth").

The signature in the NutzdatenHeader (operation: "send-Sig") is no longer supported.

 

The updated objects will be provided in the December Cumulative Update. The upload must then be performed manually.

Kind regards,

Andreas Günther

Escalation Engineer
Microsoft Dynamics 365 Business Central
CSS EMEA Dynamics and SMS&P


Where’s my node? Logic Apps Flat File Decode beware suppress empty nodes unintended effect


The flat file schema property 'suppress_empty_nodes' is described as "Indicates whether or not to remove empty XML nodes after the parser generates XML instance data." in the documentation https://msdn.microsoft.com/en-us/library/aa559329.aspx

Sandro Pereira blogs about how this can be helpful in his post Teach me something new about Flat Files.

However, you may not want to suppress nodes that are empty in the W3C sense (i.e., they have no value, even if they have attributes), such as:

  <Header BatchDate="2018-11-28" RecordCount="123" PostAmt="123456.1200|123"></Header>

When the node is required in the schema as in:

       <xs:element minOccurs="1" maxOccurs="1" name="Header">

If you run an XML validation on output whose empty node was suppressed, you will get an error like this:

       System.Xml.XmlException: The element 'MyRootNode' has invalid child element 'Loop'. List of possible elements expected: 'Header'.

So, in the schemaInfo, beware of

       suppress_empty_nodes="true"

and as appropriate change it to

       suppress_empty_nodes="false"

How to Encrypt SQL communication on the wire


Senior App Dev Manager Sanket Bakshi discusses techniques to secure data across the wire when moving SQL workloads to the cloud.


On-premises, most applications did not secure communications to the database on the wire, simply because of the inherent isolation advantage of having the application only on the internal LAN. However, as more and more of these applications move to the cloud, the security of data over the wire is rightfully gaining importance.

Here is a quick guide to how you can setup your databases and database clients to encrypt communications over the wire. Do note that if you are moving your databases to Azure SQL database, you don’t really need to worry about this. Azure SQL Database enforces encryption (SSL/TLS) at all times for all connections, which ensures all data is encrypted "in transit" between the database and the client. There are some minor considerations with Azure SQL databases that we will cover.

Continue reading on Sanket’s blog.

Announcing .NET Framework 4.8 Early Access build 3694


We are happy to let you know that .NET Framework 4.8 is now feature complete and we have an early access build to share with you all! We will continue to stabilize this release and take more fixes over the coming months, and we would greatly appreciate it if you could help us ensure this is a high-quality release by trying it out and providing feedback on the new features via .NET Framework Early Access GitHub repository.

This build includes an updated .NET 4.8 runtime as well as the .NET 4.8 Developer Pack (a single package that bundles the .NET Framework 4.8 runtime, the .NET 4.8 Targeting Pack and the .NET Framework 4.8 SDK). Please note: this build is not supported for production use.

Next steps:
To explore the new features, download the .NET 4.8 Developer Pack build 3694. If, instead, you want to try just the .NET 4.8 runtime, you can download either of these:

This preview build 3694 includes improvements/fixes in the following areas:

  • [BCL] – Reducing FIPS Impact on Cryptography
  • [CLR] – Antimalware scanning for all assemblies
  • [WCF] – ServiceHealthBehavior
  • [WPF] – Support for UIAutomation ControllerFor property
  • [WPF] – Tooltips on keyboard access
  • [WPF] – Added Support for SizeOfSet and PositionInSet UIAutomation properties

You can see the complete list of improvements in this build here.

.NET Framework build 3694 is also included in the next update for Windows 10. You can sign up for Windows Insiders to validate that your applications work great on the latest .NET Framework included in the latest Windows 10 releases.

BCL – Reducing FIPS Impact on Cryptography

.NET Framework 2.0+ have cryptographic provider classes such as SHA256Managed, which throw a CryptographicException when the system cryptographic libraries are configured in “FIPS mode”. These exceptions are thrown because the managed versions have not undergone FIPS (Federal Information Processing Standards) 140-2 certification (JIT and NGEN image generation would both invalidate the certificate), unlike the system cryptographic libraries. Few developers have their development machines in “FIPS mode”, which results in these exceptions being raised in production (or on customer systems). The “FIPS mode” setting was also used by .NET Framework to block cryptographic algorithms which were not considered an approved algorithm by the FIPS rules.

For applications built for .NET Framework 4.8, these exceptions will no longer be thrown (by default). Instead, the SHA256Managed class (and the other managed cryptography classes) will redirect the cryptographic operations to a system cryptography library. This policy change effectively removes a potentially confusing difference between developer environments and the production environments in which the code runs and makes native components and managed components operate under the same cryptographic policy.

Applications targeting .NET Framework 4.8 will automatically switch to the newer, relaxed policy and will no longer see exceptions being thrown from MD5Cng, MD5CryptoServiceProvider, RC2CryptoServiceProvider, RIPEMD160Managed, and RijndaelManaged when in “FIPS mode”. Applications which depend on the exceptions from previous versions can return to the previous behavior by setting the AppContext switch “Switch.System.Security.Cryptography.UseLegacyFipsThrow” to “true”.
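
As a sketch, an App.config that opts back in to the legacy throwing behavior might look like the following (the switch name is taken from the paragraph above; AppContextSwitchOverrides is the standard mechanism for such switches):

<configuration>
  <runtime>
    <!-- Restore the pre-4.8 behavior of throwing in "FIPS mode" -->
    <AppContextSwitchOverrides value="Switch.System.Security.Cryptography.UseLegacyFipsThrow=true" />
  </runtime>
</configuration>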

Runtime – Antimalware Scanning for All Assemblies

In previous versions of .NET Framework, Windows Defender or third-party antimalware software would automatically scan all assemblies loaded from disk for malware. However, assemblies loaded from elsewhere, such as by using Assembly.Load(byte[]), would not be scanned and could potentially carry viruses undetected.

.NET Framework 4.8 on Windows 10 triggers scans for those assemblies by Windows Defender and many other antimalware solutions that implement the Antimalware Scan Interface. We expect that this will make it harder for malware to disguise itself in .NET programs.

WCF - ServiceHealthBehavior

Health endpoints have many benefits and are widely used by orchestration tools to manage the service based on the service health status. Health checks can also be used by monitoring tools to track and alert on the availability and performance of the service, where they serve as early problem indicators. 

ServiceHealthBehavior is a WCF service behavior that extends IServiceBehavior.  When added to the ServiceDescription.Behaviors collection, it will enable the following: 

  • Return service health status with HTTP response codes: One can specify in the query string the HTTP status code for a HTTP/GET health probe request.
  • Publication of service health: Service-specific details, including service state, throttle counts, and capacity, are displayed using an HTTP/GET request with the “?health” query string. Knowing and easily having access to the information displayed is important when troubleshooting a misbehaving WCF service.
Config ServiceHealthBehavior:

There are two ways to expose the health endpoint and publish WCF service health information: by using code or by using a configuration file (a configuration sketch follows the list below).

    • Enable health endpoint using code
    • Enable health endpoint using config
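
For the configuration route, a sketch might look like the following; note that the serviceHealth element name and its httpGetEnabled attribute are our assumption here, mirroring the familiar serviceMetadata shape:

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="HealthBehavior">
        <!-- Assumed element/attribute names: expose the health endpoint over HTTP/GET -->
        <serviceHealth httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
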
Return service health status with HTTP response codes:

Health status can be queried by query parameters (OnServiceFailure, OnDispatcherFailure, OnListenerFailure, OnThrottlePercentExceeded). HTTP response code (200 – 599) can be specified for each query parameter. If the HTTP response code is omitted for a query parameter, a 503 HTTP response code is used by default.

Query parameters and examples:

1. OnServiceFailure:

  • Example: by querying https://contoso:81/Service1?health&OnServiceFailure=450, a 450 HTTP response status code is returned when ServiceHost.State is greater than CommunicationState.Opened.

2. OnDispatcherFailure:

  • Example: by querying https://contoso:81/Service1?health&OnDispatcherFailure=455, a 455 HTTP response status code is returned when the state of any of the channel dispatchers is greater than CommunicationState.Opened.

3. OnListenerFailure:

  • Example: by querying https://contoso:81/Service1?health&OnListenerFailure=465, a 465 HTTP response status code is returned when the state of any of the channel listeners is greater than CommunicationState.Opened. 

4. OnThrottlePercentExceeded: Specifies the percentage {1 – 100} that triggers the response and its HTTP response code {200 - 599}.

  • Example: by querying https://contoso:81/Service1?health&OnThrottlePercentExceeded=70:350,95:500, when the throttle percentage is equal to or larger than 95%, a 500 HTTP response code is returned; when the percentage is equal to or larger than 70% and less than 95%, 350 is returned; otherwise, 200 is returned.
Publication of service health:

After enabling the health endpoint, the service health status can be displayed either in HTML (by specifying the query string: https://contoso:81/Service1?health) or XML (by specifying the query string: https://contoso:81/Service1?health&Xml) format. https://contoso:81/Service1?health&NoContent returns an empty HTML page.

Note:

It’s best practice to always limit access to the service health endpoint. You can restrict access by using the following mechanisms:

  1. Use a different port for the health endpoint than what’s used for the other services, and use a firewall rule to control access.
  2. Add the desirable authentication and authorization to the health endpoint binding.

WPF – Support for UIAutomation ControllerFor property

UIAutomation’s ControllerFor property returns an array of automation elements that are manipulated by the automation element that supports this property. This property is commonly used for Auto-suggest accessibility. ControllerFor is used when an automation element affects one or more segments of the application UI or the desktop. Otherwise, it is hard to associate the impact of the control operation with UI elements. This feature adds the ability for controls to provide a value for ControllerFor property.

A new virtual method has been added to AutomationPeer. To provide a value for the ControllerFor property, simply override this method and return the list of AutomationPeers for the controls being manipulated by this AutomationPeer:
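
A sketch of such an override follows; the member name and return type are our assumption about the 4.8 AutomationPeer surface, and suggestionList is a hypothetical field on the owning control:

protected override IReadOnlyList<AutomationPeer> GetControlledPeersCore()
{
    // Return the peers for the UI this element controls, e.g. the
    // suggestion list of an auto-suggest box (suggestionList is hypothetical).
    return new List<AutomationPeer>
    {
        UIElementAutomationPeer.CreatePeerForElement(suggestionList)
    };
}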

WPF – Tooltips on keyboard access

Currently tooltips only display when a user hovers the mouse cursor over a control. In .NET Framework 4.8, WPF is adding a feature that enables tooltips to show on keyboard focus, as well as via a keyboard shortcut.

To enable this feature, an application needs to target .NET Framework 4.8 or opt in via the AppContext switches “Switch.UseLegacyAccessibilityFeatures.3” and “Switch.UseLegacyToolTipDisplay”.

Sample App.config file: 
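
A minimal sketch, assuming the standard AppContextSwitchOverrides shape (setting both legacy switches to false opts in to the new tooltip behavior):

<configuration>
  <runtime>
    <AppContextSwitchOverrides value="Switch.UseLegacyAccessibilityFeatures.3=false;Switch.UseLegacyToolTipDisplay=false" />
  </runtime>
</configuration>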

Once enabled, all controls containing a tooltip will start to display it once the control receives keyboard focus. The tooltip can be dismissed over time or when keyboard focus changes. Users can also dismiss the tooltip manually via a new keyboard shortcut Ctrl + Shift + F10. Once the tooltip has been dismissed it can be displayed again via the same keyboard shortcut.

Note: RibbonToolTips on Ribbon controls won’t show on keyboard focus - they will only show via the keyboard shortcut.

WPF – Added Support for SizeOfSet and PositionInSet UIAutomation properties

Windows 10 introduced new UIAutomation properties SizeOfSet and PositionInSet which are used by applications to describe the count of items in a set. UIAutomation client applications such as screen readers can then query an application for these properties and announce an accurate representation of the application’s UI.

This feature adds support for WPF applications to expose these two properties to UIAutomation. This can be accomplished in two ways:

  1. DependencyProperties 

New DependencyProperties SizeOfSet and PositionInSet have been added to the System.Windows.Automation.AutomationProperties namespace. A developer can set their values via XAML:
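
For example, a sketch of three buttons announced as "1 of 3" through "3 of 3":

<!-- Each button declares the size of the set and its position within it -->
<Button AutomationProperties.SizeOfSet="3" AutomationProperties.PositionInSet="1" Content="One" />
<Button AutomationProperties.SizeOfSet="3" AutomationProperties.PositionInSet="2" Content="Two" />
<Button AutomationProperties.SizeOfSet="3" AutomationProperties.PositionInSet="3" Content="Three" />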

  2. AutomationPeer virtual methods 

Virtual methods GetSizeOfSetCore and GetPositionInSetCore have also been added to the AutomationPeer class. A developer can provide values for SizeOfSet and PositionInSet by overriding these methods:
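
A sketch of a custom peer doing this (the values are illustrative):

// Report this item as the first of a set of three.
protected override int GetSizeOfSetCore()
{
    return 3;
}

protected override int GetPositionInSetCore()
{
    return 1;
}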

Automatic values 

Items in ItemsControls will provide a value for these properties automatically, without additional action from the developer. If an ItemsControl is grouped, the collection of groups will be represented as a set and each group counted as a separate set, with each item inside a group providing its position inside that group as well as the size of the group. Automatic values are not affected by virtualization: even if an item is not realized, it is still counted towards the total size of the set and affects the position in the set of its sibling items.

Automatic values are only provided if the developer is targeting .NET Framework 4.8 or has set the AppContext switch “Switch.UseLegacyAccessibilityFeatures.3” - for example via App.config file:
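
A minimal sketch of that opt-in:

<configuration>
  <runtime>
    <AppContextSwitchOverrides value="Switch.UseLegacyAccessibilityFeatures.3=false" />
  </runtime>
</configuration>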

Previous .NET Framework Early Access Build

Closing

Thanks for your continued support of the Early Access Program. We will do our best to ensure these builds are stable and compatible, but if you see bugs or issues, please take the time to report them to us on GitHub so we can address them before the official release.

Thank you!

 

DSC Resource Kit Release November 2018


We just released the DSC Resource Kit!

This release includes updates to 9 DSC resource modules. In the past 6 weeks, 61 pull requests have been merged and 67 issues have been closed, all thanks to our amazing community!

The modules updated in this release are:

  • AuditPolicyDsc
  • DFSDsc
  • NetworkingDsc
  • SecurityPolicyDsc
  • SharePointDsc
  • StorageDsc
  • xBitlocker
  • xExchange
  • xHyper-V

For a detailed list of the resource modules and fixes in this release, see the Included in this Release section below.

Our latest community call for the DSC Resource Kit was supposed to be today, November 28, but the public link to the call expired, so the call was cancelled. I will update the link for next time. If there is interest in rescheduling this call, the new call time will be announced on Twitter (@katiedsc or @migreene). The call for the next release cycle is also getting moved a week later than usual, to January 9 at 12PM (Pacific standard time). Join us to ask questions and give feedback about your experience with the DSC Resource Kit.

The next DSC Resource Kit release will be on Wednesday, January 9.

We strongly encourage you to update to the newest version of all modules using the PowerShell Gallery, and don't forget to give us your feedback in the comments below, on GitHub, or on Twitter (@PowerShell_Team)!

Please see our documentation here for information on the support of these resource modules.

Included in this Release

You can see a detailed summary of all changes included in this release in the table below. For past release notes, go to the README.md or CHANGELOG.md file on the GitHub repository page for a specific module (see the How to Find DSC Resource Modules on GitHub section below for details on finding the GitHub page for a specific module).

Module Name Version Release Notes
AuditPolicyDsc 1.3.0.0
  • Update LICENSE file to match the Microsoft Open Source Team standard.
  • Added the AuditPolicyGuid resource.
DFSDsc 4.2.0.0
  • Add support for modifying staging quota size in MSFT_DFSReplicationGroupMembership - fixes Issue 77.
  • Refactored module folder structure to move resource to root folder of repository and remove test harness - fixes Issue 74.
  • Updated Examples to support deployment to PowerShell Gallery scripts.
  • Remove exclusion of all tags in appveyor.yml, so all common tests can be run if opt-in.
  • Added .VSCode settings for applying DSC PSSA rules - fixes Issue 75.
  • Updated LICENSE file to match the Microsoft Open Source Team standard - fixes Issue 79
NetworkingDsc 6.2.0.0
  • Added .VSCode settings for applying DSC PSSA rules - fixes Issue 357.
  • Updated LICENSE file to match the Microsoft Open Source Team standard - fixes Issue 363
  • MSFT_NetIPInterface:
    • Added a new resource for configuring the IP interface settings for a network interface.
SecurityPolicyDsc 2.6.0.0
  • Added SecurityOption - Network_access_Restrict_clients_allowed_to_make_remote_calls_to_SAM
  • Bug fix - Issue 105 - Spelling error in SecurityOption "User_Account_Control_Behavior_of_the_elevation_prompt_for_standard_users"
  • Bug fix - Issue 90 - Corrected value for Microsoft_network_server_Server_SPN_target_name_validation_level policy
SharePointDsc 3.0.0.0
  • Changes to SharePointDsc
    • Added support for SharePoint 2019
    • Added CredSSP requirement to the Readme files
    • Added VSCode Support for running SharePoint 2019 unit tests
    • Removed the deprecated resources SPCreateFarm and SPJoinFarm (replaced in v2.0 by SPFarm)
  • SPBlobCacheSettings
    • Updated the Service Instance retrieval to be language independent
  • SPConfigWizard
    • Fixed check for Ensure=Absent in the Set method
  • SPInstallPrereqs
    • Added support for detecting updated installation of Microsoft Visual C++ 2015/2017 Redistributable (x64) for SharePoint 2016 and SharePoint 2019.
  • SPSearchContentSource
    • Added support for Business Content Source Type
  • SPSearchMetadataCategory
    • New resource added
  • SPSearchServiceApp
    • Updated resource to make sure the presence of the service app proxy is checked and created if it does not exist
  • SPSecurityTokenServiceConfig
    • The resource only tested for the Ensure parameter. Added more parameters
  • SPServiceAppSecurity
    • Added support for specifying array of access levels.
    • Changed implementation to use Grant-SPObjectSecurity with Replace switch instead of using a combination of Revoke-SPObjectSecurity and Grant-SPObjectSecurity
    • Added all supported access levels as available values.
    • Removed unknown access levels: Change Permissions, Write, and Read
  • SPUserProfileProperty
    • Removed obsolete parameters (MappingConnectionName, MappingPropertyName, MappingDirection) and introduced new parameter PropertyMappings
  • SPUserProfileServiceApp
    • Updated the check for successful creation of the service app to throw an error if this is not done correctly.
  The following changes will break v2.x and earlier configurations that use these resources:
  • Implemented IsSingleInstance parameter to force that the resource can only be used once in a configuration for the following resources:
    • SPAntivirusSettings
    • SPConfigWizard
    • SPDiagnosticLoggingSettings
    • SPFarm
    • SPFarmAdministrators
    • SPInfoPathFormsServiceConfig
    • SPInstall
    • SPInstallPrereqs
    • SPIrmSettings
    • SPMinRoleCompliance
    • SPPasswordChangeSettings
    • SPProjectServerLicense
    • SPSecurityTokenServiceConfig
    • SPShellAdmin
  • Standardized Url/WebApplication parameter to default WebAppUrl parameter for the following resources:
    • SPDesignerSettings
    • SPFarmSolution
    • SPSelfServiceSiteCreation
    • SPWebAppBlockedFileTypes
    • SPWebAppClientCallableSettings
    • SPWebAppGeneralSettings
    • SPWebApplication
    • SPWebApplicationAppDomain
    • SPWebAppSiteUseAndDeletion
    • SPWebAppThrottlingSettings
    • SPWebAppWorkflowSettings
  • Introduced new mandatory parameters
    • SPSearchResultSource: Added option to create Result Sources at different scopes.
    • SPServiceAppSecurity: Changed parameter AccessLevel to AccessLevels in MSFT_SPServiceAppSecurityEntry to support array of access levels.
    • SPUserProfileProperty: New parameter PropertyMappings
SharePointDsc 3.1.0.0
  • Changes to SharePointDsc
    • Updated LICENSE file to match the Microsoft Open Source Team standard.
  • ProjectServerConnector
    • Added a file hash validation check to prevent the ability to load custom code into the module.
  • SPFarm
    • Fixed localization issue where TypeName was in the local language.
  • SPInstallPrereqs
    • Updated links in the Readme.md file to docs.microsoft.com.
    • Fixed required prereqs for SharePoint 2019, added MSVCRT11.
  • SPManagedMetadataServiceApp
    • Fixed issue where Get-TargetResource method throws an error when the service app proxy does not exist.
  • SPSearchContentSource
    • Corrected issue where the New-SPEnterpriseSearchCrawlContentSource cmdlet was called twice.
  • SPSearchServiceApp
    • Fixed issue where Get-TargetResource method throws an error when the service application pool does not exist.
    • Implemented check to make sure cmdlets are only executed when it actually has something to update.
    • Deprecated WindowsServiceAccount parameter and moved functionality to new resource (SPSearchServiceSettings).
  • SPSearchServiceSettings
    • Added new resource to configure search service settings.
  • SPServiceAppSecurity
    • Fixed unavailable utility method (ExpandAccessLevel).
    • Updated the schema to no longer specify username as key for the sub class.
  • SPUserProfileServiceApp
    • Fixed issue where localized versions of Windows and SharePoint would throw an error.
  • SPUserProfileSyncConnection
    • Corrected implementation of Ensure parameter.
StorageDsc 4.3.0.0
  • WaitForDisk:
    • Added readonly-property isAvailable which shows the current state of the disk as a boolean - fixes Issue 158.
xBitlocker 1.3.0.0
  • Update appveyor.yml to use the default template.
  • Added default template files .gitattributes, and .vscode settings.
  • Fixes most PSScriptAnalyzer issues.
  • Fix issue where AutoUnlock is not set if requested, if the disk was originally encrypted and AutoUnlock was not used.
  • Add remaining Unit Tests for xBitlockerCommon.
  • Add Unit tests for MSFT_xBLTpm
  • Add remaining Unit Tests for xBLAutoBitlocker
  • Add Unit tests for MSFT_xBLBitlocker
  • Moved change log to CHANGELOG.md file
  • Fixed Markdown validation warnings in README.md
  • Added .MetaTestOptIn.json file to root of module
  • Add Integration Tests for module resources
  • Rename functions with improper Verb-Noun constructs
  • Add comment based help to any functions without it
  • Update Schema.mof Description fields
  • Fixes issue where Switch parameters are passed to Enable-Bitlocker even if the corresponding DSC resource parameter was set to False (Issue 12)
xExchange 1.25.0.0
  • Opt-in for the common test flagged Script Analyzer rules (issue 234).
  • Opt-in for the common test testing for relative path length.
  • Removed the property PSDscAllowPlainTextPassword from all examples so the examples are secure by default. The property PSDscAllowPlainTextPassword was previously needed to (test) compile the examples in the CI pipeline, but now the CI pipeline is using a certificate to compile the examples.
  • Opt-in for the common test that validates the markdown links.
  • Fix typo of the word "Certificate" in several example files.
  • Add spaces between array members.
  • Add initial set of Unit Tests (mostly Get-TargetResource tests) for all remaining resource files.
  • Add WaitForComputerObject parameter to xExchWaitForDAG
  • Add spaces between comment hashtags and comments.
  • Add space between variable types and variables.
  • Fixes issue where xExchMailboxDatabase fails to test for a Journal Recipient because the module did not load the Get-Recipient cmdlet (335).
  • Fixes broken Integration tests in MSFT_xExchMaintenanceMode.Integration.Tests.ps1 (336).
  • Fix issue where Get-ReceiveConnector against an Absent connector causes an error to be logged in the MSExchange Management log.
  • Rename poorly named functions in xExchangeDiskPart.psm1 and MSFT_xExchAutoMountPoint.psm1, and add comment based help.
xHyper-V 3.14.0.0
  • MSFT_xVMHost:
    • Added support to Enable / Disable VM Live Migration. Fixes Issue 155.

How to Find Released DSC Resource Modules

To see a list of all released DSC Resource Kit modules, go to the PowerShell Gallery and display all modules tagged as DSCResourceKit. You can also enter a module's name in the search box in the upper right corner of the PowerShell Gallery to find a specific module.

Of course, you can also always use PowerShellGet (available starting in WMF 5.0) to find modules with DSC Resources:

# To list all modules that tagged as DSCResourceKit
Find-Module -Tag DSCResourceKit 
# To list all DSC resources from all sources 
Find-DscResource

Please note only those modules released by the PowerShell Team are currently considered part of the 'DSC Resource Kit' regardless of the presence of the 'DSC Resource Kit' tag in the PowerShell Gallery.

To find a specific module, go directly to its URL on the PowerShell Gallery:
http://www.powershellgallery.com/packages/< module name >
For example:
http://www.powershellgallery.com/packages/xWebAdministration

How to Install DSC Resource Modules From the PowerShell Gallery

We recommend that you use PowerShellGet to install DSC resource modules:

Install-Module -Name < module name >

For example:

Install-Module -Name xWebAdministration

To update all previously installed modules at once, open an elevated PowerShell prompt and use this command:

Update-Module

After installing modules, you can discover all DSC resources available to your local system with this command:

Get-DscResource

How to Find DSC Resource Modules on GitHub

All resource modules in the DSC Resource Kit are available open-source on GitHub.
You can see the most recent state of a resource module by visiting its GitHub page at:
https://github.com/PowerShell/< module name >
For example, for the CertificateDsc module, go to:
https://github.com/PowerShell/CertificateDsc.

All DSC modules are also listed as submodules of the DscResources repository in the DscResources folder and the xDscResources folder.

How to Contribute

You are more than welcome to contribute to the development of the DSC Resource Kit! There are several different ways you can help. You can create new DSC resources or modules, add test automation, improve documentation, fix existing issues, or open new ones.
See our contributing guide for more info on how to become a DSC Resource Kit contributor.

If you would like to help, please take a look at the list of open issues for the DscResources repository.
You can also check issues for specific resource modules by going to:
https://github.com/PowerShell/< module name >/issues
For example:
https://github.com/PowerShell/xPSDesiredStateConfiguration/issues

Your help in developing the DSC Resource Kit is invaluable to us!

Questions, comments?

If you're looking into using PowerShell DSC, have questions or issues with a current resource, or would like a new resource, let us know in the comments below, on Twitter (@PowerShell_Team), or by creating an issue on GitHub.

Katie Kragenbrink
Software Engineer
PowerShell DSC Team
@katiedsc (Twitter)
@kwirkykat (GitHub)

Introducing Azure DevOps Service Status Portal


Today, we’re happy to introduce the Azure DevOps service status portal, which provides real-time insight into active service events and further details on the event being investigated. This portal replaces our current experience using this service blog. No new posts will be published to this blog, and existing subscribers are encouraged to use the RSS feed available in the new portal.

To help clarify which specific aspects of the service are affected, we will communicate the impact of all active events in a two-dimensional service matrix that maps services against the geographic regions of impacted organizations.

An associated “event log” will be shown for all active events which will supply added context for the event being investigated. Information on past events can be found in the Status History section.

If you are experiencing problems with any of our Azure DevOps services, you can check the service status portal to determine whether this is a known issue with resolution in progress before you call support or spend time troubleshooting.

REST APIs will be made available soon for users looking to build automated solutions to watch service status in a programmatic way. Please stay tuned for more updates in that area.

For more information on the new portal, please refer to the Azure DevOps Service Status documentation.

 

Sincerely,
Sri Harsha Kalavala
SRE Program Manager, Azure DevOps
