I thought this was a solved problem, people. Agile has come out of the shadows and dominated the techie buzzword machine for years now. Everyone is doing Scrum now, right?
Well, OK, they’re usually doing “ScrumBut”. “We’re doing Scrum, but we’re not demoing our software to the customer.” Or, “We’re doing Scrum, BUT we’re not doing the team code ownership thing.” Usually, it just means that some of the team gathers on a semi-regular basis to do a “standup”. But that’s a rant for another time.
Unit testing. Why are we still not writing tests? I was invited by a US state a couple years ago to review some software for which they were getting ready to shell out a substantial sum of money. They wanted to know if the code was worth the dough. They owned the code – but there was not a single unit test in all of the 200K+ lines of code for the product. When we presented our findings, the lack of tests, which could assure the customer that the code was executing according to the requirements, was a primary factor in their rejection of the product.
Another customer asked us to assist them in migrating several dozen apps from on-premises to Azure. We went through dozens of interviews, and not one of those apps had unit testing. Each interview sounded the same refrain: “We’re going to start testing soon.” “That’s something we’re working on.” “We know that we haven’t done the best job with testing.” If they are to take advantage of the DevOps scenarios we’re proposing, they’ll need to be assured that their software is performing the way they think it should (that is, it’s tested) before they can even consider DevOps.
We all see the benefits of testing, right? Are we lazy? Are we pressured by management to “just build features” because there’s “no time for testing”? Unless you just don’t care if your code works, that’s got to cause all sorts of anxiety. Sometimes this happens, though, even at Microsoft Consulting Services. Several months ago I was asked to review one of our projects that was experiencing… “customer dissatisfaction”, let’s say. No tests. Even at MCS? We have our own Playbook that stresses the importance of testing, and says that if testing is not supported by the project or customer, it is reason to walk away from that customer.
Got tests? Ah, I can finally sleep at night. Peace. Assurance – and dare I say, “rigor”?
I’ve been doing this agile thing for about 14 years now. I watch trends, I practice the craft – and doing so has its momentum. I admit, I am not objective when it comes to testing. You better have some major facts behind your assertions (no testing pun intended) that testing is contraindicated for software and they better be easy to see, because I’ve lived this, like I said, for the past 14 years. I see the benefits. I am comforted by the fact that people maintaining the code I’ve written over the years – my code, the team’s code – can be assured that it’s still working, because all they have to do is to run the tests. Green is good.
Maintenance has been completed on the infrastructure for the Application Insights Availability Web Test feature. The necessary updates were installed successfully on all nodes supporting the feature.
-Deepesh
Planned Maintenance: 17:00 UTC, 20 June 2017 – 00:00 UTC, 24 June 2017
The Application Insights team will be performing planned maintenance on Availability Web Test feature. During the maintenance window we will be installing necessary updates on underlying infrastructure.
During this timeframe some customers may experience very short availability data gaps in one test location at a time. We will make every effort to limit the amount of impact to customer availability tests, but customers should ensure their availability tests are running from at least three locations to ensure redundant coverage through maintenance. Please refer to the following article on how to configure availability web tests: https://azure.microsoft.com/en-us/documentation/articles/app-insights-monitor-web-app-availability/
One of the features of the new setup engine is to enable Visual Studio to be installed side-by-side with other minor releases, including minor upgrades and preview releases. After introducing the Visual Studio Preview channel a short while ago, some partners asked for the ability to filter out those Preview releases from vswhere.exe. Because this is a breaking change in behavior, following semantic version guidelines I have bumped the major version to 2. I also realized that in the past I have added new features without bumping the minor version and will be more diligent going forward.
vswhere.exe version 2.0.2 was released last night to nuget.org (great for automating build and test environments, which the vswhere project itself uses), to chocolatey.org, and on its own GitHub release page. It adds a new switch parameter, -prerelease, which will include prereleases like Preview builds. This will only work when using query API version 1.11.2290 or newer, which is shown in the logo for vswhere.exe when it is executed (starting with version 1.0.62). Specifically, it is included in the header after the query API is loaded, which, for performance reasons, currently does not happen when displaying the help text using the -help switch parameter or when a syntax error is detected.
This new query API will be released in Visual Studio 2017 version 15.3.
The output of vswhere.exe when using the new query API will also include a boolean property, IsPrerelease. This is also true of the VSSetup PowerShell cmdlets, which also added a -Prerelease switch parameter to the Get-VSSetupInstance cmdlet. The PowerShell cmdlets also include information about the catalog that was last used to install or update the instance, including the semver productSemanticVersion, which backs the IsPrerelease property.
Based on telemetry, we expect this change in behavior to affect very few developers, and it seems most wanted this behavior anyway. While our Preview releases are well tested, we want to give partners using vswhere.exe the ability to select only releases that have been out for a while.
This article is great, and I encourage you to review its content and consider using it when you have a connectivity problem in the future. However, I’d also like to explain how I go about troubleshooting customer issues when connecting to Azure DB is a problem, and some of the common causes.
The first question I always ask is ‘Where are you connecting from?’ At first sight this may seem a trivial question, but it makes a lot of difference. As a first step, I usually advise using SSMS as a good tool to test with.
The three obvious answers are
a VM within Azure itself
a machine located inside a corporate network
a machine connected to the internet
Scenario 1:
Connection from 1 fails, while from 2 and 3 it works without error. This is quite a common issue and catches a lot of users out. Fortunately, it’s a straightforward issue to resolve. In this situation, remember that within Azure there is a setting, ‘Allow Azure services to connect’, but even with this set we might not be good to go. If connection is still not possible, in most cases the problem is that the ports on the VM are locked down. We all know that port 1433 needs to be open, which it undoubtedly does; but while connections from outside Azure will probably be happy with this, inside Azure the port ranges 11000-11999 and 14000-14999 should also be open. This is because connections from inside Azure make an initial connection over port 1433 and then negotiate the port they will actually use from the ranges given; connections from outside Azure go via the gateway, which masks this behaviour.
This is great if the client running on the VM supports TDS 7.4 AND also supports redirection; I’ve found that not all drivers supporting TDS 7.4 do. In that case, we can set the connection policy to Proxy to avoid redirection.
Scenario 2:
Connection from 2 fails, but from 1 and 3 it works without error. This is almost certainly down to the corporate firewall: you need to ensure that port 1433 is open, and also that the IP address ranges for Azure are open. They can be found here:
Be aware that the ranges are different for each data centre so you need to confirm the right range is open.
Scenario 3:
Nothing works. This is the worst-case scenario, but often the easiest to resolve; the important thing is to understand the error. You may see something like error 40 or error 53, which is usually down to name resolution. So the first test is a simple ping of the server name from a command prompt.
The above example shows that we have resolved the name back to an IP address and a data centre (useful for knowing which IP ranges you need). The timeout is fine and can be ignored; it is the name resolution we are looking for here.
Scenario 4:
Azure DB has its own firewall; usually, from inside Azure this doesn’t come into play unless you have set ‘Don’t allow Azure services to connect’. These firewalls exist at two levels: one at the server and one at the database. A troubleshooting step I often recommend is to open all addresses (0.0.0.0 – 255.255.255.255), which should wave everyone through. A useful feature of SSMS is that if you are an admin and you attempt to connect to Azure DB from an IP address that is not allowed, you will be asked whether you want to add a rule to allow your IP or subnet to connect. Something like this…
Scenario 5:
In SSMS you can get as far as connecting, and then an error such as 18456 is thrown. This can be caused by a number of things; indeed, 18456 is traditionally a ‘catch-all’ authentication error. The state of the error can give a good idea of what is going on. You can see the list of states here:
Often this will be down to a bad user name or password, or it may be that the Login has been created in Master but a user has not been created in the Master DB or the required user DB.
Scenario 6:
The above are all ‘hard’ failures, insofar as you will either connect or you won’t; that is the ‘easy’ part. It is when you get connection issues that come and go that your work gets harder.
As we know, databases can scale in Azure DB, allowing some quite large DBs to be hosted, but this creates an issue. Take, for example, an Azure DB server with several Premium tier databases and thousands of connections occurring. All OK, I hear you say: they are all Premium DBs, so what could go wrong? Well, there is a problem: it may be that all of the authentication is going via the master DB, and at times of stress you can overload the master DB. This is not good at all, but there is a solution: contained databases, where the users live in the user DB rather than only as logins in the master DB. This puts much less load on the master DB. There is another really important side effect: the geo-replication feature allows DBs to replicate across regions (because we all want our data to be safe), and if you use contained DBs then your logins/users get replicated too, so you don’t need to worry about moving users between master DBs (we call that a win). As an aside, geo-replication now has a listener which can automate failover.
Now, having covered these scenarios, the one single piece of advice I can give is to ensure that you have retry logic implemented within your application. There are a number of situations that will require your database to be taken offline for short periods of time; if you get your retry logic in place, most of these will simply never appear as a problem to your applications.
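As a rough sketch of the idea (generic TypeScript, not tied to any particular database driver; the wrapped operation is a placeholder), retry logic can be as simple as a wrapper with exponential backoff:

// Minimal generic retry helper with exponential backoff.
// Wrap any transient-failure-prone operation, such as opening a connection or running a query.
async function withRetry<T>(operation: () => Promise<T>, maxAttempts = 5): Promise<T> {
    let delayMs = 500;
    for (let attempt = 1; ; attempt++) {
        try {
            return await operation();
        } catch (err) {
            if (attempt >= maxAttempts) {
                throw err; // out of attempts, surface the error
            }
            // wait, then double the delay before the next attempt
            await new Promise(resolve => setTimeout(resolve, delayMs));
            delayMs *= 2;
        }
    }
}

// Usage sketch: await withRetry(() => runMyQuery());  // runMyQuery is a placeholder for your data-access call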
Here we are, just a couple weeks from my favorite week of the year, our annual Microsoft partner conference, now known as Inspire. Over the course of my career at Microsoft, I have had the extreme pleasure to work with so many amazing Partners, both individuals at these partners and the organizations themselves. A couple of years ago, I put up my “It has been a fun 15 years,” post, covering many of these incredible opportunities I have had, including the creation of the “Fantastic People of WPC,” celebrating the opportunity to meet so many amazing partners from around the world during “Worldwide Partner Conference.”
Over the years at Worldwide Partner Conference, there have been so many fun times, such as:
The annual contest to see who would be the Top 5 live tweeters covering the WPC keynotes and conference
Hanging out and connecting with all of the press and influencers while covering the WPC keynotes from the WPC press box
Hosting the MPN Live shows talking with partners from around the world
Annual tweet-ups, connecting partners and partner influencers
Virtual scavenger hunts I’ve run using social media and/or my Microsoft Partner mobile app
All of the partner receptions and meet ups
And much more…
As this year’s Partner conference approaches, now known as Inspire instead of Worldwide Partner Conference, I’ve been talking with several partners and received some comments and questions such as:
“We need to make sure we grab our annual pic this year”
“Looking forward to being part of the Fantastic People this year”
“Are you changing the name of your Fantastic People of WPC now that the conference is renamed?”
You know what they say, “All good things eventually come to an end,” and this year brings with it the end of an era. With the transition of WPC to Inspire this year, I can still say that I have been to every single Worldwide Partner Conference Microsoft has ever held; however, for the first time since the turn of the century, this is the very first Microsoft annual partner conference I will not be attending.
So why won’t I be attending this year? You may recall the “Cloud Partner Insights (CPI): Information –> Insights –> Impact” post I put up here on the blog about the Cloud Partner Insights project I was leading, an entirely new way to drive partner impact and insights that we leveraged across the U.S. business. Well, since that time, the platform has grown beyond its initial set of products to many more; beyond sales to include consumption; beyond the partner business to both partner and customer; and beyond our SMS&P segment to include our EPG and Pub Sec segments as well. Given the expansion, the platform first went through a rebranding from “Cloud Partner Insights” to “Cloud Performance Insights” to better reflect the broader scope. Then, as it continued to grow and expand, it transformed from a reporting suite into a data and analytics business-insights backend platform, bringing together dozens of master data sets from across Microsoft to power in-depth and custom insights capabilities delivered through Power BI across Microsoft, which led to it becoming known as simply the “CPI” platform.
As this project continued to expand and morph, so have my role and its scope. In fact, at the beginning of our Fiscal Year 2017, my role moved from our US SMS&P Partner team over to our US National Sales Excellence Team, focused across the entire US business, including customer and partner, in addition to partnering with our Microsoft Worldwide teams on data analytics and insights. Because of this, for the first time in my Microsoft career, my role is no longer technically a “partner role,” which means I am not one of the individuals who will be attending our annual Microsoft partner conference.
Now, even though I won’t be there in person, I’ll be following along through social media and online, as all of our partners around the world who are unable to attend in person are invited and encouraged to do. Also, even though I will not be running my “Fantastic People of” collection of photos this year (since I won’t be there), please feel free to send over and share your photos from Inspire 2017 with me via social media, as I would love to see them!
Here’s wishing you all an amazing Microsoft Inspire Conference!
We get a lot of questions from our customers about certificate issues during the TLS connection between IoT devices and IoT Hub. So I’m writing this article to explain what you need to know when you are trying to connect your IoT devices to Azure IoT Hub.
IoT Hub requires all device communication to be secured using TLS/SSL (hence, IoT Hub doesn’t support non-secure connections over port 1883). The supported TLS versions are TLS 1.2, TLS 1.1, and TLS 1.0, in that order of preference. Support for TLS 1.0 is provided for backward compatibility only. It is recommended to use TLS 1.2, since it provides the most security.
The client sends a “Client hello” message to the server, along with the client’s random value and supported cipher suites.
The server responds by sending a “Server hello” message to the client, along with the server’s random value.
The server sends its certificate to the client for authentication and may request a certificate from the client. The server sends the “Server hello done” message.
If the server has requested a certificate from the client, the client sends it.
The client creates a random Pre-Master Secret and encrypts it with the public key from the server’s certificate, then sends the encrypted Pre-Master Secret to the server.
The server receives the Pre-Master Secret. The server and client each generate the Master Secret and session keys based on the Pre-Master Secret.
The client sends “Change cipher spec” notification to server to indicate that the client will start using the new session keys for hashing and encrypting messages. Client also sends “Client finished” message.
Server receives “Change cipher spec” and switches its record layer security state to symmetric encryption using the session keys. Server sends “Server finished” message to the client.
Client and server can now exchange application data over the secured channel they have established. All messages sent from client to server and from server to client are encrypted using session key.
In step 3, the server sends its server certificate and may request a client certificate. There are two types of certificate involved during the TLS handshake:
Server side certificate. Sent by Azure IoT Hub Server to client.
Client side certificate. Sent by client to server. It’s optional.
Since IoT Hub doesn’t support mutual authentication, here we just need to discuss server authentication. There are three certificates in play during server authentication; they are linked together to form a certificate chain.
The type, issuer, other info, and expiration for each certificate in the chain:
1. Root CA – issued by Baltimore CyberTrust Root – azure-iot-masterccertsms.der, certs.c, certs.h (part of the Azure IoT SDK) – expires 5/13/2025
2. Intermediate CA – issued by Baltimore CyberTrust – sent by the server – expires 12/20/2017
3. Wildcard (leaf) certificate – issued by Microsoft IT SSL SHA2 – presented for all host names ending in *.azure-devices.net – expires 8/26/2017
Only certificates 2 and 3 are sent by the server as part of the TLS handshake. The client will normally validate the Root CA of the chain and determine whether it is trusted. If the device doesn’t pass verification of the server certificate, it will generally report an “Unknown CA” error in the network log. As listed above, the Root CA is included in the Azure IoT SDK, so you can either install this Root CA on your device or trust this CA explicitly in your device application code. Generally, the CyberTrust Root CA already exists on Windows and most desktop Linux distributions, so you don’t need to install the CA to trust the certificate from IoT Hub. But many tailored embedded Linux devices do not ship the CyberTrust Root CA, and you might not be able to install it into the system. In this case, you should explicitly trust the certificate in your code. In the IoT SDK for C, the sample application includes code like the following, but it is not enabled by default.
#ifdef MBED_BUILD_TIMESTAMP
#include "certs.h"
#endif // MBED_BUILD_TIMESTAMP

#ifdef MBED_BUILD_TIMESTAMP
// For mbed, add the certificate information
if (IoTHubClient_LL_SetOption(iotHubClientHandle, "TrustedCerts", certificates) != IOTHUB_CLIENT_OK)
{
    printf("failure to set option \"TrustedCerts\"\r\n");
}
#endif // MBED_BUILD_TIMESTAMP
Here, MBED_BUILD_TIMESTAMP is only defined on the mbed platform. If you want to enable this code on another platform that doesn’t include the CyberTrust Root CA, you should remove this ifdef or add the definition before compilation.
The certificate data is hardcoded in certs.c. In your real code, you should provide an interface to update it in case the certificate ever changes. Such changes are rare: the certificates sent from IoT Hub are not changeable or configurable by users, but Microsoft might change them for some reason, and even long-lived root certificates may expire or be revoked. If there is no way of updating the certificate on the device, the device may not be able to subsequently connect to IoT Hub (or any other cloud service). Having a means to update the root certificate once the IoT device is deployed effectively mitigates this risk.
Note that Azure China doesn’t use the CyberTrust Root CA; it still uses the WoSign Root CA. This certificate is also included in certs.c, so you don’t need to change anything to connect to Azure China, but this only applies to the IoT SDK for C. In the IoT SDKs for other languages, this certificate might not be included in the sample code, and you should add the correct certificate data explicitly.
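As a generic illustration outside the IoT SDKs (plain Node.js TLS rather than the Azure IoT client API; the hub name and certificate path below are hypothetical), explicitly supplying the trusted root CA looks roughly like this:

import * as fs from "fs";
import * as tls from "tls";

// Load the root CA (e.g. Baltimore CyberTrust Root) that the device image does not ship with.
const rootCa = fs.readFileSync("./baltimore-cybertrust-root.pem"); // hypothetical path

// Open a TLS connection to the IoT Hub endpoint (8883 is MQTT over TLS), trusting only the supplied CA.
const socket = tls.connect(
    { host: "your-hub.azure-devices.net", port: 8883, ca: [rootCa] }, // hypothetical hub name
    () => {
        console.log("TLS handshake completed, authorized:", socket.authorized);
        socket.end();
    }
);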
You can also bypass certificate validation, although this is definitely not recommended. We don’t expose this in the IoT SDK APIs, but you can modify the code in the OpenSSL or WebSocket layer to disable SSL verification if needed for development, or if you know exactly what your case requires.
That’s all I want to discuss in this post. I hope it helps you solve connection issues between your devices and IoT Hub.
Today we’re excited to announce the release of TypeScript 2.4!
If you haven’t yet heard of TypeScript, it’s a superset of JavaScript that brings static types and powerful tooling to JavaScript. These static types are entirely optional and get erased away – you can gradually introduce them to your existing JavaScript code and get around to adding them when you really need them. At the same time, you can use them aggressively to your advantage to catch painful bugs, focus on more important tests that don’t have to do with types, and get a complete editing experience. In the end, you can run TypeScript code through the compiler to get clean, readable JavaScript that targets ECMAScript 3, 5, 2015, and so on.
To get started with the latest stable version of TypeScript, you can grab it through NuGet, or use the following command with npm:
npm install -g typescript
Built-in support for 2.4 should be coming to other editors very soon, but you can configure Visual Studio Code and our Sublime Text plugin to pick up any other version you need.
Dynamic import expressions are a new feature in ECMAScript that allows you to asynchronously request a module at any arbitrary point in your program. These modules come back as Promises of the module itself, and can be await-ed in an async function, or can be given a callback with .then.
What this means, in short, is that you can conditionally and lazily import other modules and libraries to make your application more efficient and resource-conscious. For example, here’s an async function that only imports a utility library when it’s needed:
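A sketch of what that can look like (the module path and getContentAsBlob helper are hypothetical):

async function getZipFile(name: string, files: File[]): Promise<File> {
    // The zip utility module is only loaded when this function actually runs.
    const zipUtil = await import("./utils/create-zip-file"); // hypothetical module
    const zipContents = await zipUtil.getContentAsBlob(files); // hypothetical helper
    return new File([zipContents], name);
}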
Many bundlers have support for automatically splitting output bundles (a.k.a. “code splitting”) based on these import() expressions, so consider using this new feature with the esnext module target. Note that this feature won’t work with the es2015 module target, since the feature is anticipated for ES2018 or later.
String enums
TypeScript has had string literal types for quite some time now, and enums since its release. Having had some time to see how these features were being used, we revisited enums for TypeScript 2.4 to see how they could work together. This release of TypeScript now allows enum members to contain string initializers.
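For example, something along these lines (the enum itself is illustrative):

enum Colors {
    Red = "RED",
    Green = "GREEN",
    Blue = "BLUE",
}

// Members initialized with strings behave like their string values at runtime.
console.log(Colors.Red); // "RED"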
String enums have the benefit that they’re much easier to debug with, and can also describe existing systems that use strings. Like numeric enums and string literal types, these enums can be used as tags in discriminated unions as well.
TypeScript 2.4 has improvements in how types are inferred when generics come into play, as well as improved checking when relating two generic function types.
Return types as inference targets
One such improvement is that TypeScript now can let types flow through return types in some contexts. This means you can decide more freely where to put your types. For example:
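A sketch of the kind of example being described (the arrayMap helper name is illustrative; s and lengths are the names the next paragraph refers to):

function arrayMap<T, U>(f: (x: T) => U): (a: T[]) => U[] {
    return a => a.map(f);
}

// The contextual type of 'lengths' flows back into the call,
// so 's' is inferred as 'string' without an explicit annotation.
const lengths: (a: string[]) => number[] = arrayMap(s => s.length);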
It used to be the case that s would need to be explicitly annotated, or its type would be inferred as {}. While lengths could be left unannotated in that case, it felt surprising to some users that information from that type wasn’t used to infer the type of s.
In TypeScript 2.4, the type system knows s is a string from the type of lengths, which could better fit your stylistic choices.
This also means that some errors will be caught, since TypeScript can find better candidates than the default {} type (which is often too permissive).
let x: Promise<string> = new Promise(resolve => {
    resolve(10);
    //      ~~ Now correctly errors!
});
Stricter checking for generic functions
TypeScript now tries to unify type parameters when comparing two single-signature types. As a result, you’ll get stricter checks when relating two generic signatures which may catch some bugs.
TypeScript has always compared parameters in a bivariant way. There are a number of reasons for this, and for the most part it didn’t appear to be a major issue until we heard more from users about the adverse effects it had with Promises and Observables. Relating two Promises or Observables should use the type arguments in a strictly covariant manner – a Promise<T> can only be related to a Promise<U> if T is relatable to U. However, because of parameter bivariance, along with the structural nature of TypeScript, this was previously not the case.
TypeScript 2.4 now tightens up how it checks two function types by enforcing the correct directionality on callback parameter type checks. For example:
interface Mappable<T> {
    map<U>(f: (x: T) => U): Mappable<U>;
}

declare let a: Mappable<number>;
declare let b: Mappable<string | number>;

a = b; // should fail, now does.
b = a; // should succeed, continues to do so.
In other words, TypeScript now catches the above bug, and since Mappable is really just a simplified version of Promise or Observable, you’ll see similar behavior with them too.
Note that this may be a breaking change for some, but this more correct behavior will benefit the vast majority of users in the long run.
Stricter checks on “weak types”
TypeScript 2.4 introduces the concept of “weak types”. A weak type is any type that contains nothing but all-optional properties. For example, an Options type in which every property is optional is a weak type.
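A sketch of such a type (the property names are illustrative, chosen to line up with the ‘data’/‘maxRetries’ hint in the example below):

interface Options {
    data?: string;
    timeout?: number;
    maxRetries?: number;
}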
In TypeScript 2.4, it’s now an error to assign anything to a weak type when there’s no overlap in properties. That includes primitives like number, string, and boolean.
For example:
function sendMessage(options: Options) {
    // ...
}

const opts = {
    payload: "hello world!",
    retryOnFail: true,
};

// Error!
sendMessage(opts);
// No overlap between the type of 'opts' and 'Options' itself.
// Maybe we meant to use 'data'/'maxRetries' instead of 'payload'/'retryOnFail'.
This check also catches situations like classes that might forget to implement members of an interface:
interface Foo {
    someMethod?(): void;
    someOtherMethod?(arg: number): string;
}

// Error! Did 'Dog' really need to implement 'Foo'?
class Dog implements Foo {
    bark() {
        return "woof!";
    }
}
This change to the type system may introduce some breakages, but in our exploration of existing codebases, this new check primarily catches silent errors that users weren’t aware of.
If you really are sure that a value should be compatible with a weak type, consider the following options:
Declare properties in the weak type that are always expected to be present.
Add an index signature to the weak type (i.e. [propName: string]: {}).
Use a type assertion (i.e. opts as Options).
In the case above where the class Dog tried to implement Foo, it’s possible that Foo was being used to ensure code was implemented correctly later on. You can get around this by declaring them as optional properties of the type never.
class Dog implements Foo {
    // These properties should never exist.
    someMethod?: never;
    someOtherMethod?: never;

    bark() {
        return "woof!";
    }
}
Enjoy!
You can read our full What’s new in TypeScript page on our wiki for more details on this new release. To see a full list of breaking changes, you can look at our breaking changes page as well.
Keep in mind that any sort of constructive feedback that you can give us is always appreciated, and used as the basis of every new version of TypeScript. Any issues you run into, or ideas that you think would be helpful for the greater TypeScript community can be filed on our GitHub issue tracker.
If you’re enjoying TypeScript 2.4, let us know on Twitter with the #iHeartTypeScript hashtag.
Thanks for reading up on this release, and happy hacking!
An Operations Management Suite (OMS) repository can be associated with a single Azure Subscription. Companies that host their products on separate, per-tenant Azure Subscriptions for their customers need to consolidate the health data from all these subscriptions for ease of monitoring and tracking. OMS provides the ability to configure alerts that call a webhook configured on a central Incident Management System, so the IT support team can track and diagnose issues in their solution deployment. However, when no such system is implemented, an alternative approach is to host a common OMS Workspace to which the individual, per-tenant workspaces send their health data using the Data Collector API.
Aggregation of Heartbeat data from Compute Resources in Azure
The PowerShell script VMsHeartbeatAggregator.ps1 that implements the aggregation logic is called from an Azure Runbook. The Runbook executes under an Azure Automation Account and has a scheduler configured to recurrently trigger the PowerShell script.
The PowerShell script executes a Dynamic Query on OMS to retrieve aggregated Heartbeat Data from the VMs and VMSS deployed in that Subscription. This data is then pushed to the Common OMS Workspace using the Data Collector API. This Runbook would be deployed in each of the Azure Subscriptions and the data captured from these OMS Workspaces is aggregated to a single, common OMS Workspace.
The Data Collector API (a REST API) does not insert or update data in the Record Types that OMS provides by default, namely Heartbeat, Event, etc. You must create a custom Record Type into which the data is pushed using the API. Assign a different Record Type name in the Common OMS Workspace for each Source OMS Workspace that sends data.
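For reference, the shape of a Data Collector API call looks roughly like the following TypeScript/Node sketch, adapted from the documented PowerShell approach; the workspace ID, key, and record type names are placeholders, and the exact header and signature format should be verified against the Data Collector API documentation:

import * as crypto from "crypto";

// Build the HMAC-SHA256 signature the Data Collector API expects.
function buildSignature(workspaceKey: string, date: string, contentLength: number): string {
    const stringToSign = `POST\n${contentLength}\napplication/json\nx-ms-date:${date}\n/api/logs`;
    const decodedKey = Buffer.from(workspaceKey, "base64"); // the shared key from the portal is base64-encoded
    return crypto.createHmac("sha256", decodedKey).update(stringToSign, "utf8").digest("base64");
}

// Push a batch of records into a custom Record Type (the service appends "_CL" to the name).
async function postToWorkspace(workspaceId: string, workspaceKey: string, logType: string, records: object[]): Promise<void> {
    const body = JSON.stringify(records);
    const date = new Date().toUTCString();
    const signature = buildSignature(workspaceKey, date, Buffer.byteLength(body, "utf8"));

    const response = await fetch(`https://${workspaceId}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01`, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Log-Type": logType, // e.g. CustomerName + EnvironmentName, surfaced as <name>_CL
            "x-ms-date": date,
            "Authorization": `SharedKey ${workspaceId}:${signature}`,
        },
        body,
    });
    if (!response.ok) {
        throw new Error(`Data Collector API returned ${response.status}`);
    }
}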
View Designer and Custom Views can be used to create the Dashboard in the common OMS Workspace, one for each Source OMS Workspace. Provide hyperlinks in these Dashboards back to the Dashboards in the respective Source OMS Workspace. This lets the Help Desk Team drill down into the raw data in the Source Subscriptions to investigate an issue.
The PowerShell scripts used to implement this scenario can be downloaded from the GitHub Repository here. They are based on the detailed guidance for invoking the Data Collector API using PowerShell, which is covered in the documentation here.
The key steps to be performed to implement this scenario are:
Configure a Log Analytics Resource and add the VMs and VMSS Resources to be monitored to it. An OMS Workspace corresponding to the Log Analytics Resource is created, to which the Heartbeat information is sent. Refer to the Azure documentation here to get started.
Create an Azure Automation Account from the Azure Portal
Within the Automation Account, navigate to Runbooks> Add a new Runbook
Edit the runbook and paste the PowerShell script from VMsHeartbeatAggregator.ps1 into it
To execute this script, certain PowerShell Modules need to be imported first, as shown in the screenshot below. At the prompt, acknowledge the installation of the Dependent Modules as well.
Add a Scheduler to the Runbook created above. Since the PowerShell script uses input parameters, the Scheduler configuration form would prompt for these values. In the scenario implemented here, the input parameters used are:
[Parameter (Mandatory=$true)]
[String] $ResourceGroupName,   # Resource Group that contains the VMs and VMSS being monitored
[Parameter (Mandatory=$true)]
[String] $WorkspaceName,       # Source OMS Workspace name where the Heartbeat data is queried
[Parameter (Mandatory=$true)]
[String] $CustomerName,        # e.g. CustomerX, for whom the solution is deployed
[Parameter (Mandatory=$true)]
[String] $EnvironmentName      # e.g. staging
The values of $CustomerName and $EnvironmentName are concatenated to form the name of the Record Type created in the Common OMS Workspace.
In the PowerShell script, set the values of the following variables pertaining to the Target Common OMS Workspace:
# This is the unique identifier of the Common Workspace in OMS
$CustomerId = "[Enter the Common OMS Workspace ID]"
# The access key to connect to the common OMS Workspace
$SharedKey = "[Enter the Access key required to invoke the Data Collector API on the common OMS Workspace]"
These values can be obtained from the OMS Workspace Data Source Settings Page, as shown in the screenshot below
Execute the Runbook manually, or use the Scheduler to trigger it. View the Console output and ensure that there are no errors.
Navigate to the Common OMS Workspace and use Log Search to view the data inserted using the Data Collector API. Note that the Record Type name is suffixed with _CL by the API. See Screenshot below.
Likewise execute the Runbook in each of the Source Subscriptions and push the data to the corresponding Record Type in the Common OMS Workspace.
Use View Designer or Custom Views in the Common OMS Workspace to create a Dashboard for data inserted for each Record Type
Azure Web App Monitoring with Alerts
The PowerShell script WebAppHealthcheckRunbook.ps1 in this sample is deployed in a Runbook that exposes a Web hook.
An Azure Application Insights resource is configured to execute Web App URL ping tests.
When the outcome is a failure, an alert can be raised that calls the webhook configured for the Runbook and passes details such as the Web App name, resource name, etc. to it.
The PowerShell script then invokes the Data Collector API of the common OMS Workspace and inserts the ping test result data into a custom Record Type.
Guidance on working with webhooks in a Runbook is covered in the Azure documentation here.
The screenshot below shows how Application Insights can be configured to execute Ping Health Checks on a Web App, and how an Alert could be configured to invoke the Webhook exposed by a Runbook.
This post is provided by App Dev Managers Mariusz Kolodziej and Francis Lacroix, who discuss how to automagically deploy a VSTS Private Agent with Azure Resource Manager (ARM) and some PowerShell.
My customers love to use VSTS to enable their DevOps capabilities, but in some cases they are not able to use the Hosted Agents due to security restrictions. In that case, the alternative is to use Private Agents. For a detailed description of the differences between the two configurations, check out this article. In this blog we’ll discuss how to automagically deploy a VSTS Private Agent with Azure Resource Manager (ARM) and some PowerShell.
Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.
This time, I create a Release Definition to complete the CI/CD pipeline. While a build definition defines how to build the source, a release definition defines how to release the compiled bits. I include the following three tasks.
Infrastructure as Code (an ARM template for Azure) to automate the infrastructure.
I already created the App Service via Visual Studio, but I will now automate it by using an ARM template.
Get Template
It’s tedious to write a template from scratch, but you can get one from the Azure Portal.
1. Log in to https://portal.azure.com. To start from scratch, let’s delete the resource group provisioned previously. * Before deleting the environment, please take note of each application setting, as you will need them later.
5. Extract the template.zip file. You will find template.json, which contains the service definitions, and parameters.json, which contains the values for each parameter.
3. Select your build for this release and enable [Continuous deployment]. As you can see, Jenkins is also supported.
5. Select the Azure subscription and click [Authorize]. For Action, select [Create or update resource group], which creates the environment if it does not exist and otherwise updates the settings to match the templates.
Record timing data into a circular buffer (rather than just capturing a fixed duration), then use the System Monitor graph view to select a time region of interest and open that as a timing capture
Timing capture event list can now be ordered by either CPU or GPU execution time
Timing capture GPU timeline uses flame graphs to display nested marker regions
More robust pixel history (many bugfixes)
Fixed crashes caused by HLSL syntax highlighting
Improved callstack resolution performance when opening timing captures
Support for Function Summary, Callgraph, Memory and File IO captures of packaged titles
Private Sub Workbook_Open()
    Worksheets(1).Cells.SpecialCells(xlCellTypeFormulas) ' Workaround: dummy call to the SpecialCells method
    MsgBox "Sheet1:" & Worksheets(1).Cells.SpecialCells(xlCellTypeFormulas).Address
End Sub
You can download patches from the table below. See .NET Framework Monthly Rollups Explained for an explanation on how to use this table to download patches from Microsoft Update Catalog.
Our development team has been working hard on implementing a much requested automation scenario for PowerShell in Configuration Manager, and that’s being able to create and modify task sequence steps.
Task sequence editing has three separate pieces: groups of steps, commands (such as “Install Application” and “Partition Disk”), and conditions. With 1706 Current Branch and Technical Preview we now have PowerShell support for creating and removing groups, all conditional statements, and what have been identified as the most commonly used task sequence steps.
The typical flow is something like this:
Create your task sequence steps and groups (New-CMTaskSequenceStepCommand)
Create a task sequence (New-CMTaskSequence)
Add the steps you’ve created to the task sequence ($ts | Add-CMTaskSequenceStep –Step ($step1, $step2, $step3))
In 1706, the following Step types are supported for Get, New, Remove, and Set operations:
Run command line (Verb-CMTaskSequenceStepRunCommandLine)
select CIG.[Description], STK.[Name], STS.[Progress],
  CASE STS.[StateType] WHEN 3 THEN '3-Processing' WHEN 5 THEN '5-Complete' WHEN 7 THEN '7-Error' END AS StateType,
  DATEADD(minute, DATEDIFF(minute, GETUTCDATE(), GETDATE()), STS.[LastRunTime]) as LocalLastRunTime,
  DATEADD(minute, DATEDIFF(minute, GETUTCDATE(), GETDATE()), STS.[NextRunTime]) as LocalNextRunTime,
  STRG.[Interval],
  CASE STRG.[UnitOfMeasure] WHEN 1 THEN 'Seconds' WHEN 2 THEN 'Minutes' WHEN 3 THEN 'Hours' WHEN 4 THEN 'Days' END AS UnitOfMeasure,
  STRG.[IsEnabled]
from [Scheduling].[Task] STK with (nolock)
inner join [Scheduling].[TaskState] STS with (nolock) on STK.[Id] = STS.[TaskId]
inner join [Connector].[IntegrationGroup] CIG with (nolock) on CIG.[IntegrationId] = STK.[CategoryId]
inner join [Scheduling].[Trigger] STRG with (nolock) on STK.[TriggerId] = STRG.[Id]
order by CIG.[Description], STK.[Name];
[Task execution results (checking the number of target records)]
select CIG.[Description], ST.[Name], SM.[Text], SM.[KEY] as MsKey,
  DATEADD(minute, DATEDIFF(minute, GETUTCDATE(), GETDATE()), SL.[StartTime]) as LocalStartTime,
  DATEADD(minute, DATEDIFF(minute, GETUTCDATE(), GETDATE()), SL.[EndTime]) as LocalEndTime,
  SL.[TotalRetryNumber], SL.[IsFailed], STT.[Name] as TaskType
from [Scheduling].[Log] SL with (nolock)
inner join [Scheduling].[Task] ST with (nolock) on SL.TaskId = ST.Id
inner join [Scheduling].[Message] SM with (nolock) on SL.Id = SM.LogId
inner join [Scheduling].[TaskType] STT with (nolock) on ST.TypeId = STT.Id
inner join [Connector].[IntegrationGroup] CIG with (nolock) on CIG.[IntegrationId] = ST.[CategoryId]
order by SL.[StartTime] desc;
One of the value propositions of using containers with Service Fabric is that you can now deploy IIS based applications to the SF cluster. In this blog post, we will see how to leverage docker containers to deploy IIS apps to Service Fabric. I will skip the image creation and publish to the docker hub in this blog. Please reference my earlier blog to learn more about creating and pushing images.
For this blog, I will be using an already pushed IIS image to the docker Hub. The application image uses microsoft/iis as a base image.
Let’s get started.
Step 1. Open Visual Studio 2017 and create a Service Fabric Application.
Step 2. Now choose the Guest Container feature, provide a valid image name, and click OK. The image name is the one we published to Docker Hub in the previous exercise.
Step 5. We are ready to publish our application to the Service Fabric cluster now. Right-click on the application and publish it to Azure Service Fabric. Make sure that when you create the Service Fabric cluster, you pick the option to create it with Windows Server 2016 with Containers.
Step 7. Now, let’s browse to our application on Service Fabric. You should see your IIS application running on Service Fabric.
As you saw in this blog post, we can use Service Fabric for container orchestration for both newer and legacy applications. More is coming in the next blog on container orchestration capabilities like DNS, scaling, labels, etc. Stay tuned!
Last week on On .NET, we were joined by Brett Morrison, an entrepreneur and corporate executive who develops on the Microsoft .NET platform. He founded startups such as Onestop and ememories, and has also worked at SpaceX.
Package of the week: DateTime Extensions
Most date-related calculations in applications are simple and not very complicated. But a calculation such as “How many public holidays are there this month?” is a bit more involved. In cases like this, the DateTime Extensions project can help. The library currently has accurate public-holiday data for 24 cultures, which you can use in your calculations.
Kisu Song, Technical Director, OpenSG. He is currently the technical director of the development consulting firm OpenSG and runs projects across a range of industries. Before joining, he was a trainer, teaching .NET developer courses at the Samsung Multicampus training center and elsewhere, and since 2005 he has spoken at developer conferences such as TechED Korea, DevDays, and MSDN Seminar. These days he is a “Happy Developer” who believes he can be happy by spending most of each working day in Visual Studio, writing about one book a year, and teaching a couple of classes a month.
In this post, Application Development Manager, Deepa Chandramouli shared some tips on getting the most of your Premier Support for Developers contract.
Microsoft Premier Support manages the highest tier support programs from Microsoft. Premier Support for Developers (PSfD) empowers developers and enterprises to plan, build, deploy and maintain high quality solutions. When you purchase a Premier Support for Developers contract from Microsoft, an Application Development Manager(ADM) is assigned. He or she will guide you to use the contract in an efficient way that will benefit your developers and the business.
Premier Support for Developers and your ADM do not replace a development team; rather, they complement your team and help with best-practice guidance, product and technology roadmaps, and future-proofing your solutions. Your ADM becomes a trusted advisor and a persistent point of contact into Microsoft, with the technical expertise to understand your development needs and pain points and to recommend services that are right for you.
A Premier Support contract can be leveraged to validate architecture, perform design/code reviews against best practices and help teams to ramp up on the new technology as needed. As with any Premier Support relationship, customers have ways to engage support — Reactive and Proactive.
Reactive Support – Reactive, or Problem Resolution Support, provides a consistent way to engage Microsoft and open support cases when you run into issues with any Microsoft product or service still covered under the Lifecycle Policy. You can use http://support.microsoft.com or call 1-800-936-3100 to open a support case with Premier.
Proactive Support – Proactive, or Support Assistance, is used for advisory consulting engagements and training. Examples include best-practice guidance, code reviews, migration assessments, training, etc.
A common misconception about proactive support is that it is only meant to be used for training and workshops. It’s also common practice to use proactive hours for remediation work that comes out of critical reactive support issues. There are many types of services and engagements customers can leverage through proactive hours to reduce the likelihood of reactive issues in the future. We understand that ONE SIZE DOESN’T FIT ALL, so most of the services can be customized to fit your needs. As with any successful project, the key to getting the most out of your investment in Premier Support is planning, planning, and more planning ahead of time with your ADM.
Premier proactive services can be grouped into 3 broad categories.
Assess – Assessments are a great place to start since the results drive other engagements and services. If you don’t know where to start using Premier, start with an assessment of your most critical workload that has pain points. These findings can help align and prioritize next steps and Premier Services that can help.
Operate – Operate is the next step after assessment, helping to address issues with applications and infrastructure, whether in the front end, middle tier, or database. For example, a performance assessment could lead to optimizing stored procedures. The SQL Performance and Optimization Clinic is a huge favorite of a lot of Premier customers because it addresses performance issues as well as educates developers on how to address bottlenecks in the future.
Educate – Educate is focused on empowering developers with the skills and the tools they need to deliver successful applications. You have access to Premier open-enrollment workshops and webcasts that you can register for at any time. There is a broad list of topics available that your ADM can share with you on a regular basis. You can also plan custom training that is more focused and targeted to your needs and related to the projects your team is currently working on.
This is only a small subset of services, to give you an idea of how best to use the Premier Support for Developers contract. Application Development Managers (ADMs) can provide more information on each of these topics and the full list of services that apply to your specific needs and environment. Another strong value proposition of Premier Support for Developers is custom engagements that cater to your needs and help achieve your goals.