
Why are we still not unit testing?

Unit testing. At least unit testing.

I thought this was a solved problem, people. Agile has come out of the shadows and dominated the techie buzzword machine for years now. Everyone is doing Scrum now, right?

Well, OK, they’re usually doing “ScrumBut”. “We’re doing Scrum, but we’re not demoing our software to the customer.” Or, “We’re doing Scrum, BUT we’re not doing the team code ownership thing.” Usually, it just means that some of the team gathers on a semi-regular basis to do a “standup”. But that’s a rant for another time.

Unit testing. Why are we still not writing tests? I was invited by a US state a couple years ago to review some software for which they were getting ready to shell out a substantial sum of money. They wanted to know if the code was worth the dough. They owned the code – but there was not a single unit test in all of the 200K+ lines of code for the product. When we presented our findings, the lack of tests, which could assure the customer that the code was executing according to the requirements, was a primary factor in their rejection of the product.

Another customer asked us to assist them in migrating several dozen apps from on-premises to Azure. We went through dozens of interviews, and not one of those apps had unit testing. Each interview sounded the same refrain: “We’re going to start testing soon.”  “That’s something we’re working on.”  “We know that we haven’t done the best job with testing.” If they are to take advantage of the DevOps scenarios we’re proposing, they’ll need to be assured that their software is performing the way they think it should (i.e., it’s tested) before they can even consider DevOps.

We all see the benefits of testing, right? Are we lazy? Are we pressured by management to “just build features” because there’s “no time for testing”? Unless you just don’t care whether your code works, that’s got to cause all sorts of anxiety. Sometimes this happens, though, even at Microsoft Consulting Services. Several months ago I was asked to review one of our projects that was experiencing… “customer dissatisfaction”, let’s say. No tests. Even at MCS? We have our own Playbook that stresses the importance of testing and says that if testing is not supported by the project or customer, that is reason to walk away from that customer.

Got tests? Ah, I can finally sleep at night. Peace. Assurance – and dare I say, “rigor”?

I’ve been doing this agile thing for about 14 years now. I watch trends, I practice the craft – and that practice builds its own momentum. I admit, I am not objective when it comes to testing. You had better have some major facts behind your assertion (no testing pun intended) that testing is contraindicated for your software, and they had better be easy to see, because, like I said, I’ve lived this for the past 14 years. I see the benefits. I am comforted by the fact that the people maintaining the code I’ve written over the years – my code, the team’s code – can be assured that it’s still working, because all they have to do is run the tests. Green is good.


Application Insights Planned Maintenance – 06/15 – Final Update

Final Update: Friday, 23 June 2017 22:05 UTC

Maintenance has been completed on infrastructure for Application Insights Availability Web Test feature.
Necessary updates were installed successfully on all nodes, supporting Availability Web Test feature.

-Deepesh

Planned Maintenance: 17:00 UTC, 20 June 2017 – 00:00 UTC, 24 June 2017

The Application Insights team will be performing planned maintenance on Availability Web Test feature. During the maintenance window we will be installing necessary updates on underlying infrastructure.

During this timeframe some customers may experience very short availability data gaps in one test location at a time. We will make every effort to limit the amount of impact to customer availability tests, but customers should ensure their availability tests are running from at least three locations to ensure redundant coverage through maintenance. Please refer to the following article on how to configure availability web tests: https://azure.microsoft.com/en-us/documentation/articles/app-insights-monitor-web-app-availability/


We apologize for any inconvenience

vswhere version 2.0 released

One of the features of the new setup engine is to enable Visual Studio to be installed side by side with other releases, including minor upgrades and preview releases. After we introduced the Visual Studio Preview channel a short while ago, some partners asked for the ability to filter those Preview releases out of vswhere.exe. Because this is a breaking change in behavior, I have bumped the major version to 2, following semantic versioning guidelines. I also realized that in the past I have added new features without bumping the minor version, and I will be more diligent going forward.

vswhere.exe version 2.0.2 was released last night to nuget.org (great for automating build and test environments, which the vswhere project itself does), chocolatey.org, and its own GitHub release page. It adds a new switch parameter, -prerelease, which will include prereleases such as Preview builds. This only works when using query API version 1.11.2290 or newer, which is shown in the logo for vswhere.exe when executed (starting with version 1.0.62). Specifically, it is included in the header after the query API is loaded, which currently, for performance reasons, does not happen when displaying the help text with the -help switch or when a syntax error is detected.

This new query API will be released in Visual Studio 2017 version 15.3.

The output of vswhere.exe when using the new query API will also include a boolean property IsPrerelease. This is also true of the VSSetup PowerShell cmdlets, which add a -Prerelease switch parameter to the Get-VSSetupInstance cmdlet. The PowerShell cmdlets also include information about the catalog that was last used to install or update the instance, including the semver productSemanticVersion, which supports the IsPrerelease property.
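For illustration, here is a minimal PowerShell sketch of how the new switch might be used. The vswhere.exe path below is an assumption (the default installer location); adjust it to wherever you placed the tool, and the selected property names follow the description above.

# Query installed instances, including Preview builds, as JSON.
& "${env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\vswhere.exe" -prerelease -format json

# The VSSetup PowerShell module exposes the equivalent switch.
Install-Module VSSetup -Scope CurrentUser
Get-VSSetupInstance -Prerelease |
    Select-Object InstallationPath, InstallationVersion, IsPrerelease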

Based on telemetry, we expect this change in behavior to affect very few developers, and it seems most wanted this behavior anyway. While our Preview releases are well tested, we want to give partners using vswhere.exe the ability to select only releases that have been out for a while.

Troubleshooting Connectivity Common Scenarios

Connectivity troubleshooting.

Connectivity issues are quite common with Azure SQL Database and with SQL in general. We have a great article that can assist in resolving these:

https://support.microsoft.com/en-us/help/10085/troubleshooting-connectivity-issues-with-microsoft-azure-sql-database

This article is great, and I encourage you to review it and use it when you have a connectivity problem in the future. However, I’d also like to explain how I go about troubleshooting customer issues when connecting to Azure SQL Database is a problem, and some of the common causes.

The first question I always ask is ‘Where are you connecting from?’ At first sight this may seem a trivial question, but it makes a lot of difference. As a first step, I usually suggest SSMS as a good tool to test with.

The three obvious answers are

  1. a VM within Azure itself
  2. a machine located inside a corporate network
  3. a machine connected to the internet

Scenario 1:

Connections from 1 fail, while 2 and 3 work without error. This is quite a common issue and catches a lot of users out. Fortunately, it’s a straightforward issue to resolve. In this situation we need to remember that within Azure there is a setting to ‘allow Azure services to connect’, but even with this set we might not be good to go. If connection is still not possible, in most cases the problem is going to be that the ports on the VM are locked down. We all know that port 1433 needs to be open, which it undoubtedly is; but while connections from outside Azure will probably be happy with this, inside Azure the port ranges 11000-11999 and 14000-14999 should also be open. This is because connections from inside Azure make an initial connection over port 1433 and then negotiate, from the ranges given, the port they will actually use; connections from outside of Azure go via the gateway, which masks this behaviour.

This is great if your client running on the VM supports TDS 7.4 AND also supports redirection; I’ve found that not all drivers supporting TDS 7.4 do. In that case we can set the connection policy to Proxy to avoid redirection.

https://msdn.microsoft.com/en-us/library/azure/mt604439.aspx?f=255&MSPPError=-2147217396

It can be set either with a REST PUT request or via PowerShell.

Example:

Set-AzureRmResource -ResourceId /subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Sql/servers/<server name>/connectionPolicies/Default -Properties @{connectionType="Proxy"}

The accepted values for connectionType are Proxy, Redirect or Default.

Under Default, connections behave as documented in https://docs.microsoft.com/en-us/azure/sql-database/sql-database-develop-direct-route-ports-adonet-v12

 

Scenario 2:

Connections from 2 fail, but 1 and 3 work without error. This is almost certainly down to the corporate firewall: you need to ensure that port 1433 is open, and also that the IP address ranges for Azure are allowed.

Be aware that the ranges are different for each data centre, so you need to confirm the right range is open.

You can download the latest ranges from here:

https://www.microsoft.com/en-gb/download/details.aspx?id=41653

 

Scenario 3:

Nothing works. This is the worst-case scenario, but often the easiest to resolve; the important thing is to understand the error. It may be that you see something like error 40 or 53, which is usually down to name resolution. So the first test is a simple ping of the server name from a command prompt.

A successful ping shows the name resolving back to an IP address and a data centre (useful for knowing which IP ranges you need). The request timing out is fine and can be ignored; it is the name resolution we are looking for here.
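If you prefer PowerShell, a quick sketch of the same name-resolution check might look like this; the server name is a placeholder, so substitute your own.

# Hypothetical server name - substitute your own Azure SQL Database server.
$server = 'myserver.database.windows.net'

# Resolve the name; the CNAME chain also reveals the regional gateway,
# which tells you which IP ranges you need to allow.
Resolve-DnsName $server

# ICMP is blocked, so the ping timing out is expected; we only care that the name resolves.
ping.exe $server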


Scenario 4:

Azure SQL Database has its own firewall. Usually, from inside Azure this doesn’t come into play unless you have set ‘Don’t allow Azure services to connect’. These firewalls exist at two levels: one at the server and one at the database. A troubleshooting step I often recommend is to open all addresses (0.0.0.0 – 255.255.255.255), which should wave everyone through. A useful feature of SSMS is that if you are an admin and you attempt to connect to Azure SQL Database from an IP address that is not allowed, you will be asked if you want to add a rule to allow your IP or subnet to connect.


 

So you have the option to sign in and create a rule for either your IP address or subnet range.

It’s worth remembering that you can set the Azure SQL Database firewall at either the server or the database level; details on the firewall can be found here:

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-firewall-configure
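As a sketch, a server-level rule can also be added from PowerShell with the AzureRM.Sql cmdlets; the resource group, server, and IP values below are placeholders.

# Placeholders - substitute your own resource group, server, and client IP.
New-AzureRmSqlServerFirewallRule -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'myserver' `
    -FirewallRuleName 'AllowMyClient' `
    -StartIpAddress '203.0.113.10' -EndIpAddress '203.0.113.10'

# List the current server-level rules to confirm.
Get-AzureRmSqlServerFirewallRule -ResourceGroupName 'MyResourceGroup' -ServerName 'myserver'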

 

Scenario 5:

In SSMS you can get as far as connecting, and then an error such as 18456 is thrown. This can be caused by a number of things; indeed, 18456 is traditionally a ‘catch all’ authentication error. The state of the error can give a good idea of what is going on. You can see the list of states here:

https://msdn.microsoft.com/en-us/library/cc645917.aspx

Often this will be down to a bad user name or password, or it may be that the login has been created on the server but a corresponding user has not been created in the master DB or in the required user DB.

Scenario 6:

The above are all ‘hard’ failures, insofar as you will either connect or you won’t; that is the ‘easy’ case. It is when you get connection issues that come and go that your work is harder.

As we know, databases can scale in Azure SQL Database, allowing some quite large DBs to be hosted, but this creates an issue. Take, for example, an Azure SQL Database server with several Premium tier databases and thousands of connections occurring. All OK, I hear you say: they are all Premium DBs, what could go wrong? Well, there is a problem: all of the authentication may be going via the master DB, and at times of stress you can overload the master DB. This is not good at all, but there is a solution: contained databases, where the users in the master DB and the user DB are separated. This puts much less load on the master DB. There is another really important side effect: the geo-replication feature allows DBs to replicate across regions (because we all want our data to be safe), and if you use contained databases then your logins/users get replicated too, so you don’t need to worry about moving users between master DBs (we call that a win). As an aside, geo-replication now has a listener which can automate failover.
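For reference, creating a contained database user can be scripted with Invoke-Sqlcmd; this is only a rough sketch, and the server, database, credentials, and user names below are placeholders.

# Placeholders - substitute your server, database, admin credentials, and new user details.
$server = 'myserver.database.windows.net'

# Create a contained database user directly in the user database,
# so authentication does not go through master.
Invoke-Sqlcmd -ServerInstance $server -Database 'MyUserDb' `
    -Username 'serveradmin' -Password 'AdminPassword!' `
    -Query "CREATE USER [appuser] WITH PASSWORD = 'StrongPassword!1';"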

Having covered these scenarios, the one single bit of advice I can give is to ensure that you have retry logic implemented within your application. There are a number of situations that will require your database to be taken offline for short periods of time; if you get your retry logic in place, most of these will simply never appear as a problem to your applications.
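As a rough sketch of that advice in PowerShell (your application will more likely use the retry support built into its own data access library), assuming a placeholder connection string:

# Placeholder connection string - substitute your own.
$connectionString = 'Server=tcp:myserver.database.windows.net,1433;Database=MyUserDb;User ID=appuser;Password=StrongPassword!1;Encrypt=True;'

$maxAttempts = 5
for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    try {
        $connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
        $connection.Open()
        # ... run your commands here ...
        $connection.Close()
        break   # success - stop retrying
    }
    catch {
        if ($attempt -eq $maxAttempts) { throw }
        # Transient errors (failovers, reconfigurations) usually clear quickly;
        # back off a little longer on each attempt.
        Start-Sleep -Seconds (2 * $attempt)
    }
}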

The End of an Era

Here we are, just a couple weeks from my favorite week of the year, our annual Microsoft partner conference, now known as Inspire. Over the course of my career at Microsoft, I have had the extreme pleasure of working with so many amazing partners, both the individuals at these partners and the organizations themselves. A couple of years ago, I put up my “It has been a fun 15 years” post, covering many of these incredible opportunities I have had, including the creation of the “Fantastic People of WPC,” celebrating the opportunity to meet so many amazing partners from around the world during the Worldwide Partner Conference.

Over the years at Worldwide Partner Conference, there have been so many fun times, such as:

  • The annual contest to see who would be the Top 5 live tweeters covering the WPC keynotes and conference
  • Hanging out and connecting with all of the press and influencers while covering the WPC keynotes from the WPC press box
  • Hosting the MPN Live shows talking with partners from around the world
  • Annual tweet-ups, connecting partners and partner influencers
  • Virtual scavenger hunts I’ve run using social media and/or my Microsoft Partner mobile app
  • All of the partner receptions and meet ups
  • And much more…

As this year’s Partner conference approaches, now known as Inspire instead of Worldwide Partner Conference, I’ve been talking with several partners and received some comments and questions such as:

  • “We need to make sure we grab our annual pic this year”
  • “Looking forward to being part of the Fantastic People this year”
  • “Are you changing the name of your Fantastic People of WPC now that the conference is renamed?”

You know what they say, “All good things eventually come to an end,” and this year brings with it the end of an era. With the transition of WPC to Inspire this year, I can still say that I have been to every single Worldwide Partner Conference Microsoft has ever held; however, for the first time since the turn of the century, this is the very first Microsoft annual partner conference I will not be attending.

So why won’t I be attending this year? You may recall the “Cloud Partner Insights (CPI): Information –> Insights –> Impact” post I put up here on the blog about the Cloud Partner Insights project I was leading, which was an entirely new way to drive partner impact and insights across our business that we leveraged across the U.S. business. Well since that time, the platform has grown beyond its initial set of products to many more, beyond sales to include consumption, beyond the partner business to partner and customer, and beyond our SMS&P segment to include our EPG and Pub Sec segments as well. Given the expansion, the platform went through an initial rebranding from “Cloud Partner Insights,” to “Cloud Performance Insights” to better reflect the broader scope. Then as it continued to grow and expand, it transformed from a reporting suite into a data and analytics Business Insights backend platform bringing together dozens of master data sets from across Microsoft to power and enable in-depth and custom insights capabilities delivered through Power BI across Microsoft, which led to it becoming known as simply the “CPI” platform.


As this project continued to expand and morph, so have my role and its scope. In fact, at the beginning of our Fiscal Year 2017, my role moved from our US SMS&P Partner team over to our US National Sales Excellence Team, focused across the entire US business, including customer and partner, in addition to partnering with our Microsoft Worldwide teams on data analytics and insights. Because of this, for the first time in my Microsoft career, my role is no longer technically a “partner role,” which means I am not one of the individuals who will be attending our annual Microsoft partner conference.

Now even though I won’t be there in person, I’ll be following along through social media and online, as all of our partners around the world who are unable to attend in person are invited and encouraged to do. And even though I will not be running my “Fantastic People of” collection of photos this year (since I won’t be there), please feel free to send over and share your photos from Inspire 2017 with me via social media; I would love to see them!

Here’s wishing you all an amazing Microsoft Inspire Conference!


Eric Ligman

Director – Sales Excellence
Microsoft Corporation

Follow me on: TWITTER, LinkedIn, Facebook

 

This posting is provided “AS IS” with no warranties, and confers no rights 

Certificate between IoT hub and devices connection

We get a lot of questions from our customers about certificate issues during the TLS connection between IoT devices and IoT Hub, so I am writing this article to cover what you need to know when you are trying to connect your IoT devices to Azure IoT Hub.

IoT Hub requires all device communication to be secured using TLS/SSL (hence, IoT Hub doesn’t support non-secure connections over port 1883). The supported TLS versions are TLS 1.2, TLS 1.1, and TLS 1.0, in that order of preference. Support for TLS 1.0 is provided for backward compatibility only. It is recommended to use TLS 1.2, since it provides the most security.

Firstly, let’s have a look at the process of TLS handshake. Here I quote the steps from this link: https://msdn.microsoft.com/en-us/library/windows/desktop/aa380513(v=vs.85).aspx

  1. The client sends a “Client hello” message to the server, along with the client’s random value and supported cipher suites.
  2. The server responds by sending a “Server hello” message to the client, along with the server’s random value.
  3. The server sends its certificate to the client for authentication and may request a certificate from the client. The server sends the “Server hello done” message.
  4. If the server has requested a certificate from the client, the client sends it.
  5. The client creates a random and encrypts it with the public key from the server’s certificate, sending the encrypted Pre-Master Secret to the server.
  6. The server receives the Pre-Master Secret. The server and client each generate the Master Secret and session keys based on the Pre-Master Secret.
  7. The client sends “Change cipher spec” notification to server to indicate that the client will start using the new session keys for hashing and encrypting messages. Client also sends “Client finished” message.
  8. Server receives “Change cipher spec” and switches its record layer security state to symmetric encryption using the session keys. Server sends “Server finished” message to the client.
  9. Client and server can now exchange application data over the secured channel they have established. All messages sent from client to server and from server to client are encrypted using session key.

In step 3, the server sends its certificate and may request a certificate from the client. There are two types of certificate involved in the TLS handshake:

  1. The server-side certificate, sent by the Azure IoT Hub server to the client.
  2. The client-side certificate, sent by the client to the server. This one is optional.

Please note that IoT Hub does not support X.509 mutual authentication as of the writing of this article, so no client certificate is required. But we do provide a mechanism to authenticate a device with IoT Hub using an X.509 certificate. You can refer to the following link for more detail: https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-security#supported-x509-certificates

Since IoT Hub doesn’t support mutual authentication, we only need to discuss server authentication here. There are three certificates in play during server authentication, and they are linked together as a chain:

  1. Root CA – Issued by: Baltimore CyberTrust Root – Location: azure-iot-masterccertsms.der, certs.c, certs.h (part of Azure IoT SDK) – Expires: 5/13/2025
  2. Intermediate CA – Issued by: Baltimore CyberTrust – Sent by the server – Expires: 12/20/2017
  3. Wildcard (leaf) certificate – Issued by: Microsoft IT SSL SHA2 – Sent for all host names ending in *.azure-devices.net – Expires: 8/26/2017

 

Only 2 and 3 are sent by the server as part of the TLS handshake. The client normally validates the Root CA of the chain to determine whether it is trusted. If the device doesn’t pass verification of the server certificate, it will generally report an “Unknown CA” error in the network log. As you can see from the list above, the Root CA is included in the Azure IoT SDK, so you can either install this Root CA on your device or trust this CA explicitly in your device application code. Generally, the CyberTrust Root CA already exists on Windows and most desktop Linux distributions, so you don’t need to install the CA to trust the certificate from IoT Hub. But many embedded Linux devices are trimmed down, the CyberTrust Root CA is not provided, and you might not even be able to install it into the system. In that case, you should explicitly trust the certificate in your code. In the IoT SDK for C, the sample applications include the related code shown below, but it is not enabled by default.

#ifdef MBED_BUILD_TIMESTAMP
#include "certs.h"
#endif // MBED_BUILD_TIMESTAMP

#ifdef MBED_BUILD_TIMESTAMP
// For mbed add the certificate information
if (IoTHubClient_LL_SetOption(iotHubClientHandle, "TrustedCerts", certificates) != IOTHUB_CLIENT_OK)
{
    printf("failure to set option \"TrustedCerts\"\r\n");
}
#endif // MBED_BUILD_TIMESTAMP

Here, MBED_BUILD_TIMESTAMP is only defined on the mbed platform. If you want to enable this code on another platform that doesn’t include the CyberTrust Root CA, you should remove the ifdef or add the definition before compilation.

The certificate data is hardcoded in certs.c. In your real code, you should provide an interface to adjust it in case a certificate ever changes. There aren’t many cases that need such a change: the certificates sent from IoT Hub are not changeable or configurable by users. But Microsoft might change the certificate for some reason, and the root certificates may still expire or be revoked even though they are long-lived. If there is no way of updating the certificate on the device, the device may not be able to connect to IoT Hub (or any other cloud service) afterwards. Having a means to update the root certificate once the IoT device is deployed will effectively mitigate this risk.

Note that Azure China doesn’t use the CyberTrust Root CA; it uses the WoSign Root CA instead. This certificate is also included in certs.c, so you don’t need to change anything to connect to Azure China. But this only applies to the IoT SDK for C; in the IoT SDKs for other languages, this certificate might not be included in the sample code, and you should add the correct certificate data explicitly.

You can also bypass certificate validation, although it’s definitely not recommended. We don’t expose this in the IoT SDK APIs, but you can modify the code in the OpenSSL or websocket layer to disable SSL verification if needed for development, or if you know exactly what your scenario requires.

That’s all I want to discuss in this post. I hope it helps solve connection issues between devices and IoT Hub.

Announcing TypeScript 2.4

Today we’re excited to announce the release of TypeScript 2.4!

If you haven’t yet heard of TypeScript, it’s a superset of JavaScript that brings static types and powerful tooling to JavaScript. These static types are entirely optional and get erased away – you can gradually introduce them to your existing JavaScript code, and get around to adding them when you really need to. At the same time, you can use them aggressively to your advantage to catch painful bugs, focus on more important tests that don’t have to do with types, and get a complete editing experience. In the end, you can run TypeScript code through the compiler to get clean, readable JavaScript – ECMAScript 3, 5, 2015, and so on.

To get started with the latest stable version of TypeScript, you can grab it through NuGet, or use the following command with npm:

npm install -g typescript

Visual Studio 2015 users (who have Update 3) will be able to get TypeScript by simply installing it from here. Visual Studio 2017 users using Update 2 will be able to get TypeScript 2.4 from this installer.

Built-in support for 2.4 should be coming to other editors very soon, but you can configure Visual Studio Code and our Sublime Text plugin to pick up any other version you need.

While our What’s New in TypeScript page as well as our 2.4 RC blog post may be a little more in-depth, let’s go over what’s here in TypeScript 2.4.

Dynamic import() expressions

Dynamic import expressions are a new feature in ECMAScript that allows you to asynchronously request a module at any arbitrary point in your program. These modules come back as Promises of the module itself, and can be await-ed in an async function, or can be given a callback with .then.

What this means, in short, is that you can conditionally and lazily import other modules and libraries to make your application more efficient and resource-conscious. For example, here’s an async function that only imports a utility library when it’s needed:

async function getZipFile(name: string, files: File[]): Promise<File> {
    const zipUtil = await import('./utils/create-zip-file');
    const zipContents = await zipUtil.getAsBlob(files);
    return new File(zipContents, name);
}

Many bundlers have support for automatically splitting output bundles (a.k.a. “code splitting”) based on these import() expressions, so consider using this new feature with the esnext module target. Note that this feature won’t work with the es2015 module target, since the feature is anticipated for ES2018 or later.

String enums

TypeScript has had string literal types for quite some time now, and enums since its release. Having had some time to see how these features were being used, we revisited enums for TypeScript 2.4 to see how they could work together. This release of TypeScript now allows enum members to contain string initializers.

enum Colors {
    Red = "RED",
    Green = "GREEN",
    Blue = "BLUE",
}

String enums have the benefit that they’re much easier to debug with, and can also describe existing systems that use strings. Like numeric enums and string literal types, these enums can be used as tags in discriminated unions as well.

enum ShapeKind {
    Circle = "circle",
    Square = "square"
}

interface Circle {
    kind: ShapeKind.Circle;
    radius: number;
}

interface Square {
    kind: ShapeKind.Square;
    sideLength: number;
}

type Shape = Circle | Square;

Improved checking for generics

TypeScript 2.4 has improvements in how types are inferred when generics come into play, as well as improved checking when relating two generic function types.

Return types as inference targets

One such improvement is that TypeScript now can let types flow through return types in some contexts. This means you can decide more freely where to put your types. For example:

function arrayMap<T, U>(f: (x: T) => U): (a: T[]) => U[] {
    return a => a.map(f);
}

const lengths: (a: string[]) => number[] = arrayMap(s => s.length);

It used to be the case that s would need to be explicitly annotated, or its type would be inferred as {}. While lengths could be left unannotated in that case, it felt surprising to some users that information from that type wasn’t used to infer the type of s.

In TypeScript 2.4, the type system knows s is a string from the type of lengths, which could better fit your stylistic choices.

This also means that some errors will be caught, since TypeScript can find better candidates than the default {} type (which is often too permissive).

let x: Promise<string> = new Promise(resolve => {
    resolve(10);
    //      ~~ Now correctly errors!
});

Stricter checking for generic functions

TypeScript now tries to unify type parameters when comparing two single-signature types. As a result, you’ll get stricter checks when relating two generic signatures which may catch some bugs.

type A = <T, U>(x: T, y: U) => [T, U];
type B = <S>(x: S, y: S) => [S, S];

function f(a: A, b: B) {
    a = b;  // Error
    b = a;  // Ok
}

Strict contravariance for callback parameters

TypeScript has always compared parameters in a bivariant way. There are a number of reasons for this, and for the most part it didn’t appear to be a major issue until we heard more from users about the adverse effects it had with Promises and Observables. Relating two Promises or Observables should use the type arguments in a strictly covariant manner – a Promise<T> can only be related to a Promise<U> if T is relatable to U. However, because of parameter bivariance, along with the structural nature of TypeScript, this was previously not the case.

TypeScript 2.4 now tightens up how it checks two function types by enforcing the correct directionality on callback parameter type checks. For example:

interface Mappable<T> {
    map<U>(f: (x: T) => U): Mappable<U>;
}

declare let a: Mappable<number>;
declare let b: Mappable<string | number>;

a = b; // should fail, now does.
b = a; // should succeed, continues to do so.

In other words, TypeScript now catches the above bug, and since Mappable is really just a simplified version of Promise or Observable, you’ll see similar behavior with them too.

Note that this may be a breaking change for some, but this more correct behavior will benefit the vast majority of users in the long run.

Stricter checks on “weak types”

TypeScript 2.4 introduces the concept of “weak types”. A weak type is any type that contains nothing but all-optional properties. For example, this Options type is a weak type:

interface Options {
    data?: string,
    timeout?: number,
    maxRetries?: number,
}

In TypeScript 2.4, it’s now an error to assign anything to a weak type when there’s no overlap in properties. That includes primitives like number, string, and boolean.

For example:

function sendMessage(options: Options) {
    // ...
}

const opts = {
    payload: "hello world!",
    retryOnFail: true,
}

// Error!
sendMessage(opts);
// No overlap between the type of 'opts' and 'Options' itself.
// Maybe we meant to use 'data'/'maxRetries' instead of 'payload'/'retryOnFail'.

This check also catches situations like classes that might forget to implement members of an interface:

interface Foo {
    someMethod?(): void;
    someOtherMethod?(arg: number): string;
}

// Error! Did 'Dog' really need to implement 'Foo'?
class Dog implements Foo {
    bark() {
        return "woof!";
    }
}

This change to the type system may introduce some breakages, but in our exploration of existing codebases, this new check primarily catches silent errors that users weren’t aware of.

If you really are sure that a value should be compatible with a weak type, consider the following options:

  1. Declare properties in the weak type that are always expected to be present.
  2. Add an index signature to the weak type (i.e. [propName: string]: {}).
  3. Use a type assertion (i.e. opts as Options).

In the case above where the class Dog tried to implement Foo, it’s possible that Foo was being used to ensure code was implemented correctly later on. You can get around this by declaring them as optional properties of the type never.

class Dog implements Foo {
    // These properties should never exist.
    someMethod?: never;
    someOtherMethod?: never;

    bark() {
        return "woof!";
    }
}

Enjoy!

You can read our full What’s New in TypeScript page on our wiki for more details on this new release. To see a full list of breaking changes, take a look at our breaking changes page as well.

Keep in mind that any sort of constructive feedback that you can give us is always appreciated, and used as the basis of every new version of TypeScript. Any issues you run into, or ideas that you think would be helpful for the greater TypeScript community can be filed on our GitHub issue tracker.

If you’re enjoying TypeScript 2.4, let us know on Twitter with the #iHeartTypeScript hashtag.

Thanks for reading up on this release, and happy hacking!

Aggregation of OMS Data from across Azure Subscriptions

An Operations Management Suite (OMS) repository can be associated with a single Azure subscription. Companies that host their products in a separate per-tenant Azure subscription for each customer need to consolidate the health data from all these subscriptions for ease of monitoring and tracking. OMS provides the ability to configure alerts that call a webhook configured on a central incident management system, and the IT support team can then track and diagnose issues in their solution deployment. However, when no such system is implemented, an alternative approach is to host a common OMS Workspace to which the individual, per-tenant workspaces send their health data using the Data Collector API.


Aggregation of Heartbeat data from Compute Resources in Azure

The PowerShell script VMsHeartbeatAggregator.ps1 that implements the aggregation logic is called from an Azure Runbook. The Runbook executes under an Azure Automation Account and has a scheduler configured to recurrently trigger the PowerShell script.

The PowerShell script executes a Dynamic Query on OMS to retrieve aggregated Heartbeat Data from the VMs and VMSS deployed in that Subscription. This data is then pushed to the Common OMS Workspace using the Data Collector API. This Runbook would be deployed in each of the Azure Subscriptions and the data captured from these OMS Workspaces is aggregated to a single, common OMS Workspace.

The Data Collector API (a REST API) does not insert or update data in the Record Types that OMS provides by default, such as Heartbeat and Event. You must create a custom Record Type into which the data is pushed using the API. Assign a different Record Type name in the common OMS Workspace for each source OMS Workspace that sends data.
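To illustrate the shape of such a push, here is a minimal PowerShell sketch of sending a record to the Data Collector API. It follows the SharedKey signature scheme described in the documentation referenced below; the workspace ID, key, record type, and record body are placeholders, and the full aggregation logic lives in VMsHeartbeatAggregator.ps1.

# Placeholders - common workspace ID, its primary key, and a custom record type name.
$CustomerId = '00000000-0000-0000-0000-000000000000'
$SharedKey  = '<base64 workspace key>'
$LogType    = 'CustomerXStagingHeartbeat'

$body   = '[{"Computer":"vm01","HeartbeatCount":42}]'
$date   = [DateTime]::UtcNow.ToString('r')
$length = [Text.Encoding]::UTF8.GetBytes($body).Length

# Build the SharedKey authorization signature over the canonical string.
$stringToSign = "POST`n$length`napplication/json`nx-ms-date:$date`n/api/logs"
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Convert]::FromBase64String($SharedKey)
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

Invoke-WebRequest -Method Post -UseBasicParsing `
    -Uri "https://$CustomerId.ods.opinsights.azure.com/api/logs?api-version=2016-04-01" `
    -ContentType 'application/json' -Body $body `
    -Headers @{ 'Authorization' = "SharedKey ${CustomerId}:$signature"
                'Log-Type'      = $LogType
                'x-ms-date'     = $date }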

View Designer and Custom Views can be used to create the Dashboard in the common OMS Workspace, one for each Source OMS Workspace. Provide hyperlinks in these Dashboards back to the Dashboards in the respective Source OMS Workspace. This lets the Help Desk Team drill down into the raw data in the Source Subscriptions to investigate an issue.

The PowerShell scripts used to implement this scenario can be downloaded from the GitHub repository here. They are based on the guidance on invoking the Data Collector API from PowerShell, which is covered in the documentation here.

The key steps to be performed to implement this scenario are:

  • Configure a Log Analytics Resource and add the VMs and VMSS Resources to be monitored to it. An OMS Workspace corresponding to the Log Analytics Resource is created where the Heartbeat information is sent to. Refer to the Azure documentation here to get started
  • Create an Azure Automation Account from the Azure Portal
  • Within the Automation Account, navigate to Runbooks> Add a new Runbook
  • Edit the runbook and paste the PowerShell script from VMsHeartbeatAggregator.ps1 into it

To execute this script, certain PowerShell Modules need to be imported first, as shown in the screenshot below. At the prompt, acknowledge the installation of the Dependent Modules as well.


  • Add a Scheduler to the Runbook created above. Since the PowerShell script uses input parameters, the Scheduler configuration form would prompt for these values. In the scenario implemented here, the input parameters used are:

[Parameter (Mandatory=$true)]
[String] $ResourceGroupName,    # Resource Group that contains the VMs and VMSS being monitored
[Parameter (Mandatory=$true)]
[String] $WorkspaceName,        # Source OMS Workspace name whose Heartbeat data is queried
[Parameter (Mandatory=$true)]
[String] $CustomerName,         # e.g. CustomerX, for whom the solution is deployed
[Parameter (Mandatory=$true)]
[String] $EnvironmentName       # e.g. staging

The values of $CustomerName and $EnvironmentName are concatenated to form the name of the Record Type created in the Common OMS Workspace.

In the PowerShell script, set the values of the following variables pertaining to the target common OMS Workspace:

# This is the unique identifier of the common Workspace in OMS
$CustomerId = "[Enter the Common OMS Workspace ID]"
# The access key to connect to the common OMS Workspace
$SharedKey = "[Enter the Access key required to invoke the Data Collector API on the common OMS Workspace]"

These values can be obtained from the OMS Workspace Data Source Settings page, as shown in the screenshot below.
  • Execute the Runbook manually, or use the Scheduler to trigger it. View the Console output and ensure that there are no errors.
  • Navigate to the Common OMS Workspace and use Log Search to view the data inserted using the Data Collector API. Note that the Record Type name is suffixed with _CL by the API. See Screenshot below.


 

  • Likewise execute the Runbook in each of the Source Subscriptions and push the data to the corresponding Record Type in the Common OMS Workspace.
  • Use View Designer or Custom Views in the Common OMS Workspace to create a Dashboard for data inserted for each Record Type

Azure Web App Monitoring with Alerts

The PowerShell script WebAppHealthcheckRunbook.ps1 in this sample is deployed in a Runbook that exposes a Web hook.

  • An Azure Application Insights resource is configured to execute Web App URL ping tests
  • When the outcome is a failure, an alert is raised which calls the webhook configured for the Runbook and passes details such as the Web App name and resource name to it
  • The PowerShell script then invokes the Data Collector API of the common OMS Workspace and inserts the ping test result data into a custom Record Type

Guidance on working with webhooks in a Runbook is covered in the Azure documentation here.
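For context, a webhook-triggered Runbook receives the alert payload through the standard $WebhookData parameter; a minimal sketch of parsing it might look like this (the property names on the payload are illustrative and depend on the alert schema you configure).

param (
    # Populated automatically when the Runbook is started by its webhook.
    [object] $WebhookData
)

if ($WebhookData -ne $null) {
    # The alert posts a JSON body; the property names below are illustrative.
    $payload = $WebhookData.RequestBody | ConvertFrom-Json
    $webAppName = $payload.context.resourceName
    Write-Output "Ping test failure reported for $webAppName"
    # ... build the record and push it to the common OMS Workspace
    #     with the Data Collector API, as in the earlier sketch ...
}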

Application Insights can be configured to execute ping health checks on a Web App, and an alert can be configured to invoke the webhook exposed by a Runbook.


 

 

 


VSTS Private Agents with ARM

This post is provided by App Dev Managers, Mariusz Kolodziej and Francis Lacroix who discuss how to automagically deploy a VSTS Private Agent with Azure Resource Manager (ARM) and some PowerShell.


My customers love to use VSTS to enable their DevOps capabilities, but in some cases they are not able to use the Hosted Agents due to security restrictions. In that case, the alternative is to use Private Agents. For detailed description on differences between the two configurations checkout this article.  In this blog we’ll discuss how to automagically deploy a VSTS Private Agent with Azure Resource Manager (ARM) and some PowerShell. 
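To give a flavor of what such automation does, the ARM deployment typically ends up running the agent’s unattended configuration on the VM (for example via a Custom Script Extension). This is only a hedged sketch, not the script from the linked post; the account URL, PAT, pool, and extraction folder are placeholders.

# Placeholders - substitute your VSTS account, a PAT with Agent Pools (read, manage) scope,
# the target pool, and the folder where the agent package was extracted.
$account = 'https://myaccount.visualstudio.com'
$pat     = '<personal access token>'

Set-Location 'C:\vstsagent'
.\config.cmd --unattended `
    --url $account `
    --auth pat --token $pat `
    --pool 'Default' `
    --agent $env:COMPUTERNAME `
    --runAsService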

Continue reading on Mariusz’s blog.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Create Bot for Microsoft Graph with DevOps 6: Continuous Deployment – Release Definition

This time, I create a release definition to complete the CI/CD pipeline. While a build definition defines how to build the source, a release definition defines how to release the compiled bits. I include the following three tasks.

  • Infrastructure as Code (an ARM template for Azure) to automate the infrastructure.
  • Release
  • Function Test

You can find the detail of ARM Template here.

ARM (Azure Resource Manager) Template

I already created the App Service via Visual Studio, but this time I will automate it by using an ARM template.

Get Template

It’s tedious to write a template from scratch, but you can get one from the Azure Portal.

1. Login to https://portal.azure.com. To start from scratch, let’s delete the resource group provisioned previously.
* Before deleting the environment, please take a note of each application setting, as you will need them later.


2. Next, create a new Web App.


3. Specify the settings and click [Automation options], which generates templates for you.


4. Click [Download]


5. Extract the template.zip file. You will find template.json, which contains the service definitions, and parameters.json, which contains values for each parameter.


6. As this template only contains a single Web App service definition, you can add more as you need. For now, replace template.json as follows.

{
  "parameters": {
    "webName": {
      "type": "string"
    },
    "webNameTest": {
      "type": "string"
    },
    "hostingPlanName": {
      "type": "string"
    },
    "hostingEnvironment": {
      "type": "string"
    },
    "location": {
      "type": "string"
    },
    "sku": {
      "type": "string"
    },
    "skuCode": {
      "type": "string"
    },
    "workerSize": {
      "type": "string"
    },
    "serverFarmResourceGroup": {
      "type": "string"
    },
    "subscriptionId": {
      "type": "string"
    },
    "botId": {
      "type": "string"
    },
    "microsoftAppId": {
      "type": "string"
    },
    "microsoftAppPassword": {
      "type": "string"
    },
    "activeDirectory.RedirectUrl": {
      "type": "string"
    },
    "botIdTest": {
      "type": "string"
    },
    "microsoftAppIdTest": {
      "type": "string"
    },
    "microsoftAppPasswordTest": {
      "type": "string"
    },
    "activeDirectory.RedirectUrlTest": {
      "type": "string"
    }
  },
  "resources": [
    {
      "apiVersion": "2016-03-01",
      "name": "[parameters('webName')]",
      "type": "Microsoft.Web/sites",
      "properties": {
        "name": "[parameters('webName')]",
        "serverFarmId": "[concat('/subscriptions/', parameters('subscriptionId'),'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
        "hostingEnvironment": "[parameters('hostingEnvironment')]"
      },
      "location": "[parameters('location')]",
      "tags": {
        "[concat('hidden-related:', '/subscriptions/', parameters('subscriptionId'),'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "empty"
      },
      "dependsOn": [
        "[concat('Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]"
      ],
      "resources": [
        {
          "apiVersion": "2015-08-01",
          "name": "appsettings",
          "type": "config",
          "tags": {
            "displayName": "WebAppSettings"
          },
          "properties": {
            "BotId": "[parameters('botId')]",
            "MicrosoftAppId": "[parameters('microsoftAppId')]",
            "MicrosoftAppPassword": "[parameters('microsoftAppPassword')]",
            "ActiveDirectory.RedirectUrl": "[parameters('activeDirectory.RedirectUrl')]"
          },
          "dependsOn": [
            "[concat('Microsoft.Web/sites/', parameters('webName'))]"
          ]
        }
      ]
    },
    {
      "apiVersion": "2016-03-01",
      "name": "[parameters('webNameTest')]",
      "type": "Microsoft.Web/sites",
      "properties": {
        "name": "[parameters('webNameTest')]",
        "serverFarmId": "[concat('/subscriptions/', parameters('subscriptionId'),'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
        "hostingEnvironment": "[parameters('hostingEnvironment')]"
      },
      "location": "[parameters('location')]",
      "tags": {
        "[concat('hidden-related:', '/subscriptions/', parameters('subscriptionId'),'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "empty"
      },
      "dependsOn": [
        "[concat('Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]"
      ],
      "resources": [
        {
          "apiVersion": "2015-08-01",
          "name": "appsettings",
          "type": "config",
          "tags": {
            "displayName": "WebAppSettings"
          },
          "properties": {
            "BotId": "[parameters('botIdTest')]",
            "MicrosoftAppId": "[parameters('microsoftAppIdTest')]",
            "MicrosoftAppPassword": "[parameters('microsoftAppPasswordTest')]",
            "ActiveDirectory.RedirectUrl": "[parameters('activeDirectory.RedirectUrlTest')]"
          },
          "dependsOn": [
            "[concat('Microsoft.Web/sites/', parameters('webNameTest'))]"
          ]
        }
      ]     
    },
    {
      "apiVersion": "2016-09-01",
      "name": "[parameters('hostingPlanName')]",
      "type": "Microsoft.Web/serverfarms",
      "location": "[parameters('location')]",
      "properties": {
        "name": "[parameters('hostingPlanName')]",
        "workerSizeId": "[parameters('workerSize')]",
        "numberOfWorkers": "1",
        "hostingEnvironment": "[parameters('hostingEnvironment')]"
      },
      "sku": {
        "Tier": "[parameters('sku')]",
        "Name": "[parameters('skuCode')]"
      }
    }
  ],
  "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0"
}

7. Replace the code in parameters.json. Change the values to fit your environment.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "webName": {
      "value": "o365botprod"
    },
    "webNameTest": {
      "value": "o365bottest"
    },
    "hostingPlanName": {
      "value": "O365BotPlan"
    },
    "hostingEnvironment": {
      "value": ""
    },
    "location": {
      "value": "South Central US"
    },
    "sku": {
      "value": "Standard"
    },
    "workerSize": {
      "value": "0"
    },
    "serverFarmResourceGroup": {
      "value": "O365BotRG"
    },
    "skuCode": {
      "value": "S1"
    },
    "subscriptionId": {
      "value": "__YourAsureSubscriptionId__"
    },
    "botId": {
      "value": "__YourBotId__"
    },
    "microsoftAppId": {
      "value": "__YourMicrosoftAppId__"
    },
    "microsoftAppPassword": {
      "value": "__YourMicrosoftAppPassword__"
    },
    "activeDirectory.RedirectUrl": {
      "value": "__YourSite__/api/OAuthCallback"
    },
    "botIdTest": {
      "value": "__YourTestBotId__"
    },
    "microsoftAppIdTest": {
      "value": "__YourTestMicrosoftAppId__"
    },
    "microsoftAppPasswordTest": {
      "value": "__YourTestMicrosoftAppPassword__"
    },
    "activeDirectory.RedirectUrlTest": {
      "value": "__YourTestSite__/api/OAuthCallback"
    }
  }
}
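Before wiring the template into the release, it can be handy to validate and deploy it locally; here is a minimal sketch using the AzureRM cmdlets. The resource group name and location match the parameters file above but are still placeholders, as are the local file paths.

# Placeholders - use your own resource group, location, and local file paths.
Login-AzureRmAccount
New-AzureRmResourceGroup -Name 'O365BotRG' -Location 'South Central US' -Force

# Validate first, then deploy the same template/parameters the release will use.
Test-AzureRmResourceGroupDeployment -ResourceGroupName 'O365BotRG' `
    -TemplateFile .\template.json -TemplateParameterFile .\parameters.json

New-AzureRmResourceGroupDeployment -ResourceGroupName 'O365BotRG' `
    -TemplateFile .\template.json -TemplateParameterFile .\parameters.json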

Create repository for ARM template

In VSTS, create new repository for ARM template.

1. Login to your VSTS and go to the project.

2. Open the repository dropdown and click [New repository].


3. Enter any name such as ARM.


4. Click [Initialize].


5. Once the repository is created, copy template.json and parameters.json into it.


Release Definition

Now I am ready to create the Release Definition.

Create Release Definition

1. Select Build & Release and go to Releases, click [New definition].


2. Select Empty as template and click [Next].


3. Select your build for this release and enable [Continuous deployment]. As you can see, Jenkins is also supported.


Now a blank definition is created.

Add Artifacts

If you need files in addition to the build output, you can link them as an artifact.

1. Select Artifacts tab and click [Link an artifact source]


2. Select ARM repository by using Git type.


3. Do the same for BotWithDevOps Repository.


Add ARM task

1. Go back to Environment tab, and rename [Environment 1] to [ARM]


2. Click [Add tasks].


3. From Deploy category, add [Azure Resource Group Deployment].


4. Change the template version down to [1.*]. v2 didn’t work well in my lab.


5. Select the Azure subscription and click [Authorize]. Select [Create or update resource group] for Action, which creates the environment if it does not exist, or otherwise updates the settings to match the templates.


6. Click […] menu next to Template, and select template.json from ARM artifact.


7. Do the same for parameters.json.


Add Release Definition

1. Click [Add environment] and select [Create new environment].


2. Select [Azure App Service Deployment with Test] template.


3. Select [Automatically approve] and create.


4. Change the environment name to Test.


5. Specify Azure Subscription and App Service Name.


6. Select the Run Tests task, and update Test assemblies to *functiontests*.dll.


7. Click […] for Settings File


8. Select Test.runsettings from BotWithDevOps git.


9. I also enabled [Code coverage enabled]. Just select any option as you want.


10. Click [Run on agent] and select [Hosted VS2017].


11. Let’s add prod environment, too. You can clone the environment this time.


12. Change the environment name to [Prod]


13. Change App Service name.


14. Update Run Tests Settings file, too.


15. Then name the release definition and click [Save]


Test the Release Definition

Let’s test the definition.

1. Click [Create Release] from Release.


2. Select the latest check-in and create.


3. Select the definition in the left pane, and you will see the release is queued. Click […] to open it.


4. Click Log tab to see details.


5. Confirm the result. If something went wrong, fix the issue.

Summary

Okay, most of the DevOps part has been done! I will start introducing BotBuilder features next time.

Ken

PIX 1706.25.002 – system monitor and timing capture improvements

Today we released PIX 1706.25.002 beta and an updated WinPixEventRuntime (version 1.0.170625002).

New in this release:

  • System Monitor displays realtime counter data while a game is running
    • Present statistics (fps, frame duration, sync interval)
    • GPU memory usage (commitment, budget, demotions)
    • Custom title counters reported by the WinPixEventRuntime PIXReportCounter API
  • Continuous timing captures
    • Record timing data into a circular buffer (rather than just capturing a fixed duration), then use the System Monitor graph view to select a time region of interest and open that as a timing capture
  • Timing capture event list can now be ordered by either CPU or GPU execution time
  • Timing capture GPU timeline uses flame graphs to display nested marker regions
  • More robust pixel history (many bugfixes)
  • Fixed crashes caused by HLSL syntax highlighting
  • Improved callstack resolution performance when opening timing captures
  • Support for Function Summary, Callgraph, Memory and File IO captures of packaged titles

 


Excel's SpecialCells method returns unexpected cells in certain situations

Hello, this is Nakamura from the Office Development Support team.

Excel provides the SpecialCells method, which returns the cells matching the conditions specified in its arguments.

Title: Range.SpecialCells Method (Excel)

URL: https://msdn.microsoft.com/ja-jp/library/office/ff196157.aspx

With Excel's current behavior, when this method is used in a particular flow of operations, it may not return the range you expect.

This article describes that behavior in detail. If your situation matches the conditions under which it occurs, please consider applying one of the workarounds described later.

Table of Contents
1. Repro sample

1-1. Repro file setup
1-2. Sample program

2. Repro steps and symptom
3. Conditions and cause
4. Workarounds

4-1. Set the DisplayAlerts property to False
4-2. Execute the SpecialCells method beforehand


1. Repro sample

The conditions for this issue are somewhat involved, so let's first walk through the concrete behavior using a file and a sample program that reproduce it.


 

1-1. Repro file setup

Create Excel files with the following structure.

  • A new workbook to serve as the external link target (referred to below as [testBook.xlsx]; its contents do not need to be changed).
  • A workbook with the following sheet structure:

<Sheet 1>

Leave this sheet empty. (The relevant condition is that no cell matches the condition specified for the SpecialCells method.)

<Sheet 2>

Enter a formula containing a link to the external workbook.

Example: =[testBook.xlsx]Sheet1!A1

A further condition is that this link cannot be updated when the repro steps are executed. Therefore, when you later run the repro steps, do not open the linked workbook (testBook.xlsx) at the same time, and do not place it in the same folder as the repro file.

1-2. Sample program

So that a macro runs when the workbook containing the formula created in 1-1 is opened, add the following macro to the ThisWorkbook object. The macro uses the SpecialCells method to search for cells containing formulas and displays the addresses of the matching cells in a dialog.

Private Sub Workbook_Open()
    MsgBox "Sheet1:" & Worksheets(1).Cells.SpecialCells(xlCellTypeFormulas).Address
End Sub

The key point of this sample program is that the SpecialCells method is executed against a sheet that has no matching cells. Because nothing has been entered on Sheet 1, there are no matching cells. In this case, the expected behavior is that an error message like the following is displayed.

Figure 1. Expected behavior when the SpecialCells method finds no matching cells

Save the file created with the steps above.


 

2. 再現手順と現象

次に、再現手順を説明します。今回の現象は、再現手順にも条件があります。

 

まず、1. で作成した数式とマクロを含むファイルを開きます。

 

注意

このとき、ファイルが信頼されていない場合は、マクロや数式の更新が無効となり、セキュリティの警告が黄色いバーで表示され、[コンテンツの有効化] ボタンのクリックによってこれらの処理が行われます。この状態では、現象は再現しません。ファイルのオープン時に自動的にこれらの処理が行われるよう、ファイルを信頼してください。

<ファイルを信頼する方法>

以下のいずれかの方法で、今回の再現に必要なマクロの実行と外部リンクの更新を許可することができます。

  • [オプション] – [セキュリティ センターの設定] で [信頼できるドキュメント] の [信頼済みドキュメントを無効にする] にチェックが入っていない状態で、ファイルを開いて [コンテンツの有効化] をクリックします。信頼済みドキュメントにするか確認するダイアログが表示される場合は、ここで [はい] を選択します。
  • [オプション] – [セキュリティ センターの設定] で [信頼できる場所] にファイルの格納フォルダを追加します。
  • [オプション] – [セキュリティ センターの設定] で [マクロの設定] を [すべてのマクロを有効にする] に、かつ、[外部コンテンツ] の [ブック リンクのセキュリティ設定] を [すべてのブック リンクの自動更新を有効にする] に設定します。(※ 以後、全てのファイルにこの設定が有効になりますので、検証後元に戻してください。)

 

次に、表示されるリンクの更新確認ダイアログで、[更新する] をクリックします。(ここで [更新しない] を選択すると、現象は発生しません。)

Figure 2. External link update confirmation dialog

 

The following dialog is then displayed; click [Continue].

Figure 3. Dialog notifying that the external link could not be updated

 

At this point you would expect the "No cells were found" error, but when the issue occurs, the entire range of cells ($1:$1048576) is returned instead, as shown below.

Figure 4. Result when the issue occurs


 

3. Conditions and cause

The conditions described so far can be summarized as follows.

 

<Conditions when creating the file>

1. The workbook contains a link to an external workbook.

2. The first sheet checked with the SpecialCells method has no cells that match the specified criteria.

<Conditions when running the repro>

3. The workbook is trusted, so link updates and macro execution run when the workbook is opened.

4. [Update] is selected in the link update confirmation dialog.

5. The link to the external workbook cannot be updated.

 

When all of these conditions are met, the dialog in Figure 3 is displayed during the external link update at workbook open, as an error indicating that the link cannot be updated.

Because this error occurs, Excel's internal processing enters a state in which errors, including unrelated ones, are temporarily ignored. If the SpecialCells method runs while this state is active, the "No cells were found" error is also ignored. The result is an unintended situation in which SpecialCells found no matching cells yet no error was raised, so the entire range of cells on the sheet is returned.


 

4. Workarounds

You can avoid this issue by taking either of the following measures in your program so that the repro conditions are not met.


 

4-1. Set the DisplayAlerts property to False

As described above, the state in which errors are ignored arises because the dialog in Figure 3 is displayed.

To prevent this dialog from being displayed, set the DisplayAlerts property to False immediately before calling the SpecialCells method.

 

Title: Application.DisplayAlerts property (Excel)

URL: https://msdn.microsoft.com/ja-jp/library/office/ff839782.aspx

 

<Sample program with workaround 4-1 added>

Private Sub Workbook_Open()
    Application.DisplayAlerts = False 'Workaround: suppress message notifications
    MsgBox "Sheet1:" & Worksheets(1).Cells.SpecialCells(xlCellTypeFormulas).Address
    Application.DisplayAlerts = True 'Workaround: restore message notifications
End Sub

 

With this workaround, the dialog in Figure 3 is not shown to the user.
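As a minimal sketch of a more defensive variant (this combination is illustrative and not part of the original article), the workaround can be paired with explicit error handling so the macro also behaves sensibly when the "No cells were found" error is raised normally:

Private Sub Workbook_Open()
    Dim matched As Range

    Application.DisplayAlerts = False   'Workaround: suppress the link-update failure dialog
    On Error Resume Next                'The "No cells were found" error is expected on an empty sheet
    Set matched = Worksheets(1).Cells.SpecialCells(xlCellTypeFormulas)
    On Error GoTo 0
    Application.DisplayAlerts = True    'Restore message notifications

    If matched Is Nothing Then
        MsgBox "Sheet1: no matching cells"
    Else
        MsgBox "Sheet1:" & matched.Address
    End If
End Sub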


 

4-2. Call the SpecialCells method in advance

The dialog in Figure 3 is displayed the first time SpecialCells runs in the workbook.

Therefore, another option is to make a dummy SpecialCells call, whose result you do not use, before the SpecialCells call whose result you actually need.

 

<Sample program with workaround 4-2 added>

Private Sub Workbook_Open()
    Worksheets(1).Cells.SpecialCells(xlCellTypeFormulas) 'Workaround: dummy SpecialCells call
    MsgBox "Sheet1:" & Worksheets(1).Cells.SpecialCells(xlCellTypeFormulas).Address
End Sub

 

With this approach, the dialog in Figure 3 is shown to the user.

 

That's all for this post.

 

The information in this article (including attachments and linked content) is current as of the date of publication and may change without notice.

 

The two-pronged approach to detecting persistent adversaries

April 13, 2017 – Microsoft Secure Blog Staff, Microsoft

This post is a translation of "The two-pronged approach to detecting persistent adversaries."

Advanced persistent threats (APTs) rely on two main means of maintaining persistence: compromised endpoints and compromised credentials. It is therefore important to use tools that can detect both at the same time. Deploying a tool that detects only one of the two gives the adversary more opportunities to remain in your network.

These two broad categories cover a range of attack vectors, including zero-day exploits, abuse of vulnerabilities and weak defenses, social engineering, custom malware with malicious implants, and the harvesting of legitimate credentials. Many cybersecurity tools provide insufficient detection controls for these attacks, and very few can detect when harvested credentials are actually used. Microsoft has therefore invested heavily in tools that help organizations address both problems.


Because many initial attacks are still delivered through email attachments, email-based protection tools remain a critical first line of defense. Office 365 Advanced Threat Protection protects your mailboxes in real time against new and sophisticated attacks, stopping email-borne attacks by guarding against unsafe attachments and malicious links.

Not every attack arrives by email, however. Windows Defender Advanced Threat Protection (Windows Defender ATP) enables enterprise customers to detect, investigate, and respond to advanced zero-day attacks on their endpoints. Built-in behavioral sensors, machine learning, and analytics detect attacks that get past other defenses. An unparalleled view of threats, combined with deep expertise in OS security and big data, yields actionable alerts for security operations (SecOps). SecOps teams can investigate up to six months of historical data on a single timeline and use one-click response actions to effectively contain incidents and remediate infected endpoints. Windows Defender ATP includes sensors that track file, registry, network, process, memory, and kernel activity so that defenders understand what is happening on an endpoint.

 

Complementing these endpoint detection capabilities, Microsoft Advanced Threat Analytics provides key insight into suspicious and anomalous user behavior, detecting indications of lateral movement, credential theft activity, and known attacker techniques. These are typically blind spots for network defenders, digital forensics teams, and incident responders. By collecting network traffic and events in your environment and combining machine learning with detection of known techniques, Advanced Threat Analytics turns noise into relevant suspicious activity and simplifies the work of incident response teams. The earlier responders detect an adversary, the better they can prevent the attacker from gaining persistent access to the network.

 

For incident response teams, detecting anomalous activity directly on endpoints is just as important as detecting compromised credentials.

Let's walk through a real-world example.


 

In this example, Windows Defender ATP detects a user-level exploit (assuming the application ran in user mode) and raises the first alert for the attack, which the incident response team reviews. When the attacker attempts to access the domain controller with a forged Privileged Attribute Certificate (PAC), the attack fails, because you patched your domain controllers against MS14-068. Advanced Threat Analytics detects the failed forged-PAC attempt, which is evidence that an adversary is active in your environment and attempting to escalate privileges.

When Advanced Threat Analytics identifies User-Workstation-B as the source computer of the attack, many responders would examine only that asset. To fully understand the scope of the compromise, however, you need to investigate every machine this user works on in order to find the originally infected computer and any other compromised endpoints. By following standard digital forensics and incident response pivoting practices and using the right tools, network defenders can quickly identify the connection from User-Workstation-A to User-Workstation-B and trace the intrusion back to the initial compromise.

Without detecting both advanced attacks on endpoints and compromised credentials, response and recovery efforts fall short. For example, if you only clean up the targeted endpoints without resetting the compromised credentials, the adversary retains access to your environment. Likewise, if you only reset the compromised credentials, the adversary still retains access (and will easily harvest new credentials from the systems they can reach). In either case, not only does the adversary survive eviction, but the security team may even report to the board that the threat has been addressed and the environment is secure.

Combining the data and insights from Windows Defender ATP and Advanced Threat Analytics can genuinely change your recovery strategy and drive a complete investigation.

Using these two capabilities together can be a game changer for digital forensics and incident response teams: they can instantly search and examine six months of historical data across all endpoints, visualize forensic evidence and deep analysis, and respond quickly to stop attacks and prevent recurrence.

Microsoft's unique capabilities are further strengthened by the Microsoft Intelligent Security Graph, which correlates information about indicators of compromise, authentication, email, and more. Threats detected, blocked, and remediated by Microsoft products such as Windows Defender ATP and Advanced Threat Analytics are added to the Intelligent Security Graph, so when a persistent attack is detected and remediated in one solution, the other solutions can immediately begin protecting against those threats as well.

When evaluating measures and tools to protect against advanced persistent threats, consider how you can move beyond traditional detection tools that look at a single alert, axis, input, or variable. By looking for integrated tools, defenders gain meta-event analysis along with greater speed and accuracy.

If you have questions, reach out to the Microsoft Advanced Threat Analytics Tech Community site or the Windows Defender ATP team on the TechNet forums and join the discussion. For more about Microsoft's approach and vision for cybersecurity, visit the Microsoft Secure website.

 

.NET Framework June 2017 Cumulative Quality Update for Windows 10

Today, we are releasing a new Cumulative Quality Update for the .NET Framework. It is specific to Windows 10 Creators Update (1703).

Previously released security and quality updates are included in this release, including the .NET Framework May 2017 Security and Quality Rollup and the May 2017 Cumulative Quality Update for Windows 10. There was no Security and Quality Rollup released in June.

Security

This release contains no new security improvements.

Quality and Reliability

The following improvements are included in this release.

Windows Presentation Foundation (WPF)

Issue 429046

Resolves a reliability issue where a PimcContext is incorrectly used after it is released.

This issue affects Visual Studio 2017. You are encouraged to install this update if you use Visual Studio 2017.

Issue 429047

Resolves a reliability issue where a failure to query a tablet cursor is incorrectly handled from the WISP component.

Issue 429048

Resolves a reliability issue where a PenContext is incorrectly used after it is released.

Getting the Update

The June 2017 Cumulative Quality Update is available via Windows Update, Windows Server Update Services, and the Microsoft Update Catalog.
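As a minimal sketch (this check is illustrative and assumes the update is reported through the standard hotfix list), you can verify from PowerShell whether the KB shown in the table below is already installed on a machine:

# Returns the hotfix entry if KB4022723 is installed; returns nothing otherwise
Get-HotFix -Id KB4022723 -ErrorAction SilentlyContinue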

Docker Images

The Windows ServerCore and .NET Framework images have not been updated for this release.

Downloading KBs from Microsoft Update Catalog

You can download patches from the table below. See .NET Framework Monthly Rollups Explained for an explanation on how to use this table to download patches from Microsoft Update Catalog.

Product Version | Cumulative Quality Update KB
Windows 10 Creators Update | Catalog 4022723
.NET Framework 4.7 | 4022723

Previous Monthly Rollups

The last few .NET Framework Monthly Rollups are listed below for your convenience:

More Information

You can read the .NET Framework Monthly Rollups Explained to learn more about how the .NET Framework is updated.

PowerShell: Automating creation and editing of Task Sequences in 1706 (TP and CB)

Our development team has been working hard on implementing a much requested automation scenario for PowerShell in Configuration Manager, and that’s being able to create and modify task sequence steps.

Task sequence editing has three separate pieces: groups of steps, commands (such as “Install Application” and “Partition Disk”), and conditions. With 1706 Current Branch and Technical Preview we now have PowerShell support for creating and removing groups, all conditional statements, and what have been identified as the most commonly used task sequence steps.

The typical flow is something like this:

  1. Create your task sequence steps and groups (New-CMTaskSequenceStepCommand)
  2. Create a task sequence (New-CMTaskSequence)
  3. Add the steps you've created to the task sequence ($ts | Add-CMTaskSequenceStep -Step ($step1, $step2, $step3))

In 1706, the following Step types are supported for Get, New, Remove, and Set operations:

  • Run command line (Verb-CMTaskSequenceStepRunCommandLine)
  • Install application (Verb-CMTaskSequenceStepInstallApplication)
  • Install software (Verb-CMTaskSequenceStepInstallSoftware)
  • Install update (Verb-CMTaskSequenceStepInstallUpdate)
  • Partition disk (Verb-CMTaskSequenceStepPartitionDisk)
  • Reboot (Verb-CMTaskSequenceStepReboot)
  • Run PowerShell script (Verb-CMTaskSequenceStepRunPowerShellScript)
  • Setup Windows and Configuration Manager (Verb-CMTaskSequenceStepSetupWindowsAndConfigMgr)
  • Set variable (Verb-CMTaskSequenceStepSetVariable)

We plan to add additional Step types in future releases.

Here’s an example showing how easy it is to create a new custom task sequence that runs two PowerShell scripts:

$step1 = New-CMTaskSequenceStepRunPowerShellScript -Name "Run script 1" -PackageID $PackageId -ScriptName "script1.ps1" -ExecutionPolicy Bypass
$step2 = New-CMTaskSequenceStepRunPowerShellScript -Name "Run script 2" -PackageID $PackageId -ScriptName "script2.ps1" -ExecutionPolicy Bypass
$ts = New-CMTaskSequence -Name "Run scripts" -CustomTaskSequence
$ts | Add-CMTaskSequenceStep -Step ($step1, $step2)
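As a further minimal sketch (assuming the "Run scripts" task sequence created above already exists in the site), a supported step can also be appended to an existing task sequence the same way:

# Append a reboot step to the task sequence created above
$reboot = New-CMTaskSequenceStepReboot -Name "Reboot after scripts"
Get-CMTaskSequence -Name "Run scripts" | Add-CMTaskSequenceStep -Step $reboot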

If you’re interested in tracking the ongoing progress of this feature, you can follow the original feedback for this issue on UserVoice.


Checking integration status in Management Reporter

In Management Reporter (MR), the integration (data migration) from the AX database to the MR data mart database is carried out as a set of separate tasks.

You can check the status of each task by running the following SELECT statements against the MR database.

[Task execution status]

select CIG.[Description], STK.[Name], STS.[Progress],
CASE STS.[StateType]
WHEN 3 THEN '3-Processing'
WHEN 5 THEN '5-Complete'
WHEN 7 THEN '7-Error'
END AS StateType,
DATEADD(minute, DATEDIFF(minute,GETUTCDATE(),GETDATE()), STS.[LastRunTime]) as LocalLastRunTime,
DATEADD(minute, DATEDIFF(minute,GETUTCDATE(),GETDATE()), STS.[NextRunTime]) as LocalNextRunTime,
STRG.[Interval],
CASE STRG.[UnitOfMeasure]
WHEN 1 THEN 'Seconds'
WHEN 2 THEN 'Minutes'
WHEN 3 THEN 'Hours'
WHEN 4 THEN 'Days'
END AS UnitOfMeasure,
STRG.[IsEnabled]
from [Scheduling].[Task] STK with (nolock)
inner join [Scheduling].[TaskState] STS with (nolock) on STK.[Id] = STS.[TaskId]
inner join [Connector].[IntegrationGroup] CIG with (nolock) on CIG.[IntegrationId] = STK.[CategoryId]
inner join [Scheduling].[Trigger] STRG with (nolock) on STK.[TriggerId] = STRG.[Id]
order by CIG.[Description], STK.[Name];

[Task execution results (checking the number of records processed)]

select CIG.[Description], ST.[Name], SM.[Text], SM.[KEY] as MsKey,
DATEADD(minute, DATEDIFF(minute,GETUTCDATE(),GETDATE()), SL.[StartTime]) as LocalStartTime,
DATEADD(minute, DATEDIFF(minute,GETUTCDATE(),GETDATE()), SL.[EndTime]) as LocalEndTime,
SL.[TotalRetryNumber], SL.[IsFailed], STT.[Name] as TaskType
from [Scheduling].[Log] SL with (nolock)
inner join [Scheduling].[Task] ST with (nolock) on SL.TaskId = ST.Id
inner join [Scheduling].[Message] SM with (nolock) on SL.Id = SM.LogId
inner join [Scheduling].[TaskType] STT with (nolock) on ST.TypeId = STT.Id
inner join [Connector].[IntegrationGroup] CIG with (nolock) on CIG.[IntegrationId] = ST.[CategoryId]
order by SL.[StartTime] desc;
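As a minimal sketch for running these queries from a script (the server name, database name, and file names below are placeholders; it assumes the first query has been saved as MRTaskStatus.sql and that the SqlServer PowerShell module is installed):

# Run the task status query against the Management Reporter database and export the results
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance "MRSQL01" -Database "ManagementReporter" -InputFile ".\MRTaskStatus.sql" |
    Export-Csv -Path ".\MRTaskStatus.csv" -NoTypeInformation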

When you contact support, we may also ask you to run these queries. In that case, please paste the results into Excel and send them to us.

Docker Blog Series Part 3– Deploy IIS based applications to Service Fabric using Docker Containers

One of the value propositions of using containers with Service Fabric is that you can now deploy IIS-based applications to the Service Fabric cluster. In this blog post, we will see how to leverage Docker containers to deploy IIS apps to Service Fabric. I will skip image creation and pushing to Docker Hub here; please refer to my earlier post to learn more about creating and pushing images.

For this post, I will be using an IIS image that has already been pushed to Docker Hub. The application image uses microsoft/iis as its base image.

Let’s get started.

Step 1. Open Visual Studio 2017 and create a Service Fabric Application.


Step 2. Now choose the Guest Container option, provide a valid image name, and click OK. The image name is the one we published to Docker Hub in the previous exercise.


Step 3. Once the application is created and loaded, add endpoint information for the container to ServiceManifest.xml, as sketched below.
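A minimal sketch of such an endpoint entry (the endpoint name and port here are assumptions; adjust them to match your container):

<Resources>
  <Endpoints>
    <!-- Exposes the container's HTTP port on the cluster nodes -->
    <Endpoint Name="IisContainerTypeEndpoint" Protocol="http" UriScheme="http" Port="80" />
  </Endpoints>
</Resources>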


Step 4. Now let's add a section to ApplicationManifest.xml that supplies the Docker Hub credentials and the port binding for the endpoint, as sketched below.
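A minimal sketch of that section (the service manifest name, credentials, and endpoint reference are placeholders):

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="IisContainerPkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <!-- Docker Hub credentials used to pull the image -->
      <RepositoryCredentials AccountName="your-docker-hub-account" Password="your-password" />
      <!-- Map the container's port 80 to the endpoint declared in ServiceManifest.xml -->
      <PortBinding ContainerPort="80" EndpointRef="IisContainerTypeEndpoint" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>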


 

Step 5. We are now ready to publish our application to the Service Fabric cluster. Right-click the application and publish it to Azure Service Fabric. Make sure that when you create the Service Fabric cluster, you pick the option to create it with Windows Server 2016 with Containers.

 


Step 6. First, let's confirm that our application is deployed by using Service Fabric Explorer.


Step 7. Now, let's browse to our application. You should see your IIS application running on Service Fabric.

As you saw in this blog post, we can use Service Fabric as a container orchestrator for both new and legacy applications. More is coming in the next post on container orchestration capabilities such as DNS, scaling, and labels. Stay tuned!

The week in .NET – June 6, 2017

On .NET: Brett Morrison

Last week's On .NET featured Brett Morrison, an entrepreneur and corporate executive who builds on the Microsoft .NET platform. He founded startups such as Onestop and ememories and has also worked at SpaceX.

Package of the week: DateTime Extensions

Most date-related calculations in applications are simple and not very complicated. However, calculations such as "how many holidays are there this month?" are a little trickier. The DateTime Extensions project can help in such cases. The library currently has accurate holiday definitions for 24 cultures, which it can use in its calculations.


DateTimeCultureInfo pt_ci = new DateTimeCultureInfo("pt-PT");
DateTime startDate = new DateTime(2011, 4, 21);

//21-04-2011 - start
//22-04-2011 - holiday
//23-04-2011 - saturday
//24-04-2011 - sunday
//25-04-2011 - holiday
//26-04-2011 - end

DateTime endDate = startDate.AddWorkingDays( 1, pt_ci);
Assert.IsTrue(endDate == startDate.AddDays(5));

.NET news

ASP.NET news

C# news

F# news

VB news

Xamarin news

Azure news

UWP news

This weekly digest is a translation of The week in .NET, published every week on the .NET Blog. The Korean translation is produced with the help of Kisu Song, executive director at OpenSG.

Kisu Song, Executive Director of Technology, OpenSG
He is currently the technical director at OpenSG, a development consulting company, and works on projects across a range of industries. Before joining, he was a trainer who ran .NET developer courses at venues such as the Samsung Multicampus training center, and since 2005 he has spoken at developer conferences including TechED Korea, DevDays, and MSDN Seminar. These days he spends most of his working hours in Visual Studio and believes he can be a "Happy Developer" by writing about one book a year and giving a couple of lectures a month.

Getting the most out of your Premier Support for Developers Contract

In this post, Application Development Manager Deepa Chandramouli shares some tips on getting the most out of your Premier Support for Developers contract.


Microsoft Premier Support manages the highest-tier support programs from Microsoft. Premier Support for Developers (PSfD) empowers developers and enterprises to plan, build, deploy, and maintain high-quality solutions. When you purchase a Premier Support for Developers contract from Microsoft, an Application Development Manager (ADM) is assigned who will guide you in using the contract efficiently, in a way that benefits your developers and your business.

Premier Support for Developers and your ADM do not replace a development team; rather, they complement your team with best practice guidance, product and technology roadmaps, and help future-proofing your solutions. Your ADM becomes a trusted advisor and a persistent point of contact into Microsoft, with the technical expertise to understand your development needs and pain points and to recommend services that are right for you.

A Premier Support contract can be leveraged to validate architecture, perform design and code reviews against best practices, and help teams ramp up on new technology as needed. As with any Premier Support relationship, customers have two ways to engage support: Reactive and Proactive.

Reactive Support – Reactive, or Problem Resolution Support, provides a consistent way to engage Microsoft and open support cases when you run into issues with any Microsoft product or service still covered under the Lifecycle Policy. You can use http://support.microsoft.com or call 1800-936-3100 to open a support case with Premier.

Proactive Support – Proactive, or Support Assistance, is used for advisory consulting engagements and training, such as best practice guidance, code reviews, migration assessments, and workshops.

A common misconception about proactive support is that it is only meant to be used for training and workshops. It is also common practice to use proactive hours for remediation work that comes out of critical reactive support issues. There are many types of services and engagements customers can leverage through proactive hours to reduce the likelihood of reactive issues in the future. We understand that one size doesn't fit all, so most services can be customized to fit your needs. As with any successful project, the key to getting the most out of your investment in Premier Support is planning, planning, and planning ahead of time with your ADM.

Premier proactive services can be grouped into three broad categories:

  • Assess – Assessments are a great place to start, since their results drive other engagements and services. If you don't know where to begin with Premier, start with an assessment of your most critical workload that has pain points. The findings help align and prioritize next steps and the Premier services that can help.
  • Operate – Operate is the next step after assessment and helps address issues with applications and infrastructure, whether in the front end, the middle tier, or the database. For example, a performance assessment might lead to optimizing stored procedures. The SQL Performance and Optimization Clinic is a favorite of many Premier customers because it addresses performance issues and also educates developers on how to address bottlenecks in the future.
  • Educate – Educate focuses on empowering developers with the skills and tools they need to deliver successful applications. You have access to Premier open enrollment workshops and webcasts that you can register for at any time, and your ADM can share the broad list of available topics with you on a regular basis. You can also plan custom training that is more focused and targeted to your needs and relates to the projects your team is currently working on.


This is only a small subset of services, intended to give you an idea of how best to use the Premier Support for Developers contract. Application Development Managers (ADMs) can provide more information on each of these topics and on the full list of services that apply to your specific needs and environment. Another strong value proposition of Premier Support for Developers is custom engagements that cater to your needs and help you achieve your goals.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.  For more information on Premier Support for Developers, check out https://www.microsoft.com/en-us/microsoftservices/premier-support-developers.aspx

Insider Fast release notes: 15.37 (170627)

This week brings our first version 15.37 update to reach Insider Fast. Here’s a quick look at the update:

 

Top improvements and fixes:

  • Favorites: Folders can now be added or removed by clicking the star icon that appears when hovering over a folder
  • Favorites: Top-level Favorites group is now only displayed if at least one folder has been favorited
  • Google Calendar: Now view free & busy information for attendees in Scheduling when creating a meeting
  • Account setup: Improved error messages for IMAP accounts, including configuring IMAP access and two-step auth
  • When adding an Outlook.com account, calendars and contacts are now selected and properly displayed

 


Hover to add to Favorites or remove from Favorites

 

Other notes:

  • For any issues, please use Help > Contact Support
  • For feature requests, please use Help > Suggest a Feature
  • For weekly updates with the latest features and fixes listed here, join Insider Fast!
  • Lastly, the Outlook Preview is concluding this week; thanks for all the valuable feedback!

 
