
Application Insights – Advisory 06/20

We are working on switching our email delivery system for all Application Insights related emails. As part of this change, Application Insights emails will be delivered from the following email address: ai-noreply@applicationinsights.io, instead of ai-noreply@microsoft.com. Starting from 6/26/2017 20:00 UTC, all Application Insights emails will be sent from ai-noreply@applicationinsights.io. Customers might also notice small cosmetic changes.

Please refer to the following documentation for any additional details/information regarding Application Insights: https://docs.microsoft.com/en-us/azure/application-insights/

-Vitaliy


Running Spring Boot Java application as a Microservice in Service Fabric – Part 1


This post covers authoring a new Spring Boot application using Maven.

Start Eclipse and go to File–>New–>Maven Project.

Select a convenient location to save the project.

Select webapp as the archetype.

Provide a name for your application/JAR file.

Navigate to the pom.xml file thus generated. Change the packaging from WAR to JAR and add sections for properties and dependencies. I am using Java 1.8 as depicted below.

More dependencies will be added to the dependencies section.

The first dependency to add is the Spring Boot framework. Navigate to the Spring Boot website and click “PROJECTS”.

Select and copy the parent and dependency sections for Maven.

Paste the copied content into the pom.xml file as shown below.

Next, add the Tomcat Embed Jasper dependency.

Ensure that a “Release” version is chosen before you copy the Maven config settings.

Select the Maven-specific config settings.

Add the Spring Boot Tomcat Starter dependency. Ensure that a “Release” version is selected.

Copy the Maven config settings and paste them into pom.xml.

Next, add the Maven plugin itself by navigating to the Spring Boot Maven Plugin website. Copy the plugin config settings as highlighted below.

After adding these four items – the Spring Boot framework, Tomcat Embed Jasper, the Spring Boot Tomcat Starter and the Spring Boot Maven plugin – the pom.xml file should look like the sketch below.
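The original post showed the finished pom.xml as a screenshot. The sketch below is a minimal approximation, assuming the artifact name sbmartifact used later in this series; the group id, version numbers and Spring Boot release are placeholders and should match what you selected on the Spring Boot site.

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>sbmartifact</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>  <!-- changed from war to jar -->

  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.4.RELEASE</version>
  </parent>

  <properties>
    <java.version>1.8</java.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
      <groupId>org.apache.tomcat.embed</groupId>
      <artifactId>tomcat-embed-jasper</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>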

Now let's start writing some code 🙂 Add an Application class which will bootstrap the application. Ensure that the checkbox for creating public static void main is ticked.

The skeleton of the class should look something like the one below.

Now add the Spring-specific annotations and routes so that the class looks like the sketch below.

In this example, I am simply returning the HOST name as JSON from our Spring Boot API.
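The original post showed the class as a screenshot. Here is a minimal sketch of what such a class can look like, assuming the spring-boot-starter-web dependency from the pom.xml sketch above; the package, class and route names are illustrative, not the post's exact code.

package com.example;

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Collections;
import java.util.Map;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    // Returns the host name as a small JSON object, e.g. {"host":"MYMACHINE"}
    @GetMapping("/")
    public Map<String, String> host() throws UnknownHostException {
        return Collections.singletonMap("host", InetAddress.getLocalHost().getHostName());
    }
}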

Now, let's clean up the files and directories left over from earlier builds. Navigate to the path where the pom.xml file resides and issue the mvn clean command as shown below.

Now, let's build and package our fat JAR file. To do that, issue the mvn install command as shown below.

After successful completion, the following message should appear.

A fat JAR file should be created in the target folder as shown below.

 

Let's run the application with the following command.
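The screenshot is not reproduced here; assuming the artifactId and version from the pom.xml sketch above, this is the standard fat JAR launch:

java -jar target/sbmartifact-0.0.1-SNAPSHOT.jar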

When it starts successfully, note down the port number on which it has started. In my case, it is port 8080 as shown below.

Navigate to localhost:<portno> and verify that the application is running.

We now have a fat JAR Spring Boot application running locally.

In the next post, we'll see how we can use Service Fabric to run this application.

Running Spring Boot Java application as a Microservice in Service Fabric – Part 2


The previous post covered creating a fat JAR application and running it locally.

In this post we'll look at how we can run this JAR application as a microservice in Service Fabric. The Service Fabric Plugin for Eclipse makes it very easy to deploy from Eclipse on Linux/Mac platforms, as described here. If you are using Windows as a development/deployment platform, you can use Visual Studio 2017 with the Service Fabric SDK. In this post, I'll use Visual Studio 2017.

Start Visual Studio and click File–>New. Select “Cloud” under Templates and “Service Fabric Application” on the right-hand side pane.

In the next dialog, select “Guest Executable” as the service template, the folder location containing the JAR file as the “Code Package Folder”, and the name of the service in “Service Name”.

Visual Studio creates a solution structure as shown below.

Now, let's add the JAR file to the “Code” folder shown above. To do so, right-click it, select “Add an existing Item”, navigate to the folder containing the JAR file and select it.

Along with the JAR, we also need to package the runtime that runs this JAR file – typically the JRE. It generally resides in the JDK installation folder (C:\Program Files\Java\jdk1.8.0_131).

Simply copy this folder and paste it into the “Code” folder in VS.

After copying, the solution structure in VS should look like below.

Now, navigate to the “ServiceManifest.xml” file in the Service Fabric solution. Change the following parameters as shown in the sketch after this list.

  1. Program: This should point to the java.exe file in the JRE folder that was copied.
  2. Arguments: This should contain -jar and the path of the JAR file relative to java.exe. These are the arguments passed to java.exe when it starts.
  3. WorkingFolder: This should be CodeBase.
  4. Endpoint: Name an endpoint and provide the protocol as HTTP and the port number (8080) along with the type (Input).
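The original post showed the edited manifest as a screenshot. Below is a minimal sketch of the relevant fragment, assuming the JRE folder, JAR and endpoint names used in this series; your generated names and versions will differ.

<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ExeHost>
      <Program>jre1.8.0_131\bin\java.exe</Program>
      <Arguments>-jar sbmartifact-0.0.1-SNAPSHOT.jar</Arguments>
      <WorkingFolder>CodeBase</WorkingFolder>
    </ExeHost>
  </EntryPoint>
</CodePackage>
<Resources>
  <Endpoints>
    <Endpoint Name="JarApiSFEndpoint" Protocol="http" Port="8080" Type="Input" />
  </Endpoints>
</Resources>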

Next, open the “ApplicationManifest.xml” file and verify its settings as shown below. Pay attention to the name and type of the service; they should match between the ServiceManifest and ApplicationManifest files.

Click the “Start” button in Visual Studio and wait for a successful deployment to the local SF cluster.

Navigate to the local Service Fabric cluster explorer at http://localhost:19080/Explorer/index.html#/ and validate that the application is deployed and running successfully.

The application is now running on the Service Fabric cluster. Verify it by navigating to localhost:8080; it should return the same response as before.

At this point, we could deploy this application to Azure directly. Instead, let's see how we can implement a CI/CD pipeline to deploy it to Azure in the next post.

Running Spring Boot Java application as a Microservice in Service Fabric – Part 3


Let's do a quick summary of the previous two posts.

Part 1: Covered creating a Spring Boot application using Eclipse.

Part 2: Covered deploying this application to Service Fabric on a local cluster.

In this post, we'll take a look at how we can use Visual Studio Team Services (VSTS) to implement a CI/CD pipeline that deploys to Azure. The project I was working on has been pushed to GitHub, and I'll use it as the source control repository for the CI/CD process.

Ensure that you copy the JRE folder and the following Service Fabric specific files into the Spring Boot application, keeping them at the same location as pom.xml.

  1. JRE: Folder that we earlier copied inside Code folder in Service Fabric solution.
  2. ApplicationManifest.xml
  3. ApplicationParameters.xml
  4. PublishProfile.xml
  5. ServiceManifest.xml

Let's start with building the CI/CD pipeline. Navigate to the “Build” menu on the VSTS home page and click the “New” button.

Select “Empty” as a template.

Select “GitHub” as the source control repository. You can also choose from the other options shown. Select the repository and branch name.

Next, add a Maven build task.

Select the Maven version, the path to the pom.xml file and the goal (package).

Next, add a “Copy Files” task to copy the JAR file generated by the previous Maven build step and configure it as shown below. We will recreate the Service Fabric VS solution structure using the CI/CD pipeline.

Note that the source folder is the location denoted by the variable $(build.sourcesdirectory); the Maven build step generates the JAR file in this source directory. Specify **/sbmartifact.jar in the “Contents” box, which is the JAR file name as defined in the pom.xml file. Finally, copy this JAR file to the staging directory denoted by the build variable $(build.artifactstagingdirectory). This is where we start to construct the Service Fabric VS solution structure; the folder structure specified here is Root/Package/JarApiSFAppPkg/Code.

Next, add a task to copy the JRE folder.

Add a task to copy the ApplicationManifest.xml file. Note the target folder location: it is at the root of the Package folder.

Add a task to copy the ServiceManifest.xml file. Note the target folder location: it is one level below the Package root.

Now copy the ApplicationParameters.xml and PublishProfile.xml files.

Finally, add a “Publish Build Artifact” task and publish the Root folder that was constructed by the earlier tasks.

After this build finishes its run, take a look at the “Artifact Explorer”. It should have all files and directories arranged as shown below.

We now have all the build artifacts ready for deployment to Azure. Let's see how we can deploy this application to Azure. There are two prerequisites before we can get started with deployment.

  1. A Service Fabric cluster already deployed in Azure. See these instructions for setting this up.
  2. Azure Service Fabric Endpoint in VSTS. See these instructions for setting this up.

Once the above prerequisites are fulfilled, let's go back to our build definition. Click Build & Release –> Releases –> + Create release definition.

Select “Azure Service Fabric Deployment” as a template.

Select the “Source (Build Definition)” we just created from the drop-down. Ensure the “Continuous deployment” checkbox is ticked.

Click the “Create” button and a screen as shown below should be presented.

 

Populate values as shown below.

  1. Application Package: This value should be the Root/Package folder.
  2. Cluster Connection: This value should be selected from the drop-down. It is populated by the Azure Service Fabric endpoint set up as a prerequisite.
  3. Publish Profile: Used to connect to the Azure-based SF cluster. An example is shown below.
  4. Application Parameters: Referenced from the publish profile. Typically contains the instance count setting for the service.

Now, deploy this release and verify that all deployment steps complete successfully.

Navigate to the Azure Service Fabric Explorer and verify that the application has been deployed successfully and the cluster is healthy.

Given that this application runs on port 8080, a load-balancing rule needs to be added to the Service Fabric cluster in Azure as shown below.

Once the load-balancing rule is added, verify that the application is working as expected by navigating to the Service Fabric cluster URL.

There it is – a Spring Boot fat JAR application running as a microservice in Service Fabric in Azure, deployed through a CI/CD pipeline in VSTS.

But there's more! In the next part, I'll show how to containerize this application and run it in Service Fabric as a container.

Running Spring Boot Java application as a Microservice in Service Fabric – Part 4


This series has so far covered how Service Fabric can run a Spring Boot application as a JAR file, using a java.exe that is bundled along with the application. This post will cover a more elegant approach – using containers instead of shipping a full runtime.

Service Fabric already supports running Windows containers on a Windows cluster and Docker/Linux containers on a Linux cluster. With Hyper-V container support on the roadmap, Service Fabric will be able to provide better isolation for multi-tenant applications where customers prefer to share nothing.

Coming back to the Spring Boot application, it can be containerized as either a Linux container or a Windows container with just a single line change, and I'll cover how shortly.

Let's start with a Windows container.

To containerize the Spring Boot application, add a Dockerfile at the same location as the pom.xml file. Let's look at its content (a sketch follows the list below).

  • The 1st line is commented out, denoted by #. It acts as a switch between a Linux and a Windows container: move the comment to the other FROM line and a Linux container image will be generated instead.
  • The 2nd line is the base image on top of which the new application image will be created. In this example, the base image is openjdk for Windows Server Core, available on Docker Hub.
  • The 3rd line exposes port 8080 on the container.
  • The 4th line adds the JAR file from the target folder in the source directory to the container.
  • The 5th line configures java to run the added JAR file using the -jar switch.
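A sketch of such a Dockerfile, assuming the openjdk base image tags on Docker Hub and the JAR name used earlier in this series (your tag and file names may differ):

# FROM openjdk:8-jre    <- uncomment (and comment the next line) to build a Linux image instead
FROM openjdk:8-jdk-windowsservercore
EXPOSE 8080
ADD target/sbmartifact-0.0.1-SNAPSHOT.jar sbmartifact.jar
ENTRYPOINT ["java", "-jar", "sbmartifact.jar"]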

In Docker for Windows, switch to Windows container mode as shown below.

Navigate to the folder containing the Dockerfile and issue the following command.
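A sketch of the command; maksh/sbmappimagewin is the image name used in the rest of the post:

docker build -t maksh/sbmappimagewin .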

This command uses the Dockerfile described above (the . at the end points the build at the current folder) and creates a container image called maksh/sbmappimagewin.

Once this command completes successfully (it may take some time due to the large base image!), run the following command to verify that the new image has been created.
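For example, listing the repository by name:

docker images maksh/sbmappimagewin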

Now run a container from this image using the following command.
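A sketch of the run command, mapping host port 8080 to container port 8080 (the switches are explained next):

docker run -d -p 8080:8080 maksh/sbmappimagewin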

The -d switch runs the container in detached mode, and -p maps a host port to a container port. A new container with an Id beginning with 9d has been successfully created. Let's extract its IP address by executing the following command.
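A sketch of the command, where 9d stands for the first characters of your own container Id:

docker exec 9d ipconfig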

The command above executes ipconfig on the container identified by 9d (the first two characters, which uniquely identify this container). Copy the IP address and navigate to the IP:PortNo combination.

The Spring Boot application is now running as a Windows container locally. The next step is to push it to Docker Hub so that it can be used for deployment anywhere we want. Use the docker login and push commands as illustrated below.
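A sketch of the commands, using the image name tagged earlier (log in with your own Docker Hub credentials):

docker login
docker push maksh/sbmappimagewin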

The local image (maksh/sbmappimagewin) should be available in Docker Hub after the push operation completes successfully.

Now this container image can be used to deploy the application on Service Fabric. Open the Service Fabric VS solution we created in part 2.

Delete the “Code” folder from the solution.

Comment out the ExeHost section and add a new ContainerHost section in the ServiceManifest.xml file as shown below.
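The screenshot is not reproduced here; a minimal sketch of the modified entry point, assuming the image name pushed above:

<EntryPoint>
  <!--
  <ExeHost>
    ...
  </ExeHost>
  -->
  <ContainerHost>
    <ImageName>maksh/sbmappimagewin</ImageName>
  </ContainerHost>
</EntryPoint>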

Add the additional section shown below to the ApplicationManifest.xml file. The repository credentials are the same as the ones used to connect to Docker Hub and push the container image. Ensure that EndpointRef refers to the endpoint name defined in the ServiceManifest.xml file.
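A minimal sketch of that section, assuming the package and endpoint names used in the earlier manifest sketches and placeholder Docker Hub credentials:

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="JarApiSFAppPkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <RepositoryCredentials AccountName="your-dockerhub-account" Password="your-password" PasswordEncrypted="false" />
      <PortBinding ContainerPort="8080" EndpointRef="JarApiSFEndpoint" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>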

Next, select the project in Solution Explorer, right-click and select “Publish”.

In the dialog box that appears, select Cloud.xml as the target profile and a connection endpoint pointing to the Azure Service Fabric cluster.

Verify that the application is deployed successfully by navigating to the Azure Service Fabric Explorer.

Finally, verify the application itself as well 🙂

There it is again – a Spring Boot application running as a container in Service Fabric.

Running Spring Boot Java application as a Microservice in Service Fabric


The Spring Boot framework is one of the most popular frameworks for authoring RESTful APIs in Java. It is very commonly used along with Angular and Kafka to create distributed applications. In this series of posts, I'll cover how a Spring Boot application can be deployed and run as a microservice using Azure Service Fabric. Below is a table of contents describing each post and what it covers.

  1. Part 1: Creating a Spring Boot application and getting it to run locally.
  2. Part 2: Getting this application deployed and running in local Service Fabric cluster.
  3. Part 3: Deploying this application to Azure Service Fabric cluster using CI/CD pipeline defined in VSTS.
  4. Part 4: Containerizing this application and getting it running as a container in Azure Service Fabric.

Source code for the application is available at the following GitHub locations.

  1. Spring Boot Application (Maven)
  2. Spring Boot Application (Service Fabric)

 

 

 

PDW Performance: Investigate Inconsistent Query Execution Times


This post applies to both APS and Azure SQL DW

 

There is a reasonable expectation that if a query is executed in a controlled environment multiple times it will have minimal variance in total execution time when no changes are made to the system, data or query.  When a variance is experienced it should be investigated to find the cause.  In my experience with PDW a variance of less than 5% is not necessarily an indication of an underlying issue unless there are other contributing factors to make one believe so.

Any experienced variance could be due to changes to the data, changes to the query, or concurrent workload.  It's best to start with the basics.  First confirm the query itself is indeed identical to previous runs.  The slightest change, such as adding or removing columns in the select list or a slight change in a predicate, can have a significant impact on query performance.

Next, I like to look at the two extremes, as that makes it easiest to find the differences.  Pick the fastest and slowest executions you can find.  First check to make sure the query was not suspended waiting on any resource locks.  Details can be found in this post.

If the two executions had similar wait times for concurrency and object locks, next you need to drill into the execution of the query.

Compare the overall execution for different runs of the same query:

SELECT fast.step_index,
       fast.operation_type,
       fast.total_elapsed_time                            AS fast_total,
       slow.total_elapsed_time                            AS slow_total,
       slow.total_elapsed_time - fast.total_elapsed_time  AS time_delta,
       fast.row_count                                     AS fast_row_count,
       slow.row_count                                     AS slow_row_count,
       slow.row_count - fast.row_count                    AS row_count_delta
FROM   (SELECT *
        FROM   sys.dm_pdw_request_steps
        WHERE  request_id = '<fast_request_id>') fast
       INNER JOIN (SELECT *
                   FROM   sys.dm_pdw_request_steps
                   WHERE  request_id = '<slow_request_id>') slow
         ON fast.step_index = slow.step_index
ORDER  BY step_index ASC

 

The operation list should be identical between two identical queries.  Pick the operation with the most variance and compare it between the two request_ids.  The type of operation determines which DMV will have the appropriate data:

 Data movement step:

 

SELECT fast.pdw_node_id,
       fast.distribution_id,
       fast.type,
       fast.total_elapsed_time                            AS fast_time_total,
       slow.total_elapsed_time                            AS slow_time_total,
       slow.total_elapsed_time - fast.total_elapsed_time  AS time_delta,
       slow.rows_processed - fast.rows_processed          AS row_count_delta,
       slow.cpu_time - fast.cpu_time                      AS cpu_time_delta,
       slow.query_time - fast.query_time                  AS query_time_delta,
       slow.bytes_processed - fast.bytes_processed        AS bytes_processed_delta
FROM   (SELECT *
        FROM   sys.dm_pdw_dms_workers
        WHERE  request_id = '<fast_request_id>'
        AND    step_index = <step_index>) fast
       INNER JOIN (SELECT *
                   FROM   sys.dm_pdw_dms_workers
                   WHERE  request_id = '<slow_request_id>'
                   AND    step_index = <step_index>) slow
         ON fast.pdw_node_id = slow.pdw_node_id
        AND fast.distribution_id = slow.distribution_id
        AND fast.type = slow.type

 

 

Other operation:

SELECT fast.step_index,
       fast.pdw_node_id,
       fast.distribution_id,
       fast.total_elapsed_time                            AS fast_time,
       slow.total_elapsed_time                            AS slow_time,
       slow.total_elapsed_time - fast.total_elapsed_time  AS time_delta
FROM   (SELECT *
        FROM   sys.dm_pdw_sql_requests
        WHERE  request_id = '<fast_request_id>') fast
       INNER JOIN (SELECT *
                   FROM   sys.dm_pdw_sql_requests
                   WHERE  request_id = '<slow_request_id>') slow
         ON fast.distribution_id = slow.distribution_id

 

 

Pay attention to start and end times for each distribution as well as rows and bytes processed to look for any discrepancies.  Concurrency can significantly impact performance by creating a bottleneck on a system resource but will not be evident in these results.  It is always best to baseline a query with no concurrent executions if possible. 

 

 

The goal of this article is not to identify a specific reason query execution times vary, but rather to show how to access the data detailing the execution and identify areas that deserve a closer look.  Once an anomaly is identified, further investigation may be needed.

 

Build your own Web API protected by Azure AD v2.0 endpoint with custom scopes


* This post is about the Azure AD v2.0 endpoint. If you're using v1, please see “Build your own api with Azure AD” (written in Japanese).

You can now build your own Web API protected by the OAuth flow, and you can add your own scopes with the Azure AD v2.0 endpoint (also with Azure AD B2C).
Here I show you how to set things up, how to build, and what to consider when working with custom scopes in the v2.0 endpoint. (You can also learn several OAuth scenarios and ideas through this post.)

Note that a Microsoft account cannot currently be used for the following scenarios with custom (user-defined) scopes, so please follow the next steps with your organization account (Azure AD account).

Register your own Web API

First we register our custom Web API in v2.0 endpoint, and consent this app in the tenant.

Please go to Application Registration Portal, and start to register your own Web API by pressing [Add an app] button. In the application settings, click [Add Platform] and select [Web API].

In the added platform pane, you can see the following generated scope (access_as_user) by default.

This scope is used as follows.
For example, when you create a client app that accesses this custom Web API via OAuth, the client requests permission to call the Web API by accessing the following uri with the scope value.

https://login.microsoftonline.com/common/oauth2/v2.0/authorize
  ?response_type=id_token+code
  &response_mode=form_post
  &client_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  &scope=openid+api%3a%2f%2f8a9c6678-7194-43b0-9409-a3a10c3a9800%2faccess_as_user
  &redirect_uri=https%3A%2F%2Flocalhost%2Ftest
  &nonce=abcdef

Now let’s change this default scope, and define the new read and write scopes as follows here. (We assume that the scopes are api://8a9c6678-7194-43b0-9409-a3a10c3a9800/read and api://8a9c6678-7194-43b0-9409-a3a10c3a9800/write.)

Next we must also add a “Web” platform (not a “Web API” platform), because the user needs to consent to this api application before using these custom scopes.

For example, consider “Office 365”. Organizations or users who haven't purchased (subscribed to) Office 365 cannot use the Office 365 API permissions. (No Office 365 API permissions are displayed in their Azure AD settings.) After you purchase Office 365 at https://portal.office.com/, you can start to use these API permissions.
Your custom api is the same: before the custom scopes can be used, the user has to bring this custom application into the tenant or their individual account.

When a user accesses the following url in a web browser and logs in with their credentials, the following consent UI will be displayed. Once the user approves this consent, this custom Web API application is registered in the user's individual permissions. (Note that the client_id is the application id of this custom Web API application, and the redirect_uri is the redirect url of the “Web” platform in your custom Web API application. Please change these values to match your application settings.)

https://login.microsoftonline.com/common/oauth2/v2.0/authorize
  ?response_type=id_token
  &response_mode=form_post
  &client_id=8a9c6678-7194-43b0-9409-a3a10c3a9800
  &scope=openid
  &redirect_uri=https%3A%2F%2Flocalhost%2Ftestapi
  &nonce=abcdef

Note : You can revoke the permission with https://account.activedirectory.windowsazure.com/, when you are using the organization account (Azure AD Account). It’s https://account.live.com/consent/Manage, when you’re using the consumer account (Microsoft Account).

Use the custom scope in your client application

After the user has consented to the custom Web API application, the user can use the custom scopes (api://.../read and api://.../write in this example) in the user's client application. (In this post, we use the OAuth code grant flow with a web client application.)

First let's register the new client application in the Application Registration Portal with the user account that consented to your Web API application. In this post, we create a “Web” platform for this client application (i.e., a web client application).

The application password (client secret) must also be generated as follows in the application settings.

Now let’s consume the custom scope (of custom Web API) with this generated web client.
Access the following url with your web browser. (As you can see, the requesting scope is the previously registered custom scope api://8a9c6678-7194-43b0-9409-a3a10c3a9800/read.)
Here client_id is the application id of the web client application (not custom Web API application), and redirect_uri is the redirect url of the web client application.

https://login.microsoftonline.com/common/oauth2/v2.0/authorize
  ?response_type=code
  &response_mode=query
  &client_id=b5b3a0e3-d85e-4b4f-98d6-e7483e49bffc
  &scope=api%3A%2F%2F8a9c6678-7194-43b0-9409-a3a10c3a9800%2Fread
  &redirect_uri=https%3a%2f%2flocalhost%2ftestwebclient

Note : In real production use, it's also better to retrieve the id token (i.e., response_type=id_token+code), since your client will have to validate the returned token and check whether the user has logged in correctly.
This sample skips these more involved steps for the sake of clarity.

When you access this url, the following login page will be displayed.

After the login succeeds with the user’s credential, the following consent is displayed.
As you can see, this shows that the client will use the permission of “Read test service data” (custom permission), which is the previously registered scope permission (api://8a9c6678-7194-43b0-9409-a3a10c3a9800/read).

After you approve this consent, the code will be returned into your redirect url as follows.

https://localhost/testwebclient?code=OAQABAAIAA...

Next, using the code value, you can request an access token for the requested resource (custom scope) with the following HTTP request.
The client_id and client_secret here are the application id and application password of the web client application.

HTTP Request

POST https://login.microsoftonline.com/common/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code
&code=OAQABAAIAA...
&client_id=b5b3a0e3-d85e-4b4f-98d6-e7483e49bffc
&client_secret=pmC...
&scope=api%3A%2F%2F8a9c6678-7194-43b0-9409-a3a10c3a9800%2Fread
&redirect_uri=https%3A%2F%2Flocalhost%2Ftestwebclient

HTTP Response

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "token_type": "Bearer",
  "scope": "api://8a9c6678-7194-43b0-9409-a3a10c3a9800/read",
  "expires_in": 3599,
  "ext_expires_in": 0,
  "access_token": "eyJ0eXAiOi..."
}

Note : If you want to get refresh token, you must add “offline_access” to the scopes.

Using the returned access token (access_token property), you can call your custom Web API as follows and the API can verify the passed token. (Later I show you how to verify this token in your custom Web API.)

GET https://localhost/testapi
Authorization: Bearer eyJ0eXAiOi...

Verify access token in your Web API

Now it's your custom Web API's turn.

How do you check whether the access token is valid? How do you get the logged-in user's claims?

First you must remember that v2.0 endpoint returns the following token format.

                                  id token    access token
organization account (Azure AD)   JWT         JWT
consumer account (MSA)            JWT         Compact Tickets

As you can see in the table above, the passed access token is in the IETF JWT (JSON Web Token) format, shown below, if you are using an Azure AD account (organization account).

  • A JWT consists of 3 string segments delimited by the dot (.) character.
  • Each segment is base64 url encoded (as defined in RFC 4648).
  • The 3 segments contain:
    header/certificate information (ex: the type of key, key id, etc), claim information (ex: user name, tenant id, token expiration, etc), and the digital signature (byte code).

For example, the following is a PHP example of decoding the access token. (The C# sample is here.)
This code returns the 2nd delimited segment (i.e., the claims information) as the result.

<?php
echo "The result is " . token_test("eyJ0eXAiOi...");

// return claims
function token_test($token) {
  $res = 0;

  // 1 create array from token separated by dot (.)
  $token_arr = explode('.', $token);
  $header_enc = $token_arr[0];
  $claim_enc = $token_arr[1];
  $sig_enc = $token_arr[2];

  // 2 base 64 url decoding
  $header = base64_url_decode($header_enc);
  $claim = base64_url_decode($claim_enc);
  $sig = base64_url_decode($sig_enc);

  return $claim;
}

function base64_url_decode($arg) {
  $res = $arg;
  $res = str_replace('-', '+', $res);
  $res = str_replace('_', '/', $res);
  switch (strlen($res) % 4) {
    case 0:
      break;
    case 2:
      $res .= "==";
      break;
    case 3:
      $res .= "=";
      break;
    default:
      break;
  }
  $res = base64_decode($res);
  return $res;
}
?>

The result (claim information) is the json string as follows.

{
  "aud": "8a9c6678-7194-43b0-9409-a3a10c3a9800",
  "iss": "https://login.microsoftonline.com/3bc5ea6c-9286-4ca9-8c1a-1b2c4f013f15/v2.0",
  "iat": 1498037743,
  "nbf": 1498037743,
  "exp": 1498041643,
  "aio": "ATQAy/8DAA...",
  "azp": "b5b3a0e3-d85e-4b4f-98d6-e7483e49bffc",
  "azpacr": "1",
  "name": "Christie Cline",
  "oid": "fb0d1227-1553-4d71-a04f-da6507ae0d85",
  "preferred_username": "ChristieC@MOD776816.onmicrosoft.com",
  "scp": "read",
  "sub": "Pcz_ssYLnD...",
  "tid": "3bc5ea6c-9286-4ca9-8c1a-1b2c4f013f15",
  "ver": "2.0"
}

The aud claim is the application id of the target web api (here, our custom Web API), nbf (= not before) is the time from which the token is valid, exp is the token's expiration time, tid is the tenant id of the logged-in user, and scp lists the granted scopes.
With these claim values, you can check if the token is valid.

Here I show you the PHP sample code for checking these claims.

<?php
echo "The result is " . token_test("eyJ0eXAiOi...");

// return 1, if token is valid
// return 0, if token is invalid
function token_test($token) {
  // 1 create array from token separated by dot (.)
  $token_arr = explode('.', $token);
  $header_enc = $token_arr[0];
  $claim_enc = $token_arr[1];
  $sig_enc = $token_arr[2];

  // 2 base 64 url decoding
  $header =
    json_decode(base64_url_decode($header_enc), TRUE);
  $claim =
    json_decode(base64_url_decode($claim_enc), TRUE);
  $sig = base64_url_decode($sig_enc);

  // 3 expiration check
  $dtnow = time();
  if($dtnow <= $claim['nbf'] or $dtnow >= $claim['exp'])
    return 0;

  // 4 audience check
  if (strcmp($claim['aud'], '8a9c6678-7194-43b0-9409-a3a10c3a9800') !== 0)
    return 0;

  // 5 scope check
  if (strcmp($claim['scp'], 'read') !== 0)
    return 0;

  // other checks if needed (licensed tenant, etc)
  // Here, we skip these steps ...

  return 1;
}

function base64_url_decode($arg) {
  $res = $arg;
  $res = str_replace('-', '+', $res);
  $res = str_replace('_', '/', $res);
  switch (strlen($res) % 4) {
    case 0:
      break;
    case 2:
      $res .= "==";
      break;
    case 3:
      $res .= "=";
      break;
    default:
      break;
  }
  $res = base64_decode($res);
  return $res;
}
?>

But it's not complete!

Now let's consider: what if a malicious party has changed this token? For example, if you are a developer, you can easily change the returned token string with Fiddler or other developer tools, and you might be able to log in to critical corporate applications with another user's credentials.

This is where the digital signature (the third segment in the access token string) protects against this kind of attack.

The digital signature is generated using the private key of the Microsoft identity provider (Azure AD, etc), and you can verify it using the public key, which everyone can access. This digital signature is computed over the {1st delimited segment}.{2nd delimited segment} string.
That is, if you change the claims (the 2nd segment) in the access token, the digital signature must also be regenerated, and only the Microsoft identity provider can create that signature. (A malicious user cannot.)

So all you have to do is check whether this digital signature is valid using the public key. Let's see how to do that.

First you can get the public key from https://{issuer url}/.well-known/openid-configuration. (The issuer url is equal to the “iss” value in the claim.) In this case, you can get from the following url.

GET https://login.microsoftonline.com/3bc5ea6c-9286-4ca9-8c1a-1b2c4f013f15/v2.0/.well-known/openid-configuration
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "authorization_endpoint": "https://login.microsoftonline.com/3bc5ea6c-9286-4ca9-8c1a-1b2c4f013f15/oauth2/v2.0/authorize",
  "token_endpoint": "https://login.microsoftonline.com/3bc5ea6c-9286-4ca9-8c1a-1b2c4f013f15/oauth2/v2.0/token",
  "token_endpoint_auth_methods_supported": [
    "client_secret_post",
    "private_key_jwt"
  ],
  "jwks_uri": "https://login.microsoftonline.com/3bc5ea6c-9286-4ca9-8c1a-1b2c4f013f15/discovery/v2.0/keys",
  "response_modes_supported": [
    "query",
    "fragment",
    "form_post"
  ],
  "subject_types_supported": [
    "pairwise"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ],
  "http_logout_supported": true,
  "frontchannel_logout_supported": true,
  "end_session_endpoint": "https://login.microsoftonline.com/3bc5ea6c-9286-4ca9-8c1a-1b2c4f013f15/oauth2/v2.0/logout",
  "response_types_supported": [
    "code",
    "id_token",
    "code id_token",
    "id_token token"
  ],
  "scopes_supported": [
    "openid",
    "profile",
    "email",
    "offline_access"
  ],
  "issuer": "https://login.microsoftonline.com/3bc5ea6c-9286-4ca9-8c1a-1b2c4f013f15/v2.0",
  "claims_supported": [
    "sub",
    "iss",
    "cloud_instance_name",
    "cloud_graph_host_name",
    "aud",
    "exp",
    "iat",
    "auth_time",
    "acr",
    "nonce",
    "preferred_username",
    "name",
    "tid",
    "ver",
    "at_hash",
    "c_hash",
    "email"
  ],
  "request_uri_parameter_supported": false,
  "tenant_region_scope": "NA",
  "cloud_instance_name": "microsoftonline.com",
  "cloud_graph_host_name": "graph.windows.net"
}

Next, access the location in the “jwks_uri” property (see above) to get the public key list. Finally, you can find the appropriate key by matching the “kid” (key id).

Here is the complete PHP code.

<?php
echo "The result is " . token_test("eyJ0eXAiOi...");

// return 1, if token is valid
// return 0, if token is invalid
function token_test($token) {
  // 1 create array from token separated by dot (.)
  $token_arr = explode('.', $token);
  $header_enc = $token_arr[0];
  $claim_enc = $token_arr[1];
  $sig_enc = $token_arr[2];

  // 2 base 64 url decoding
  $header =
    json_decode(base64_url_decode($header_enc), TRUE);
  $claim =
    json_decode(base64_url_decode($claim_enc), TRUE);
  $sig = base64_url_decode($sig_enc);

  // 3 period check
  $dtnow = time();
  if($dtnow <= $claim['nbf'] or $dtnow >= $claim['exp'])
    return 0;

  // 4 audience check
  if (strcmp($claim['aud'], '8a9c6678-7194-43b0-9409-a3a10c3a9800') !== 0)
    return 0;

  // 5 scope check
  if (strcmp($claim['scp'], 'read') !== 0)
    return 0;

  // other checks if needed (licensed tenant, etc)
  // Here, we skip these steps ...

  //
  // 6 check signature
  //

  // 6-a get key list
  $keylist =
    file_get_contents('https://login.microsoftonline.com/3bc5ea6c-9286-4ca9-8c1a-1b2c4f013f15/discovery/v2.0/keys');
  $keylist_arr = json_decode($keylist, TRUE);
  foreach($keylist_arr['keys'] as $key => $value) {

    // 6-b select one key
    if($value['kid'] == $header['kid']) {

      // 6-c get public key from key info
      $cert_txt = '-----BEGIN CERTIFICATE-----' . "\n" . chunk_split($value['x5c'][0], 64) . '-----END CERTIFICATE-----';
      $cert_obj = openssl_x509_read($cert_txt);
      $pkey_obj = openssl_pkey_get_public($cert_obj);
      $pkey_arr = openssl_pkey_get_details($pkey_obj);
      $pkey_txt = $pkey_arr['key'];

      // 6-d validate signature
      $token_valid =
        openssl_verify($header_enc . '.' . $claim_enc, $sig, $pkey_txt, OPENSSL_ALGO_SHA256);
      if($token_valid == 1)
        return 1;
      else
        return 0;
    }
  }

  return 0;
}

function base64_url_decode($arg) {
  $res = $arg;
  $res = str_replace('-', '+', $res);
  $res = str_replace('_', '/', $res);
  switch (strlen($res) % 4) {
    case 0:
      break;
    case 2:
      $res .= "==";
      break;
    case 3:
      $res .= "=";
      break;
    default:
      break;
  }
  $res = base64_decode($res);
  return $res;
}
?>

Calling another service in turn (OAuth – On-Behalf-Of)

As you can see above, the access token is issued for one specific api (the “aud” claim) and you cannot reuse the token for another api.
What if your custom Web API needs to call another api (for example, the Microsoft Graph API)?

In such a case, your api can exchange the token for another one with the OAuth on-behalf-of flow, as follows. There is no need to display the login UI again.
In this example, our custom Web API will connect to the Microsoft Graph API and get the e-mail messages of the logged-in user.

Note : A while ago I explained this on-behalf-of flow with the Azure AD v1 endpoint in a blog post, but here I will explain it with the v2.0 endpoint, because it's a little tricky …

First, as the official document says (see here), you need to use the tenant-aware endpoint when you use the on-behalf-of flow with the v2.0 endpoint. That is, administrator consent (admin consent) is needed for the on-behalf-of flow. (In this case, the user consent for the custom Web API performed in the previous section of this post is not needed.)

Before proceeding with the admin consent, you must add the delegated permission for your custom Web API in the Application Registration Portal. In this example, we add the Mail.Read permission as follows. (When you use admin consent, you cannot add scopes on the fly; you must set the permissions beforehand.)

Next, the administrator of the user tenant must access the following url in a web browser to grant administrator consent.
Note that xxxxx.onmicrosoft.com can also be the tenant id (the Guid returned as “tid” in the previous claims). 8a9c6678-7194-43b0-9409-a3a10c3a9800 is the application id of the custom Web API and https://localhost/testapi is the redirect url of the custom Web API.

https://login.microsoftonline.com/xxxxx.onmicrosoft.com/adminconsent
  ?client_id=8a9c6678-7194-43b0-9409-a3a10c3a9800
  &state=12345
  &redirect_uri=https%3A%2F%2Flocalhost%2Ftestapi

After logging in as the tenant administrator, the following consent is displayed. When the administrator approves this consent, your custom Web API is registered in the tenant. As a result, all users in this tenant can use this custom Web API and its custom scopes.

Note : You can revoke the admin-consented application in your tenant with Azure Portal. (Of course, the administrator privilege is needed for this operation.)

Now you are ready for the OAuth on-behalf-of flow in the v2.0 endpoint!

First, the user (a non-administrator) gets the access token for the custom Web API and calls the custom Web API with this access token. This flow is the same as above, so I skip the steps here.

Then the custom Web API can send the following HTTP POST to the Azure AD v2.0 endpoint using the passed access token. (Note that eyJ0eXAiOi... is the access token passed to this custom Web API, 8a9c6678-7194-43b0-9409-a3a10c3a9800 is the application id of your custom Web API, and itS... is the application password of your custom Web API.)
This POST requests a new access token for https://graph.microsoft.com/mail.read (a pre-defined scope).

POST https://login.microsoftonline.com/xxxxx.onmicrosoft.com/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer
&assertion=eyJ0eXAiOi...
&requested_token_use=on_behalf_of
&scope=https%3A%2F%2Fgraph.microsoft.com%2Fmail.read
&client_id=8a9c6678-7194-43b0-9409-a3a10c3a9800
&client_secret=itS...

The following is the HTTP response for this on-behalf-of request.
The returned access token has the scope for Mail.Read (https://graph.microsoft.com/mail.read), and it is not an application token but a user token for the logged-in user. (You can parse and decode this access token as described above.)

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "token_type": "Bearer",
  "scope": "https://graph.microsoft.com/Mail.Read https://graph.microsoft.com/User.Read",
  "expires_in": 3511,
  "ext_expires_in": 0,
  "access_token": "eyJ0eXAiOi..."
}

Finally, when your custom Web API connects to Microsoft Graph endpoint with this access token, the user’s e-mail messages will be returned to your custom Web API.

GET https://graph.microsoft.com/v1.0/me/messages
  ?$orderby=receivedDateTime%20desc
  &$select=subject,receivedDateTime,from
  &$top=20
Accept: application/json
Authorization: Bearer eyJ0eXAiOi...

 

[Reference] App types for the Azure Active Directory v2.0 endpoint
https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-flows

 


4 steps to managing shadow IT


April 24, 2017 – Microsoft Secure Blog Staff – Microsoft

This post is a translation of “4 steps to managing shadow IT”.

Shadow IT keeps growing: more than 80 percent of employees report using apps that have not been approved by the IT department. Shadow IT includes any unapproved hardware or software, but SaaS is the main driver of its rapid expansion. Trying to block shadow IT is an outdated approach that no longer works, because employees simply find ways around IT controls.

So how do you empower employees while maintaining visibility and protection? Here are four steps to help you manage SaaS apps and shadow IT.

Step 1: Understand how people are actually using the cloud

The first step is to get a detailed picture of how employees use the cloud. Which applications are in use? What data is being uploaded or downloaded? Who are the heaviest users? Are any of the apps particularly risky? These insights help you shape a strategy for cloud app use in your organization, and they can also reveal whether a particular account has been compromised or an employee is doing something they are not allowed to do.

Step 2: Control data through granular policies

Once you have overall visibility into the apps used in your organization, you can monitor user activity and implement custom policies tailored to your organization's security needs. Policies are well suited to alerting on unexpectedly high activity rates or restricting specific data types, so you can take action when a policy is violated – for example, taking a public link and making it private, or quarantining a user.

Step 3: Protect data at the file level

Protecting data at the file level is especially important when data is accessed by unknown applications. Data loss prevention (DLP) policies help keep employees from accidentally sending sensitive information – such as personally identifiable information (PII), credit card numbers, or financial results – outside the corporate network. A range of solutions is now available to make that work even easier.

Step 4: Use behavioral analytics to protect apps and data

Innovative threat detection technologies use machine learning and behavioral analytics to analyze how each user interacts with SaaS applications and to assess risk through deep analysis. This lets you identify anomalies that may indicate a data breach – for example, simultaneous logins from two countries, the sudden download of terabytes of data, or multiple failed login attempts that may signal a brute force attack.

Where to start

Consider a cloud access security broker (CASB). These solutions are designed to help you accomplish each of these steps in a simple, manageable way, giving you greater visibility, comprehensive control, and stronger protection over the cloud applications your employees use, whether sanctioned or unsanctioned.

To learn why the need for CASBs is growing, read Microsoft's new e-book. It explains common shadow IT challenges and why a CASB can be a useful tool in your enterprise security strategy.

Read the related blog post “Bring Shadow IT into the Light”.

 

How the GDPR is driving CISOs' agendas


May 9, 2017 – Microsoft Secure Blog Staff – Microsoft

This post is a translation of “How the GDPR is driving CISOs’ agendas”.

Author: Daniel Grabski – Executive Security Advisor, Enterprise Cybersecurity Group

As an Executive Security Advisor for the Central and Eastern Europe region, I engage every day with chief information security officers (CISOs) and learn about their thinking and concerns.

One very hot topic that comes up at every meeting, conference and seminar I attend is the EU General Data Protection Regulation (GDPR). Fundamentally, the GDPR is about protecting and enabling individuals' privacy rights. It establishes strict global privacy requirements governing how personal data is managed and protected – regardless of where it is sent, processed or stored – while respecting individual choice.

The GDPR is without doubt one of the biggest changes to European Union privacy law in recent years.

The GDPR is a complex regulation that may require significant changes from every company that:

  1. Is established in the EU
  2. Sells goods or services in the EU
  3. Monitors or processes the data of people located in the EU (regardless of where that processing or monitoring takes place)

GDPR requirements can touch not only the technology used inside an organization, but also the people and related processes that need to be in place to manage every stage.

Even after the GDPR takes effect on May 25, 2018, complying with the regulation will be an ongoing process.

To help with the questions I hear most frequently from CISOs, this article briefly answers the following:

  • What is Microsoft doing to prepare for GDPR compliance?
  • What can companies do right now?
  • What role do cloud providers play?
  • How can technology help with compliance?

What is Microsoft doing to prepare for GDPR compliance?

Under the GDPR, Microsoft plays many roles. For example, we act as a data controller when we provide consumer services and as a data processor when we provide online services to businesses. Beyond our role as a technology company, we are also a global enterprise with a worldwide workforce. In other words, Microsoft is going through the same journey as your organization, while at the same time innovating to make it easier for our customers to comply with the GDPR by May 2018. As Microsoft's Chief Privacy Officer Brendon Lynch said in a recent blog post, to make compliance easier for our customers Microsoft is working to ensure that its cloud services comply with the GDPR when it takes effect on May 25, 2018, and will share its experience of complying with complex regulations to help your organization determine the best steps toward meeting the GDPR's privacy requirements.

For Microsoft's GDPR compliance efforts and recommendations, see our website and the blog post “Get GDPR compliant with the Microsoft Cloud”. The website also offers a whitepaper explaining how Microsoft's enterprise products and cloud services can help you prepare for the GDPR.

From discussions with customers and partners, we know that many companies are keenly aware of the GDPR's requirements, but levels of awareness and readiness currently vary. Roughly one third have not yet started, one third are just beginning the process, and the remaining third are actively mapping GDPR requirements to their existing internal processes and technology stack.

The GDPR is the responsibility of every executive, not just the chief information security officer or the data privacy officer. It is important to consider not only the technology being applied, but also the various processes involved, and to align them with the latest regulatory requirements. Just as important, this is a topic every employee should be aware of, from the executive level to line-of-business staff. It is critical to provide company-wide awareness and training, to emphasize the importance of the GDPR and its impact on the company's operations, and to explain what happens if GDPR requirements are not met. GDPR compliance therefore covers the full range of people, process and technology alignment.

What can companies do right now?

We recommend starting your GDPR compliance effort by focusing on four key steps (see Figure 1 below):

  • Discover: Identify what personal data you have and where it resides. This is the foundation of sound risk management and is very important under the GDPR: only once data has been identified can it be protected and managed in accordance with GDPR requirements.
  • Manage: Fulfill data subject requests and govern how personal data is used and accessed. Ensure that data is used only for its intended purpose and is accessible only to the people who need access to it.
  • Protect: Establish security controls to prevent, detect and respond to vulnerabilities and data breaches. Protecting data appropriately throughout its lifecycle reduces the risk of a breach, and knowing whether and when a breach has occurred lets you keep the data protection authorities informed at all times.
  • Report: Report data breaches and keep the required documentation. Demonstrating that you manage data in the right way and handle data subject requests appropriately is central to compliance.

Figure 1: Four steps toward GDPR compliance

The whitepaper “Beginning your GDPR Journey” describes helpful steps and currently available technologies in more detail.

What role do cloud providers play?

This question comes up often when CISOs, faced with their own complex environments, try to understand the role cloud providers play in addressing GDPR requirements. The GDPR stipulates that the data processors used by a data controller must work toward GDPR compliance themselves while also supporting the controller's own compliance efforts. Microsoft is the first major cloud service provider to make this commitment; in other words, Microsoft is committed to meeting the GDPR's stringent security requirements.

Fundamentally, the GDPR is also about shared responsibility and trust. A cloud service provider needs a principled approach to privacy, security, compliance and transparency, like Microsoft's. “Trust” can be examined from many angles: how the provider protects its own infrastructure and its customers' infrastructure to manage cybersecurity risk, how it protects data, and what mechanisms and principles drive its approach and operations in this highly sensitive area.

Microsoft invests one billion dollars a year to protect against, detect and respond to security incidents internally, and on behalf of our customers and the millions of victims of cybercrime around the world. In November 2015, Microsoft announced the Microsoft Cyber Defense Operations Center (CDOC), a facility that brings together security experts from across the company to help protect against, detect and respond to cyber threats in real time. The CDOC's dedicated teams operate around the clock, every day of the year, and the center has direct access to thousands of security professionals, data analysts, data scientists, engineers, developers, program managers and operations specialists across Microsoft's global network, enabling security threats to be detected, responded to and resolved quickly.

Figure 2: Cyber Defense Operations Center (CDOC)

Microsoft openly shares how it protects its own and its customers' infrastructure; I encourage you to read more about the best practices used in the Cyber Defense Operations Center. The CDOC also harnesses the power of the cloud through the Microsoft Intelligent Security Graph (ISG).

Every day, Microsoft adds hundreds of gigabytes' worth of telemetry per second to the Security Graph. This anonymized data is collected from sources including:

  • Hundreds of global cloud services, both consumer and commercial
  • Data on the cyber threats faced by the more than one billion PCs we update each month through Windows Update
  • External data points gathered through large-scale investigations by Microsoft's Digital Crimes Unit and through partnerships with industry and law enforcement

To put this in perspective, the Security Graph includes data from the more than 300 billion authentications processed each month across our consumer and commercial services, and from the 200 billion emails analyzed each month for malware and malicious websites.

Figure 3

Imagine all of this data brought together in one place, and consider how the resulting insights can help predict and stop attacks and protect organizations. As shown in Figure 3, Microsoft analyzes feedback, malware, spam, authentications and attacks. For example, data from millions of Xbox Live devices shows how devices are being attacked, and applying that information helps strengthen protection for customers. Much of this intelligence is drawn from machine learning and analysis by data scientists and is used to gain a deeper understanding of the latest techniques used in cyberattacks.

Beyond the CDOC, the Digital Crimes Unit and the Intelligent Security Graph, Microsoft has also created a dedicated team of enterprise cybersecurity professionals whose goal is to help customers move to the cloud securely and protect their data. These are just a few examples of Microsoft's ongoing investments in cybersecurity, and they are critical to developing the products and services that support our customers' GDPR compliance.

How can technology help with compliance?

Fortunately, there are many technology solutions that can help with GDPR compliance. Two of my favorites are Microsoft Azure Information Protection (AIP) and Exchange Online Advanced Threat Protection (ATP). AIP helps ensure that data is identifiable and secure – a key GDPR requirement – regardless of where it is stored or how it is shared. With AIP you can start immediately on steps 1 and 2 above: classifying, labeling and protecting new and existing data, sharing data securely with people inside and outside your organization, tracking usage, and even revoking access remotely. This intuitive, easy-to-use and powerful solution also includes rich logging and reporting to monitor the distribution of data, and options to manage and control your encryption keys.

When you are ready to move on to step 3 of your GDPR compliance effort, Advanced Threat Protection (ATP) addresses the key GDPR requirement of protecting each user's personal data from security threats. Office 365 includes capabilities to protect data and to identify when a data breach has occurred. One of these is Exchange Online Protection Advanced Threat Protection (ATP), which helps protect email against new, sophisticated malware in real time. It also provides ways to create policies that prevent users from accessing malicious email attachments or malicious websites linked from emails. For example, the Safe Attachments feature can stop malicious attachments from affecting your messaging environment even when their signatures are unknown. All suspicious content goes through real-time behavioral malware analysis, in which machine learning techniques evaluate the content for suspicious activity, and unsafe attachments are sandboxed in a detonation chamber before being delivered to recipients.

Conclusion

A recent Economist article looked at how to manage computer security threats; its most important recommendation was that both government regulation and product regulation need to lead the way. There is no doubt that the GDPR, as one of the top priorities on every CISO's agenda, requires serious attention now and beyond May 2018, because it is an ongoing effort to ensure security and privacy. The stricter regulation the GDPR brings creates a framework that strengthens the protection of personal data and provides tools for implementing security controls that help protect against, detect and respond to threats, enabling us to fight cybercrime more effectively. Microsoft stands ready to work with CISOs to raise awareness and to enable and assure access to the resources available now and in the future.

For more information about Microsoft's approach to the GDPR and to security in general, see the following helpful resources.

About the author: Daniel Grabski is a twenty-year veteran of the IT industry, currently serving as an Executive Security Advisor in Microsoft's Enterprise Cybersecurity Group, covering the Europe, Middle East and Africa timezones. In this role he works with enterprise customers, partners, public sector customers and key security stakeholders, providing strategic security expertise and advice on the cybersecurity solutions and services needed to build and maintain secure and resilient ICT infrastructures.

 

 

Release of SDK Preview 0.9.0.0 and Runtime Preview 5.6.3.6 for Linux


We have released a minor update to our  runtime and SDK preview on Linux, versions 5.6.3.6 and 0.9.0.0 respectively. This update contains several bug fixes and improvements. For more details, see the release notes.

To update from a current version of the SDK and runtime on your developer environment, perform the following steps (remove SDKs from the list that you hadn’t installed and don’t want to install):

sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet-release/ xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 417A0893
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install servicefabric servicefabricsdkcommon servicefabricsdkcsharp servicefabricsdkjava

If you are installing for the first time, you will also need to run the following command (before any of the commands above are run):

    sudo sh -c 'echo "deb [arch=amd64] http://apt-mo.trafficmanager.net/repos/servicefabric/ trusty main" > /etc/apt/sources.list.d/servicefabric.list'

To update the CLI, navigate to the directory where you cloned the CLI repository and run git pull.

Highlights of this release include container features and diagnostics, along with Jenkins and Eclipse plugin improvements.

 

Cheers,

The Service Fabric Team

Initial Troubleshoot DCOM Errors 10000, 10001, 10002, 10003 and 10004


Continuing the previous topic on how to troubleshoot some of the most common DCOM errors, here are a few more:

 

DCOM Event ID 10000

Description:

Unable to start a DCOM Server: {AppGUID}.

The error:

“C:\WINDOWS\system32\DLLName.dll” -Embedding is not a valid Win32 application.

Happened while starting this command: “C:\WINDOWS\system32\DLLName.dll” -Embedding

Cause

  • NTFS permissions are not setup properly.
  • The application or service is looking for a short file name or long file name.
  • Bug in the 3rd party or custom application.

Resolution

  • Check NTFS permissions (DCOM).
  • Check the path in the registry to make sure that short file name or long file name is being used.
  • Check for updates from the manufacturer.

 

DCOM Event ID 10001

Description:

Unable to start a DCOM Server: {AppGUID} as ./USERNAME.

The error:

“Access is denied. “

Happened while starting this command: %Path%\ExecutableName.exe /Processid:{ProcessGUID}

Cause

  • Permissions.

Resolution

  • Check the NTFS permissions (DCOM).

 

DCOM Event ID 10002

Description:

Access denied attempting to launch a DCOM Server. The server is: {AppGUID}

Cause

  • App ID doesn’t match.

Resolution

  • Check HKEY_CLASSES_ROOT for the AppGUID.
  • Check the NTFS permissions (DCOM) specific to the application.
  • Re-registering components.

 

DCOM Event ID 10003

Description:

Access denied attempting to launch a DCOM Server using DefaultLaunchPermission. The server is:{AppGUID}

Cause

  • Permissions.

Resolution

  • Check the NTFS permissions (DCOM) specific to the application.

 

DCOM Event ID 10004

Possible Descriptions:

DCOM got error “Logon failure: unknown user name or bad password.” and was unable to logon .\UserName in order to run the server: {AppGUID}.

DCOM got error “Logon failure: the user has not been granted the requested logon type at this computer.” and was unable to logon to ComputerName in order to run the server: {AppGUID}

DCOM got error “A system shutdown is in progress.” and was unable to logon to .\ComputerName in order to run the server: {AppGUID}.

DCOM got error “The referenced account is currently locked out and may not be logged on to.” and was unable to logon .\UserName in order to run the server: {AppGUID}

Cause

  • The “Log on as a batch job” user right has been removed from the Local/Domain policy.
  • Permissions.

Resolution

  • Add the user right under Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment\Log on as a batch job in the Local or Group Policy.
    Note: Make sure that the user account is not disabled or deleted.
  • Check the NTFS permissions (DCOM) specific to the application.

Hope that helps

Hear how contractors can drive IoT innovation in government – July 11 in Washington, DC


Gartner projects that 21 billion devices will be connected to the internet by 2020. With the U.S. government market for the Internet of Things (IoT) beginning to take shape, we invite government contractors to register to join Washington Technology and Microsoft at a free event, “How Contractors Can Drive IoT Innovation in Government,” Tuesday, July 11, from 3:30 – 6:30 p.m., at the Marriott Marquis in Washington, DC.

Across government, agencies are exploring how they can harness the countless networked devices already embedded in their operations to gain new insights into their existing services – and to create new services to support their missions.

During this event, government and industry thought leaders will provide government contractors and suppliers with insights into the growing market, its emerging requirements, and the long-term business outlook.

Check out the agenda and register now on the Washington Technology site!

Featured speakers for How Government Contractors Can Drive IoT Innovation in Government:

Bruce Sinclair

Publisher of iot-inc.com

Advisor, Author and Speaker

 

Landon Van Dyke

Senior Advisor for Energy, Environment & Sustainability

Department of State

 

Sam George

Director, IoT Engineering Team

Microsoft Azure

 

Nick Wakeman

Editor-in-chief

Washington Technology

 

 

John P. Wagner

Deputy Executive Assistant Commissioner, Office of Field Operations

U.S. Customs and Border Protection

 

Riding the Azure Stack opportunity


In this blog post, Premier Developer consultant Rob Vettor talks about Azure Stack opportunities.


You are hearing more and more about Azure Stack. As a developer, the release of this long-awaited platform will present new opportunities and place a premium on Azure development skills. A highly ambitious project from Microsoft, Azure Stack brings many features of the Azure cloud platform right into your data center.

Read the rest on Rob’s blog here.

Rules Extensions – Helper Functions


 

This post focuses on helper functions that multiple methods can call to complete a task. It additionally talks about function overloading: “Overloaded functions enable programmers to supply different semantics for a function, depending on the types and number of arguments.”

See Referenced Documents:

Understanding the Helper Function

Function Overloading

Account-Expires attribute

Pwd-Last-Set attribute

Last-Logon-Timestamp attribute

When-Created attribute

The following is a snippet of code which I use to allow multiple methods to call the same functions without the need to copy the function into each method. This way, if I need to update a function, I only update it in one place.

The following code can be found in the “Rules Extensions – MA Extension” post, which I use as a reference example to detail what the completed MA extension should look like (as in the format and placement of the code, not the actual code; all environments are different and this code is to be used as a guide only).

#region helper functions

// 1st GetDateString function
private static void GetDateString(CSEntry csentry, MVEntry mventry, long dtInt, string mvAttrib, string sourceFormat, string targetFormat, int days = 0)
{
    if (dtInt == 0 || dtInt == 9223372036854775807)
    {
        // This is a special condition, do not contribute and delete any current value
        mventry[mvAttrib].Delete();
    }
    else
    {
        DateTime dtFileTime = DateTime.FromFileTime(dtInt).AddDays(days);
        if (targetFormat.Equals("LONG", StringComparison.OrdinalIgnoreCase))
        {
            mventry[mvAttrib].Value = dtFileTime.ToLongDateString();
        }
        else if (targetFormat.Equals("SHORT", StringComparison.OrdinalIgnoreCase))
        {
            mventry[mvAttrib].Value = dtFileTime.ToShortDateString();
        }
        else
        {
            mventry[mvAttrib].Value = dtFileTime.ToString(targetFormat);
        }
        // mventry[mvAttrib].Value = DateTime.FromFileTimeUtc(dtInt).ToString(targetFormat);
    }
}

// 2nd GetDateString function
private static void GetDateString(CSEntry csentry, MVEntry mventry, string dateStr, string mvAttrib, string sourceFormat, string targetFormat, int days = 0)
{
    DateTime dt = DateTime.ParseExact(dateStr, sourceFormat, CultureInfo.InvariantCulture);

    // Drops into the 1st GetDateString function
    GetDateString(csentry, mventry, dt.ToFileTime(), mvAttrib, sourceFormat, targetFormat, days);
}

private static string ConvertFileTimeToFimTimeStamp(long fileTime)
{
    return DateTime.FromFileTimeUtc(fileTime).ToString("yyyy-MM-ddTHH:mm:ss.000");
}

private static string ConvertSidToString(byte[] objectSid)
{
    string objectSidString = "";
    SecurityIdentifier SI = new SecurityIdentifier(objectSid, 0);
    objectSidString = SI.ToString();
    return objectSidString;
}

#endregion

 

The snippet above shows the helper functions that can be called; now let's look at how these functions are called.

Let's start with the first helper function and look at its first line, the signature:

private static void GetDateString(CSEntry csentry, MVEntry mventry, long dtInt, string mvAttrib, string sourceFormat, string targetFormat, int days = 0)

Now let's look at a method that calls this function, which can be found in Rules Extensions – MapAttributesForImport:

case "employeeEndDate":
    csAttrib = "accountExpires";
    mvAttrib = "employeeEndDate";
    dtInt = csentry[csAttrib].IntegerValue;
    //targetFormat = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.000'";
    targetFormat = "yyyy-MM-ddTHH:mm:ss.000";
    //targetFormat = "M/d/yyyy h:mm tt";
    sourceFormat = string.Empty;
    GetDateString(csentry, mventry, dtInt, mvAttrib, sourceFormat, targetFormat);
    break;

Notice the call GetDateString(csentry, mventry, dtInt, mvAttrib, sourceFormat, targetFormat);

and now look at the first line of the helper function private static void GetDateString(CSEntry csentry, MVEntry mventry, long dtInt, string mvAttrib, string sourceFormat, string targetFormat, int days = 0)

What do you notice? The method supplies 6 arguments, but the helper function declares 7 parameters, the last being an optional parameter with a default value (int days = 0). We will get deeper into that in a minute, but for now just know that because it has a default value, you do not need to pass it as an argument from the method unless the value you need differs from the default, which in this example is 0 (zero).

As long as the method calls the function by name, which in this example is GetDateString, and supplies at least the 6 required arguments in the order the helper function expects them, you can call the function from within the method. A minimal sketch of how such an optional parameter behaves is shown below.
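To illustrate just the optional-parameter behavior outside of FIM/MIM, here is a minimal stand-alone C# sketch. The class and method names are hypothetical and are not part of the rules extension; only the idea of a defaulted "days" parameter is taken from the helper above.

using System;

class OptionalParameterDemo
{
    // "days" has a default value of 0, so callers may omit it.
    private static string AddDaysToFileTime(long fileTime, string targetFormat, int days = 0)
    {
        DateTime dt = DateTime.FromFileTime(fileTime).AddDays(days);
        return dt.ToString(targetFormat);
    }

    static void Main()
    {
        long sample = DateTime.Now.ToFileTime();

        // Call with 2 arguments: days falls back to its default of 0.
        Console.WriteLine(AddDaysToFileTime(sample, "yyyy-MM-ddTHH:mm:ss.000"));

        // Call with 3 arguments: the default is overridden, here adding 180 days
        // (the same idea the pwdExpires flow rule uses).
        Console.WriteLine(AddDaysToFileTime(sample, "yyyy-MM-ddTHH:mm:ss.000", 180));
    }
}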

If you look at the referenced post Rules Extensions – MapAttributesForImport, you will notice there are several flow rules (cases) that all call the same function:

case "employeeEndDate":

case "pwdLastSet":

case "pwdExpires":

case "lastLogonTimestamp":

case "createdDate":

 

The first four flow rules all use a source attribute that stores the date/time as a file-time integer, which is the number of 100-nanosecond intervals since January 1, 1601 (UTC); a value of 0 or 0x7FFFFFFFFFFFFFFF (9223372036854775807) indicates that the account never expires. The fifth flow rule, createdDate, instead uses the source Active Directory attribute whenCreated, which is a UTC string, so it differs from the other values. In order to use the same function across all of these flow rules, I use what is called function overloading, which looks at the incoming arguments and drops the call into the corresponding function with the same name.

Function 1

private static void GetDateString(CSEntry csentry, MVEntry mventry, long dtInt, string mvAttrib, string sourceFormat, string targetFormat, int days = 0)

Function 2

private static void GetDateString(CSEntry csentry, MVEntry mventry, string dateStr, string mvAttrib, string sourceFormat, string targetFormat, int days = 0)

Notice they both have the same Function Name of GetDateString

If you look at the 2nd function, you will see that it takes the string argument fed into it, converts it, and then hands it off to the 1st function:

GetDateString(csentry, mventry, dt.ToFileTime(), mvAttrib, sourceFormat, targetFormat, days);
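As a quick illustration of how the compiler picks between the two overloads, here is a small stand-alone C# sketch. The sample whenCreated-style value "20170623143000.0Z" is made up for the example, and the "sourceFormat" parameter is kept on the first overload only to mirror the shape of the helpers above.

using System;
using System.Globalization;

class OverloadDemo
{
    // Overload 1: takes a file-time integer (sourceFormat is unused, kept to mirror the helper signature).
    static void ShowDate(long fileTime, string sourceFormat, string targetFormat)
    {
        Console.WriteLine("long overload:   " + DateTime.FromFileTime(fileTime).ToString(targetFormat));
    }

    // Overload 2: takes a string, parses it, then drops into overload 1.
    static void ShowDate(string dateStr, string sourceFormat, string targetFormat)
    {
        DateTime dt = DateTime.ParseExact(dateStr, sourceFormat, CultureInfo.InvariantCulture);
        ShowDate(dt.ToFileTime(), sourceFormat, targetFormat);
    }

    static void Main()
    {
        // An accountExpires/pwdLastSet style value: a long picks overload 1.
        ShowDate(DateTime.Now.ToFileTime(), string.Empty, "M/d/yyyy h:mm tt");

        // A whenCreated style value: a string picks overload 2, which parses the
        // generalized-time format and then calls overload 1.
        ShowDate("20170623143000.0Z", "yyyyMMddHHmmss.0Z", "M/dd/yyyy h:mm:ss tt");
    }
}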

 

 

 

 


Rules Extensions – MA Extension



The following is just an example of what an MA Extension would look like and should only be used as a reference on how to build your own MA Extension. I use this post as a reference for all my MA Extension posts, which I have broken up into sections:

Rules Extensions – ShouldProjectToMV

Rules Extensions – MapAttributesForImport

Rules Extensions – MapAttributesForJoin


2 Way Account Expires Rules Extension




using System;

using Microsoft.MetadirectoryServices;

using System.Globalization;

using System.Security.Principal;

// Date Changed 23 June 2017

namespace Mms_ManagementAgent_MAExtension

{
     /// <summary>
     /// Summary description for MAExtensionObject.
     /// </summary>
     public class MAExtensionObject : IMASynchronization
     {
         const string FSP = “foreignSecurityPrincipal”;
         const string ADMA1 = “Contoso ADMA”;
         const string ADMA2 = “Fabrikam ADMA”;
         const string ADMA3 = “Fabrikam SPMA”;
         public MAExtensionObject()
         {
             //
             // TODO: Add constructor logic here
             //
         }
         void IMASynchronization.Initialize()
         {
             //
             // TODO: write initialization code
             //
         }

        void IMASynchronization.Terminate()
         {
             //
             // TODO: write termination code
             //
         }
         //bool IMASynchronization.ShouldProjectToMV(CSEntry csentry, out string MVObjectType)
         //{
         //    MVObjectType = “foreignSecurityPrincipal”;
         //    bool ShouldProject = false;
         //    if (csentry[“whatever”].StringValue.Length >= 30)
         //    {
         //        ShouldProject = true;
         //    }

        //    return ShouldProject;
         //}


         bool IMASynchronization.ShouldProjectToMV(CSEntry csentry, out string MVObjectType)
         {
             string fsp = “foreignSecurityPrincipal”;
             bool ShouldProject = false;
             MVObjectType = null;
             switch (csentry.MA.Name)
             {
                 case ADMA1:
                     {
                         MVObjectType = “person”;
                         ShouldProject = true;
                     }
                     break;

                case ADMA2:
                     {
                         MVObjectType = “group”;
                         ShouldProject = true;
                     }
                     break;

                case ADMA3:
                     switch (csentry.ObjectType)
                     {
                         case FSP:
                             {
                                 MVObjectType = fsp;
                                 if (csentry[“cn”].StringValue.Length >= 30)
                                 {
                                     ShouldProject = true;
                                 }
                             }
                             break;
                     }
                     break;

                default: throw new EntryPointNotImplementedException();
             }

            return ShouldProject;
         }

        DeprovisionAction IMASynchronization.Deprovision(CSEntry csentry)
         {
             //
             // TODO: Remove this throw statement if you implement this method
             //
             throw new EntryPointNotImplementedException();
         }

        bool IMASynchronization.FilterForDisconnection(CSEntry csentry)
         {
             //
             // TODO: write connector filter code
             //
             throw new EntryPointNotImplementedException();
         }


         void IMASynchronization.MapAttributesForJoin(string FlowRuleName, CSEntry csentry, ref ValueCollection values)
         {
             switch (FlowRuleName)
             {
                 case “SPAccountName”:
                     //
                     // TODO: write join mapping code
                     //
                     values.Add(csentry[“samAccountName”].StringValue.Replace(“SP_”, “”));
                     break;

                case “BuildAccountName”:
                     if (csentry[“accountName”].IsPresent)
                     {
                         values.Add(csentry[“accountName”].StringValue);
                     }
                     else if (csentry[“firstName”].IsPresent && csentry[“lastName”].IsPresent)
                     {
                         values.Add(csentry[“firstName”].StringValue + “.” + csentry[“lastName”].StringValue);
                     }
                     break;
             }

        }

        bool IMASynchronization.ResolveJoinSearch(string joinCriteriaName, CSEntry csentry, MVEntry[] rgmventry, out int imventry, ref string MVObjectType)
         {
             //
             // TODO: write join resolution code
             //
             throw new EntryPointNotImplementedException();
         }

        void IMASynchronization.MapAttributesForImport(string FlowRuleName, CSEntry csentry, MVEntry mventry)
         {
             string csAttrib;
             string mvAttrib;
             long dtInt;
             string targetFormat;
             string sourceFormat;

            //
             // TODO: write your import attribute flow code
             //
             switch (FlowRuleName)
             {
                 case “getDate”:
                     mvAttrib = “deprovisionDate”;
                     if (mventry.ConnectedMAs[ADMA1].Connectors.Count == 0)
                     {
                         if (mventry[mvAttrib].IsPresent && !string.IsNullOrWhiteSpace(mvAttrib))
                         {
                             DateTime depoDate;
                             if (!DateTime.TryParse(mventry[mvAttrib].Value, out depoDate))
                             {
                                 //mventry [“deprovisionDate”].Value = DateTime.Now.AddDays(90).ToString(“yyyy’-‘MM’-‘dd’T’HH’:’mm’:’ss’.000′”);
                                 mventry[mvAttrib].Value = DateTime.Now.AddDays(90).ToString(“yyyy-MM-ddTHH:mm:ss.000”);
                             }
                             else
                             {
                                 mventry[mvAttrib].Value = DateTime.Now.AddDays(90).ToString(“yyyy-MM-ddTHH:mm:ss.000”);
                             }

                        }
                         else
                         {
                             mventry[mvAttrib].Value = DateTime.Now.AddDays(90).ToString(“yyyy-MM-ddTHH:mm:ss.000”);
                         }
                     }
                     break;

                case “removeDate”:
                     mvAttrib = “deprovisionDate”;
                     if (mventry.ConnectedMAs[ADMA1].Connectors.Count == 1)
                     {
                         if (mventry[mvAttrib].IsPresent)
                         {
                             mventry[mvAttrib].Values.Clear();
                         }
                     }
                     break;

                case “employeeEndDate”:
                     csAttrib = “accountExpires”;
                     mvAttrib = “employeeEndDate”;
                     dtInt = csentry[csAttrib].IntegerValue;
                     //targetFormat = “yyyy’-‘MM’-‘dd’T’HH’:’mm’:’ss’.000′”;
                     targetFormat = “yyyy-MM-ddTHH:mm:ss.000”;
                     //targetFormat = “M/d/yyyy h:mm tt”;
                     sourceFormat = string.Empty;
                     GetDateString(csentry, mventry, dtInt, mvAttrib, sourceFormat, targetFormat);
                     break;

                case “pwdLastSet”:
                     csAttrib = “pwdLastSet”;
                     mvAttrib = “pwdLastSet”;
                     dtInt = csentry[csAttrib].IntegerValue;
                     targetFormat = “M/d/yyyy h:mm tt”;
                     sourceFormat = string.Empty; ;
                     if (csentry[csAttrib].IsPresent && csentry[csAttrib].IntegerValue != 0)
                         GetDateString(csentry, mventry, dtInt, mvAttrib, sourceFormat, targetFormat);
                     ///mventry[mvAttrib].Value = ConvertFileTimeToFimTimeStamp(csentry[csAttrib].IntegerValue);
                     else
                         mventry[mvAttrib].Delete();
                     break;

                case “pwdExpires”:
                     csAttrib = “pwdLastSet”;
                     mvAttrib = “pwdExpires”;
                     dtInt = csentry[csAttrib].IntegerValue;
                     targetFormat = “M/d/yyyy h:mm tt”;
                     sourceFormat = string.Empty;
                     if (csentry[csAttrib].IsPresent && csentry[csAttrib].IntegerValue != 0)
                         GetDateString(csentry, mventry, dtInt, mvAttrib, sourceFormat, targetFormat, 180);
                     ///mventry[mvAttrib].Value = ConvertFileTimeToFimTimeStamp(csentry[csAttrib].IntegerValue);
                     else
                         mventry[mvAttrib].Delete();
                     break;

                case “lastLogonTimestamp”:
                     csAttrib = “lastLogonTimestamp”;
                     mvAttrib = “lastLogonTimestamp”;
                     dtInt = csentry[csAttrib].IntegerValue;
                     targetFormat = “M/d/yyyy h:mm tt”;
                     sourceFormat = string.Empty;
                     if (csentry[csAttrib].IsPresent && csentry[csAttrib].IntegerValue != 0)
                         GetDateString(csentry, mventry, dtInt, mvAttrib, sourceFormat, targetFormat);
                     //mventry[mvAttrib].Value = ConvertFileTimeToFimTimeStamp(csentry[csAttrib].IntegerValue);
                     else
                         mventry[mvAttrib].Delete();
                     break;

                case “createdDate”:
                     csAttrib = “whenCreated”;
                     mvAttrib = “createDate”;
                     string dateStr = csentry[csAttrib].StringValue;
                     targetFormat = “M/dd/yyyy h:mm:ss tt”;
                     sourceFormat = “yyyyMMddHHmmss.0Z”;
                     GetDateString(csentry, mventry, dateStr, mvAttrib, sourceFormat, targetFormat);
                     break;


                 case “objectSidString”:
                     string objectSidString = ConvertSidToString(csentry[“objectSid”].BinaryValue);
                     mventry[“objectSidSTring”].StringValue = objectSidString;
                     break;

            }
         }


        void IMASynchronization.MapAttributesForExport(string FlowRuleName, MVEntry mventry, CSEntry csentry)
         {
             //
             // TODO: write your export attribute flow code
             //

            //
             // TODO: write your export attribute flow code
             //

            switch (FlowRuleName)
             {

                case “accountExpires”:
                     CultureInfo provider = CultureInfo.InvariantCulture;

                    if (mventry[“employeeEndDate”].ToString() != “”)
                     {
                         //DateTime dtFileTime = DateTime.ParseExact(mventry[“employeeEndDate”].Value, “yyyy’-‘MM’-‘dd’T’HH’:’mm’:’ss’.000′”, provider);
                         DateTime dtFileTime = DateTime.Parse(mventry[“employeeEndDate”].Value, provider);

                        csentry[“accountExpires”].IntegerValue = dtFileTime.ToFileTime();
                     }
                     else
                     {
                         csentry[“accountExpires”].Value = “9223372036854775807”;
                     }

                    break;


             }
         }
         #region helper functions

        private static void GetDateString(CSEntry csentry, MVEntry mventry, long dtInt, string mvAttrib, string sourceFormat, string targetFormat, int days = 0)
         {
             if (dtInt == 0 || dtInt == 9223372036854775807)
             {
                 // This is a special condition, do not contribute and delete any current value
                 mventry[mvAttrib].Delete();
             }
             else
             {
                 DateTime dtFileTime = DateTime.FromFileTime(dtInt).AddDays(days);
                 if (targetFormat.Equals(“LONG”, StringComparison.OrdinalIgnoreCase))
                 {
                     mventry[mvAttrib].Value = dtFileTime.ToLongDateString();

                }
                 else if (targetFormat.Equals(“SHORT”, StringComparison.OrdinalIgnoreCase))
                 {
                     mventry[mvAttrib].Value = dtFileTime.ToShortDateString();
                 }
                 else
                     mventry[mvAttrib].Value = dtFileTime.ToString(targetFormat);
                 // mventry[mvAttrib].Value = DateTime.FromFileTimeUtc(dtInt).ToString(targetFormat);
             }
         }
         //(CSEntry csentry, MVEntry mventry, long dtInt, string mvAttrib, string targetFormat, int days = 0)
         private static void GetDateString(CSEntry csentry, MVEntry mventry, string dateStr, string mvAttrib, string sourceFormat, string targetFormat, int days = 0)
         {
             DateTime dt = DateTime.ParseExact(dateStr, sourceFormat, CultureInfo.InvariantCulture);
             GetDateString(csentry, mventry, dt.ToFileTime(), mvAttrib, sourceFormat, targetFormat, days);
         }



         private static string ConvertFileTimeToFimTimeStamp(long fileTime)
         {
             return DateTime.FromFileTimeUtc(fileTime).ToString(“yyyy-MM-ddTHH:mm:ss.000”);
         }

        private static string ConvertSidToString(byte[] objectSid)
         {
             string objectSidString = “”;
             SecurityIdentifier SI = new SecurityIdentifier(objectSid, 0);
             objectSidString = SI.ToString();
             return objectSidString;
         }

        #endregion
     }

}

Create Bot for Microsoft Graph with DevOps 4: Continuous Integration – Build Definition


As I have basic application, it’s time to setup CI (Continuous Integration).

Create Build Definition

1. Go to Visual Studio Team Services, navigate to Build & Release | Build, and click [New definition].

image

2. Select ASP.NET (PREVIEW) template.

image

3. Set name.

image

4. Select [Get sources] and select the repo. As you see, you can use other repository such as GitHub.

image

5. Select [Test Assemblies] and update the Test assemblies field. As the unit test assembly name is O365Bot.UnitTests.dll, I changed it as shown below.

image

6. I also enabled Code coverage. Select any other options as you want.

image

7. I want to copy files to the artifact, so click [Add Task].

8. Select [Copy Files] and add. You can filter by search.

9. Set the Source Folder to $(build.sourcesdirectory), the Contents to **\bin\$(BuildConfiguration)\**, and the Target Folder to $(build.artifactstagingdirectory).

10. Reorder the tasks so that Copy Files runs before Publish Artifact.

11. Now set up CI. Select the [Triggers] tab and enable [Continuous Integration]. It's so easy, you know.

image

12. Click the [Options] tab and set [Hosted VS2017] as the Agent.
※ You can see what components are included in the agent here.

image

13. I also enabled [Create work item on failure], which creates a work item when the build fails.

image

14. Click [Save & queue]

image

15. While running, you can see the actual log.

image

Trigger from check-in

Now you can check in any change from Visual Studio and the build runs automatically.

Summary

CI is now complete. I will explain functional testing next.

Ken

New Azure VPN Gateway SKUs provide much higher bandwidth for hybrid workloads


Background:  Education customers, particularly Higher Education customers, tend to invest in Internet connectivity differently than the average company focused on making or selling widgets.  Commodity Internet connectivity investments are measured in multiple Gigabits/Sec of bandwidth for many of these customers.  A good number of my EDU customers are members of Internet2 (I2), a networking consortium providing members high performance private connectivity between members and service providers like Microsoft Azure and Office 365.  It’s not uncommon to see 10 Gb/s and even 100 Gb/s I2 peering at my research intensive Higher Education customers.

Wow – that’s a lot of bandwidth for public IP endpoint services in Azure like Azure Storage (REST API), Azure Backups, Azure Site Recovery replication traffic, Azure App Service Web Sites, etc.  But some hybrid workloads in Azure require secure, routable private network connectivity – for example VM’s running databases, line of business applications and many traditional client/server application stacks that presume LAN connectivity between app tiers, dependent systems and even end users on-premises.  What are the options for these workloads?  There are three primary options:  Site to Site VPN, 3rd party Network Appliances from the Azure Marketplace, and ExpressRoute.  Check out Gaurav’s post for more detail on all three here.

I'm going to focus on Site to Site VPN here because the previous IPsec tunnel throughput limits of our Azure Gateway options – 100 Mb/s and 200 Mb/s – were problematic for some customers.  For light-churn workloads like Domain Controller replication these limits were not an issue, but for large server to server database replication jobs or file server to file server copy operations, they were not ideal.  Now we have options up to 1.25 Gb/s.  Excellent!  But performance comes at a cost, so what do these new SKUs cost?  The good news is the Basic SKU remains unchanged in both price and performance, while the VpnGw1 SKU costs the same as the old High Performance SKU and offers 2.5 times the throughput (500 Mb/s vs. 200 Mb/s).  Here are the details from the pricing page:

VPN GATEWAY TYPE   PRICE          BANDWIDTH   S2S TUNNELS                                          P2S TUNNELS
Basic              ~$26.79/mo     100 Mbps    Max 10 (1-10: Included)                              Max 128
VpnGw1             ~$141.36/mo    500 Mbps    Max 30 (1-10: Included; 11-30: $0.015 per tunnel)    Max 128
VpnGw2             ~$364.56/mo    1 Gbps      Max 30 (1-10: Included; 11-30: $0.015 per tunnel)    Max 128
VpnGw3             ~$930/mo       1.25 Gbps   Max 30 (1-10: Included; 11-30: $0.015 per tunnel)    Max 128

* Monthly price estimates are based on 744 hours of usage per month.

These changes got me so excited I wanted to see the new SKU’s in action with some performance tests.  Before I get to that, I want to strongly recommend that customers review the VPN Gateway documentation here and customers with existing Azure VPN Gateways deployed under the old SKU’s need to check out the migration steps described there if they want to move to the new SKU’s.

Test Environment

Unlike my customers, I don't have multi-Gb/s connectivity between my home lab and Azure, and I'm not interested in testing my cable provider's network – just the new Azure VPN Gateway SKUs.  That means I'll need a couple of vNets, subnets, a Gateway subnet, VPN Gateways, VPN Connections and VMs.  But who's got time to click through the portal to create that infrastructure for a quick test?  So I searched the Azure Quick Start template gallery of hundreds of templates for a recent (new SKUs, remember) VNet to VNet VPN scenario and found this one.  It only has four parameters and sets up all the networking I needed for my test.  Note that I selected VpnGw1 for the Gateway SKU parameter so that my Gateways would deploy at the SKU just above Basic (hey, my boss has to approve my Azure spend just like yours!).  Those of you that have provisioned a VPN Gateway before know that it's slower to provision than some other Azure resources.  It took about 45 minutes for my deployment to wrap up, but it sure was nice to have the IPsec tunnel up and no networking config needed!  I added an Ubuntu Server 17.0 VM, size DS2v2 for its "high" network throughput, into each vNet.

Here’s a visual of the VNet and VM config:

Test Performance using iperf

At this point I feel compelled to state the obvious – your mileage may vary when testing performance and my results may not be exactly reproduced during a different lunar phase or proton flux condition.  You may want to check your space weather conditions first 🙂  With that out of the way, let’s get on to the bit-shipping!

Installing and using iperf is super easy.  I SSH'ed over to each VM and installed it using "sudo apt-get install iperf".  Then I made VM2 the "server" with "iperf -s", and VM1 was the client with "iperf -c <VM2's private network IP address>".  Naturally I ran the client multiple times to get a better average picture:

Conclusion

For customers with ample available Internet and/or I2 bandwidth, these new Azure VPN Gateway SKU’s raise performance to where they can be a consideration for more hybrid workloads than in the past and testing shows that the gains are real and impactful.

 

 

SharePoint 2013 Multiple App Domain with Host Named Site Collections Issue


SO, the other day I ran into a problem at a customer:

Summary

My client has set up an environment where they can only utilize sub domains of their primary domain.

For example: command.com

Hence different sites would be sub domains such as:

  • release.command.com
  • forward.command.com

To save time, maintenance and resources these two web applications are in one central farm. Previously they were in separate farms to secure content from each other.

The client has also utilized HNSC (host named site collections) in order to be cloud ready. During the configuration of app domains for the organization to start building and deploying SharePoint hosted add-ins, the client ran into an issue.

The issue is as follows:

When utilizing HNSC and configuring SharePoint for multiple app domains, the settings for the following app domains are ignored:

  • release.command.com
  • forward.command.com

A global app domain (configured via the UI) was set up, such as apps.command.com, and all redirects when utilizing a SharePoint hosted add-in go to this location.

The problems with this are twofold:

  • URL changes to guid.apps.command.com
  • No web site exists at that URL, hence 404.

The problem does not exist when not utilizing HNSC, though abandoning HNSC would keep the organization from being cloud ready.

This is an issue unique to the client's configuration, since required security boundaries need to be in place between the SharePoint web applications. Even if the configuration above is made to work, it means the site hosting the app domain has to have security rights to both web applications, which breaches the security design the client needs to maintain between web applications and content.

Solution Design

Summary

The solution to the issue at hand is to utilize an often-used module provided by Microsoft called URL Rewrite. It also involves careful use of extended sites and IP addresses.

This solution will outline the elements required and the configuration of the solution.

Note: there is no code for this solution; it is all configuration.

DNS Settings

The first step is configuring DNS; this is a bit different from most documentation. The standard configuration calls for a totally separate domain, which isn't possible or desired for the command.

Note: these would be load-balanced IPs behind a load balancer. Each web front end in the farm would require this configuration to be applied.

The following table illustrates the DNS configuration for this solution (entries are shown as full host names):

Host name                      IP
command.com                    10.0.0.4
release.command.com            10.0.0.5
apps.release.command.com       10.0.0.5
*.apps.release.command.com     10.0.0.6
forward.command.com            10.0.0.5
apps.forward.command.com       10.0.0.5
*.apps.forward.command.com     10.0.0.7
apps.command.com               10.0.0.5
*.apps.command.com             10.0.0.8

The separate apps domain off the root domain is strictly for redirects and will be discussed later.

As illustrated in the above table, the solution uses four IP addresses; this is because the *.apps bindings must use a blank host name with their own SSL certificate, and each such binding needs a dedicated IP address.

Note: in Windows Server 2016, the issue of not being able to have "Require Server Name Indication" checked for blank host name bindings is resolved, so the requirement for multiple IP addresses goes away.

Certificates

The following is a list of certificates required for the solution:

Used by                                             Subject Name
apps.command.com                                    *.command.com
*.apps.command.com                                  *.apps.command.com
(apps/portal/root/whatever).release.command.com     *.release.command.com
*.apps.release.command.com                          *.apps.release.command.com
(apps/portal/root/whatever).forward.command.com     *.forward.command.com
*.apps.forward.command.com                          *.apps.forward.command.com

The solution requires the use of wildcard certificates; this is not a requirement of this solution itself but of SharePoint Hosted Add-ins.

Web Applications

This solution has the following configuration:

Web Applications:

  • Release Home
  • Forward Home

Each of the above is extended via Central admin

  • Release Apps
  • Forward Apps

Note: the reason to extend the sites is the blank bindings for the *.apps.[site].command.com calls; all such calls go to the extended site and are covered by that certificate. If extended sites are not used, then certificate errors will appear for most sites.

Each site has the following site collections; this is not a limit, just a short list.

  • [site].command.com
  • [site].command.com
  • [site].command.com(location of app catalog)

Note: set the app catalog for each web app to point to the respective HNSC.

Note: also set the app domain URL to:  apps.command.com

IIS Bindings

The following are bindings for each website:

Note: all bindings use port 443; the blank host name bindings do not have "Require Server Name Indication" checked.

Site            Binding                        IP         Certificate
Release Home    Portal.release.command.com     10.0.0.5   *.release.command.com
Release Home    Dept1.release.command.com      10.0.0.5   *.release.command.com
Release Home    apps.release.command.com       10.0.0.5   *.release.command.com
Release Apps    [blank]                        10.0.0.6   *.apps.release.command.com
Forward Home    Portal.forward.command.com     10.0.0.5   *.forward.command.com
Forward Home    Dept1.forward.command.com      10.0.0.5   *.forward.command.com
Forward Home    apps.forward.command.com       10.0.0.5   *.forward.command.com
Forward Apps    [blank]                        10.0.0.7   *.apps.forward.command.com
Redirect Site   Apps.command.com               10.0.0.5   *.command.com
Redirect Site   [blank]                        10.0.0.8   *.apps.command.com

Note: due to another issue within the redirect.aspx page that builds the URL, all ports have to be 443; this is a SharePoint issue and cannot be changed until a fix is implemented.

Redirect Site

A site needs to be created in IIS; it can point to an empty folder, and nothing else needs to be set.

Make sure the URL rewrite module is installed, download it if needed: https://www.iis.net/downloads/microsoft/url-rewrite

The module will show up when you click on the web site in IIS:

Two rules need to be created to support the solution discussed.

NOTE: this solution could be applied to more than one SharePoint web application, although only 2 are discussed here.

Double click URL Rewrite:

Click Add Rule(s)

Select Blank Rule from the inbound rules section.

Now you must define the actual rewrite rule. In the URL Rewrite Module, a rewrite rule is defined by specifying four required pieces of information:

  • Name of the rule.
  • Pattern to use for matching the URL string.
  • Set of conditions.
  • Action to perform if a pattern is matched and whether all conditions checks succeed.

Rule 1:

Name: Release Apps Redirect

Match URL section

Requested URL: Matches the Pattern

Using Regular Expressions

Pattern: ^(.*)$

Note: the string that comes in as the input is normally something like "sites/siteofinterest/pages/page1.aspx", so there is nothing in it to identify the URL you want; the pattern simply matches any string.

Conditions Section:

Logical Grouping: Match All

Input             Type                  Pattern
{HTTP_REFERER}    Matches the Pattern   \brelease\b
{HTTP_HOST}       Matches the Pattern   (^app-[0-9a-z]+)

Action Section

Action Type: Redirect

Action Properties:  https://{C:1}.apps.release.command.com{PATH_INFO}?{QUERY_STRING}

Uncheck Include query

Rule 2:

Name: Forward Apps Redirect

Match URL section

Requested URL: Matches the Pattern

Using Regular Expressions

Pattern: ^(.*)$

Note: the string that comes in as the input is normally something like "sites/siteofinterest/pages/page1.aspx", so there is nothing in it to identify the URL you want; the pattern simply matches any string.

Conditions Section:

Logical Grouping: Match All

Input             Type                  Pattern
{HTTP_REFERER}    Matches the Pattern   \bforward\b
{HTTP_HOST}       Matches the Pattern   (^app-[0-9a-z]+)

Action Section

Action Type: Redirect

Action Properties:  https://{C:1}.apps.forward.command.com{PATH_INFO}?{QUERY_STRING}

Uncheck Include query
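To make the two rules easier to reason about, here is a small stand-alone C# sketch that mimics their logic: it checks the referer for the word "release" or "forward", captures the "app-..." prefix from the requested host, and builds the redirect URL. The host and referer values below are hypothetical examples, and this is only an illustration of the logic, not how IIS evaluates the rules internally.

using System;
using System.Text.RegularExpressions;

class RewriteRuleSketch
{
    // Mimics the {HTTP_HOST} condition: capture the "app-xxxx" prefix of the add-in host.
    static readonly Regex AppPrefix = new Regex("^app-[0-9a-z]+");

    static string BuildRedirect(string host, string referer, string pathInfo, string queryString)
    {
        Match appMatch = AppPrefix.Match(host);
        if (!appMatch.Success)
        {
            return null; // Not an add-in request; the rules do not apply.
        }

        // Mimics the {HTTP_REFERER} conditions: \brelease\b or \bforward\b.
        string target;
        if (Regex.IsMatch(referer, @"\brelease\b"))
        {
            target = "apps.release.command.com";
        }
        else if (Regex.IsMatch(referer, @"\bforward\b"))
        {
            target = "apps.forward.command.com";
        }
        else
        {
            return null;
        }

        // Mimics the action: https://{C:1}.<target>{PATH_INFO}?{QUERY_STRING}
        return "https://" + appMatch.Value + "." + target + pathInfo + "?" + queryString;
    }

    static void Main()
    {
        // Hypothetical values, purely to show the shape of the resulting redirect.
        Console.WriteLine(BuildRedirect(
            "app-0a1b2c3d4e5f6.apps.command.com",
            "https://portal.release.command.com/sites/siteofinterest/pages/page1.aspx",
            "/sites/siteofinterest/addin/pages/default.aspx",
            "SPHostUrl=https%3A%2F%2Fportal.release.command.com"));
    }
}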

Review

At this point, the rest of the development process for building SharePoint Hosted Add-ins is the same, and deploying them is the same; this is strictly an infrastructure solution and changes nothing about how users implement or use applications.

One aspect that is not mentioned in this document thus far is authentication; this assumes both the main and extended sites utilize the same authentication methods. If federation is used then the extended sites will also need to utilize the same provider.

References

The following are references for this solution:

Name URL
URL Rewrite tips and tricks http://ruslany.net/2009/04/10-url-rewriting-tips-and-tricks/
Download URL Rewrite https://www.iis.net/downloads/microsoft/url-rewrite
Creating Outbound rules https://docs.microsoft.com/en-us/iis/extensions/url-rewrite-module/creating-outbound-rules-for-url-rewrite-module
Using Failed request Tracing to trace Rewrite rules https://docs.microsoft.com/en-us/iis/extensions/url-rewrite-module/using-failed-request-tracing-to-trace-rewrite-rules
URL Parts Available to URL Rewrite Rules https://weblogs.asp.net/owscott/url-parts-available-to-url-rewrite-rules
URL Rewrite Configuration Reference https://docs.microsoft.com/en-us/iis/extensions/url-rewrite-module/url-rewrite-module-configuration-reference
Getting parts of a URL with Regex https://stackoverflow.com/questions/27745/getting-parts-of-a-url-regex
Regular Expressions Examples https://docs.microsoft.com/en-us/dotnet/standard/base-types/regular-expressions
Developing a Custom Rewrite provider https://docs.microsoft.com/en-us/iis/extensions/url-rewrite-module/developing-a-custom-rewrite-provider-for-url-rewrite-module
Creating Rewrite Rules https://docs.microsoft.com/en-us/iis/extensions/url-rewrite-module/developing-a-custom-rewrite-provider-for-url-rewrite-module

Example Python Program Reading SQL Azure Blob Auditing Data


I recently had a case where a customer needed a way to read the blob auditing data from Linux. This was the quickest and easiest way I could think of.

First install msodbcsql following the instructions here https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server

You will also need to install the pyodbc module.

import pyodbc
from datetime import datetime, timedelta

##############################################
# Settings, Please edit with your info       #
##############################################

#Your server name without .database.windows.net
server_name = ""
#Database name that will do the processing
database_name = ""
#Username and Password for your SQL Azure Database
user_name = ""
password = ""

#The storage account name where your audit data is stored
storage_account_name = ""

#Number of hours of auditing data to query
number_of_hours = 1

##############################################
# End Settings                               #
##############################################

#Get timestamp based on number_of_hours
timediff = datetime.now() - timedelta(hours = number_of_hours)

#Build connection string
cnxn = pyodbc.connect('Driver={ODBC Driver 13 for SQL Server};Server=tcp:'+server_name+'.database.windows.net,1433;Database='+database_name+';Uid='+user_name+'@'+server_name+';Pwd='+password+';Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;')

cursor = cnxn.cursor()

#Query to fn_get_audit_file function
cursor.execute("SELECT [event_time], [action_id], [succeeded], [session_id], [session_server_principal_name], [server_instance_name], [database_name], [schema_name], [object_name], [statement], [additional_information], [transaction_id], [client_ip], [application_name], [duration_milliseconds], [response_rows], [affected_rows] FROM sys.fn_get_audit_file('https://"+storage_account_name+".blob.core.windows.net/sqldbauditlogs/"+server_name+"', default, default) WHERE event_time > '"+timediff.strftime('%Y-%m-%d %H:%M:%S')+"' ORDER BY event_time;")

rows = cursor.fetchall()

#Get column names and print them comma delimited
columns = []
for column in cursor.description:
        columns.append(column[0])
print ', '.join(str(x) for x in columns)

#Print data, comma delimited
for row in rows:
        print(', '.join(str(x) for x in row))