
The latest version of SMS Organizer reimagines mobile messaging


Machine learning and big data are revolutionizing even the simplest everyday applications and platforms. Perhaps the most conventional everyday platform is SMS. Text messages have been around since the days of the feature phone, and the platform is deeply rooted in traditional mobile computing: 800 million people use it daily. However, nearly 95% of these messages are spam or promotional communication generated by machines.

With every store, vendor and brand trying to reach and engage users with constant SMS updates, it’s easy to lose track of essential information in the sea of spam.

To address this challenge, the Microsoft Garage team has been hard at work on the Bing Hackathon Award-winning smart SMS app – SMS Organizer. SMS Organizer was built from the ground up to reimagine the messaging platform. The easy-to-use, lightweight app uses sophisticated machine learning to sort through messages and make sure users get to the most important ones. Now, the latest version of this smart app goes a step further to enhance mobile messaging.

Here’s what you can expect from the latest version of the smart SMS Organizer app:

Spam Cleanup

Spam messages are not only annoying, they also clutter the inbox, making it harder to find truly important SMSes. With ‘Spam Cleanup’, deleting useless messages is as simple as clicking a button.

Smart Reminders

SMS Organizer’s cutting-edge algorithm can parse a message and pick out essential information. The algorithm goes further to dig beneath the surface and catalog dates, times, appointments, and addresses from a stream of messages.

Quick Actions

The benchmark for mobile efficiency is the number of clicks it takes to complete any task within an ecosystem. Quick Actions reduce the number of clicks for any action relevant to a specific message. The app’s algorithm can, for example, read a message that confirms your flight to a new city and help you book a cab on arrival with a single click.

Free SMS

SMSes are locked to the legacy network operated by specific mobile carriers. By sending messages over the Internet, SMS Organizer makes text messaging free and effortless.

Dark Theme

The new ‘Dark Theme’ allows users to reduce the glare from their mobile screens and read through the messages with comfort at night.

Backup & Restore

A quick backup and restore feature ensures you never lose essential information or data from your inbox.

With the new and improved SMS Organizer, the Microsoft Garage team has raised the bar for a smart and efficient messaging platform. Going forward, the team will continue to push the envelope to expand the capabilities of mobile messaging platforms. Ultimately, the team envisions a smart app that can handle every essential task related to information within SMSes. From managing finances based on data mined from messages to supporting mobile search and deeper integration with mobile browsers, it will create a comprehensive window into users’ lives on SMS.

The SMS Organizer App is available on the Play Store.


Exploring data with F# type providers


Guest post by Thomas Denny, Microsoft Student Partner at the University of Oxford

thomasdenny

About Me

Hi, I am currently studying Computer Science at the University of Oxford.

I am the president (2017-18) and was the secretary (2016-17) of the Oxford University Computer Society, and a member of the Oxford University History and Cross Country societies. I also lead a group of Microsoft Student Partners at the university to run talks and hackathons.

F#

F# is an incredibly flexible language, and amongst its many benefits is the ability to use type providers to access and manipulate data from external sources. A type provider allows you to use a .NET type without declaring it in code – the type is generated for you at compile time, a facility not dissimilar to LISP’s macro features. In F# you might use a type provider in place of code generation, e.g. for writing wrapper types for a database schema. In this article we use a web page to generate a type that we then use for extracting data from other similar pages, and then we look at how to extract data from a CSV file.

Getting started

So long as you have F# and NuGet installed you can follow this guide using any editor, but you can make your experience a little easier by also installing Visual Studio Code and the Ionide F# plugin. This plugin has several useful features, but the most useful are its IntelliSense and type annotations features, which are even available for types created by a type provider!

image

Visual Studio Code

Once you’re set up, you’ll need to install the F# Data package from NuGet:

PM> Install-Package FSharp.Data -Version 2.3.3

Wikipedia tables

Parsing and consuming data from HTML is traditionally a heavy task requiring a large amount of code; often a task as simple as extracting the column names of a table will require dozens of lines of code.

We’re going to take a look at a simple problem: each year the cast and crew members of a film will often win several different awards (e.g. Academy Award, Golden Globe), and we would like to find the names of the cast or crew members that won the most awards for that particular film.

To start off with, we’ll take a look at the accolades received by Spotlight, 2016’s Best Picture winner at the Oscars. The results are presented in a table like this:

image

Example table

To start off with, we need to use the HTML type provider to create a new type based on this page. Create a new file called awards.fsx (an F# script):

#r "FSharp.Data.2.3.3/lib/net40/FSharp.Data.dll"
open FSharp.Data

type AccoladeData = HtmlProvider<"https://en.wikipedia.org/wiki/List_of_accolades_received_by_Spotlight_(film)">

Next, we have to request the data for that specific page

let spotlightData = AccoladeData.Load("https://en.wikipedia.org/wiki/List_of_accolades_received_by_Spotlight_(film)")

spotlightData is an object of type AccoladeData, which has properties Html, Tables, and Lists – this is standard across all types created by the HTML type provider. However, the properties available on each of these vary based on the schema from which the type was provided. In our case, the Tables property has an Accolades property, which contains the table data from the page. If you use the Ionide plugin with Visual Studio Code, as described above, you can see this in the IntelliSense suggestions:

image

IntelliSense suggestions

Collecting the results together can be done in a few lines of F#. We need to do the following:

  • Filter out any results that were not wins
  • Group results by the winner
  • Count the number of wins for each winner
  • Sort the winners by number of wins

This can be done as a simple F# function that takes the accolade table as an argument:

let awardNumbers (data: AccoladeData) =
    data.Tables.Accolades.Rows
    |> Seq.filter (fun row -> row.Result = "Won")
    |> Seq.groupBy (fun row -> row.``Recipient(s) and nominee(s)``)
    |> Seq.map (fun (person, awards) -> (person, Seq.length awards))
    |> Seq.sortByDescending (fun (person, count) -> count)

Each table row is also of a type constructed by the type provider, and it will have properties for each column (e.g. the result, the recipient, etc). Finally, we can print the results:

for (person, count) in awardNumbers spotlightData do
    printfn "%s,%d" person count

Whilst this example is interesting for a single page, what about other pages with the same table of data? Simply by changing the URL that we load from we can also print the same results for another film:

let moonlightData = AccoladeData.Load("https://en.wikipedia.org/wiki/List_of_accolades_received_by_Moonlight_(2016_film)")
for (person, count) in awardNumbers moonlightData do
    printfn "%s,%d" person count

Finally, we could then collect this data for several films at once in parallel and then print the results for each film:

let urls = [
    "https://en.wikipedia.org/wiki/List_of_accolades_received_by_Spotlight_(film)"
    "https://en.wikipedia.org/wiki/List_of_accolades_received_by_Moonlight_(2016_film)"
    "https://en.wikipedia.org/wiki/List_of_accolades_received_by_La_La_Land_(film)"
]

let allMovies =
    urls
    |> Seq.map AccoladeData.AsyncLoad
    |> Async.Parallel
    |> Async.RunSynchronously
    |> Seq.map awardNumbers

for movie in allMovies do
    for (p,c) in movie do
        printfn "%s,%d" p c

Extracting data from CSVs

The F# Data package also provides a type provider for CSV files. Much like the HTML provider, you can also access all the column names as properties. Here’s a simple example that extracts data from the British Government’s list of MOT testing stations:

let [<Literal>] MOTUrl =
  "https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/613984/active-mot-testing-stations.csv"
// No need to specifically declare a type from the type provider if we are
// loading from one source
let data = new CsvProvider<MOTUrl>()

let stationsPerArea =
  data.Rows
  // Once again, column headers are the properties
  |> Seq.groupBy (fun row -> row.``VTS Address Line 4``)
  |> Seq.map (fun (location, rows) -> (location, Seq.length rows))
  |> Seq.sortBy (fun (location, count) -> count)

for (area, count) in stationsPerArea do printfn "%s,%d" area count

Conclusion

This is just a small glimpse of what you can do with F# type providers – the F# Data package also includes type providers for JSON files, for example.

Extra reading


Extensible X++: Chain of Command


As you can see on the Dynamics Roadmap, a new capability is being introduced in X++: it enables strongly typed extension of public and protected methods, including access to public and protected members.

Oh, I almost forgot: this is my new favorite X++ feature.

See this video to learn more.

THIS POST IS PROVIDED AS-IS AND CONFERS NO RIGHTS.

Supply chain security demands closer attention


April 26, 2017 – Paul Nicholas, Senior Director, Trustworthy Computing

This post is a translation of “Supply chain security demands closer attention.”

 

When we find ourselves in a dangerous situation, our first instinct is to look outward for the frightening threats. Sometimes, however, it pays to look inward. A good example is the security of the information and communications technology (ICT) supply chain.

Taking a close look inward can benefit everyone involved. Whether an attacker’s entry point is an internal system or a supplier’s system, even the slightest security breach can cause devastating damage. Compromising ATMs through a contractor is one such attack method, and it can lead to the exposure of personal information belonging to hundreds of millions of people.

Having worked on cybersecurity policy for more than 15 years, I believe that in a diverse, globalized, and connected world, a supply chain left unmanaged can pose a serious cybersecurity threat. Many products are built from components manufactured or modified by different companies in different places – this is true for both hardware and software. The global supply chain creates opportunities to slip in counterfeit components or malicious code, and the problem is not confined to a single region; its impact can reach the entire world.

This situation is neither entirely new nor unknown. From Microsoft’s perspective, the best approach to verifying ICT products and components – based on our experience in cyber supply chain risk management (C-SCRM) and aligned with our broader approach to all cybersecurity issues – is risk-based. If I were to lay out the basic elements of a posture toward supply chain risk management, they would be as follows:

  • A clear understanding of the critical supply chain risks that need to be mitigated. This requires conducting regular assessments and adjusting as threats and technologies change.
  • Principles and practices that promote transparency, accountability, and trust between companies, and between companies and authorities, while taking the threat lifecycle into account.
  • An understanding that flexibility is important. This means accounting for the facts that (i) vendors have different business models and markets, and (ii) even small technology changes can rapidly alter the threat model.
  • A comprehensive, C-SCRM-based approach to technical controls, operational controls, and vendor and personnel controls.

In addition to effective risk management, we also need a clear grasp of international standards for the international supply chain. Once we recognize that even a small vulnerability in a jurisdiction “over there” can give cybercriminals a way in “over here,” international standards become a common yardstick for judging how secure the foundations of a supply chain are.

As governments consider how to improve the security of the ICT supply chain, they should seek industry feedback on their proposals. In fact, I believe the best way to address this issue is for the public and private sectors to develop supply chain proposals together. Cooperating to counter supply-chain-driven cyberattacks benefits both nations and companies.

Microsoft is built on customers’ trust in its products. As a multinational company, Microsoft understands the importance of a secure supply chain that crosses borders. C-SCRM may still rarely be the first thing people reach for when thinking about cybersecurity, but Microsoft will continue to strongly advocate a risk-based, transparent, flexible, standards-driven, comprehensive, and global approach as the best way to secure the ICT supply chain.

A really COOL feature we noticed on VSTS – New Release Definition Editor


The feature was introduced in New Release Definition Editor in Team Services. It’s therefore not really new, but it is a cool feature that we explored and fell in love with.

Looking back – this is how one of our pipelines looks when viewed in the current (old) release editor.

SNAGHTMLa7e8d33

Looking forward – this is how the same pipeline looks, when viewed in the new release definition editor.

SNAGHTMLa7ed122

The difference is like night and day. The new experience is visual, intuitive, and cool.

It also aligns with pipeline diagrams we introduced in our CI/CD pipeline posts and the recent Phase the roll-out of your application through rings article.

image

How to enable the new editor experience

To enable this and other preview features, you need to log on to your Visual Studio Team Services (VSTS) account.

SNAGHTMLa5f06c4

  1. Click on your avatar.
  2. Select Preview features.
  3. Select whether the preview features apply to “me” or to “this account”.
  4. Toggle the preview features you’d like to explore, in this case the New Release Definition Editor.

If the preview feature is not yet listed, join the early adopters or keep an eye on your preview features. It’s coming!

What’s new in the latest preview?

We opted to update one of our hands-on lab manuals, 1.5 weeks before an inaugural event, when these two new features made it into the latest preview.

SNAGHTMLa5df6db

  1. Ability to configure your pipeline Artifacts for a new and blank release definition.
  2. Ability to Remove an environment.

There’s also an easier configuration experience and a productivity boost. Environment properties and deployment settings are now in-context, saving you a lot of confusing context switches, state saves, and meaningless mouse clicks.

Using the new experience to re-do our pipeline

Seeing is believing … let’s share and walk-through the new exercise of our updated hands-on lab manual.

Prerequisites

Overview

To complete the CI/CD pipeline we need to create a release that is triggered by the build artifact. We’ll use the Publish Extension task of the VSTS Developer Tools Build Tasks we used to package the extension during the build. It updates the VSIX package file by unzipping the content, updating the configuration values, and zipping all the files. The task then deploys it to the configured Marketplace publisher account and deploys the extension to distinct DEV –> BETA environments, as shown.

image

Create empty release

  • You can get started with your release definition in two ways:
    • Click on Release in your build summary and click the Yes button when prompted to create a new release definition.
      SNAGHTMLbbdd79b
      -or-
    • Click on Releases ❶ and then on New Definition
      SNAGHTMLbbff124
  • Select the Empty process.
    SNAGHTMLb5b9e67
  • Click on the Artifact trigger ❶ and verify that the Continuous deployment trigger ❷ is enabled.
    SNAGHTMLb5cb843
  • Click on the Build artifacts ❶ and verify that your Artifact ❷ defaults as shown.
    SNAGHTMLb5e551a
  • Click on the environment ❶ and change the name to DEV ❷.
    SNAGHTMLb5ebd79
  • Change the release name to Countdown Sample.
    SNAGHTMLb5f1379
  • Click on pre-deployment conditions ❶ and review the Approvals which are set to Automatic by default ❷.
    SNAGHTMLbc845f4

Configure DEV environment

  • Click on 1 phase(s) 0 task(s).
    SNAGHTMLb5fbecc
  • Click on + (add a task to the phase).
    SNAGHTMLb600ec1
  • Search for Publish Extension ❶ and click Add ❷ to add to the agent phase.
    SNAGHTMLb60485f
  • Click on Publish Extension task to configure the settings that need attention.
    SNAGHTMLb60873d
  • Set the VSTS Marketplace connection to the service endpoint ❶ you created in exercise 4, and select VSIX file ❷.
    SNAGHTMLb61992a
  • Configure the remainder of the Publish Extension task.
    SNAGHTMLb675f6f
    • VSIX file ❶ set to the $(System.DefaultWorkingDirectory)/MyFirstProject-CI/drop/output.vsix file, which was created by the build.
    • Publisher ID ❷ set to the marketplace publisher, which you created in Exercise 3.
    • Extension ID ❸ set to unique ID, for example CountdownSample.
    • Extension Tag ❹ set to DEV to match the DEV environment.
    • Extension name ❺ set to Count Down Sample DEV.

NOTE – If you deploy your extension to the same publisher and/or the same VSTS accounts, we recommend that you change the extension name to include the extension tag, for example CountdownSampleBETA. It makes it much easier to distinguish which extension is which by just looking at the name.
image

    • Override tasks version ❻ is checked.
    • Extension visibility ❼ set to Private.
    • Extension pricing ❽ set to Free.
    • Share with ❾ set to our VSTS account, whereby you can configure other accounts as well to share your DEV extension.
  • Save the release configuration – not strictly needed at this point, but do it if you’re as paranoid as I am.
    SNAGHTMLb6393c1

Configure BETA environment

  • Select the DEV environment ❶, click on Add ❷ and select Clone selected environment ❸.
    SNAGHTMLb63d3f7
  • Change the environment name to BETA.
    SNAGHTMLb640613
  • Click on pre-deployment conditions ❶ and review the trigger ❷. Select Specific users ❸ for approval type and add your account to the list of approvers.
    SNAGHTMLb6434e3
  • Update the configuration of the cloned Publish Extension task.
    SNAGHTMLb6562d4
    • Extension Tag ❶ set to BETA to match the BETA environment.
    • Extension name ❷ set to Count Down Sample BETA.
    • Share with ❸ set to our VSTS account, whereby you’d deploy to a different environment (ring) in a production environment.
  • Save the release definition
    SNAGHTMLb64965c

It’s time to validate the release

  • Click on + Release and select Create Release
    SNAGHTMLb64cd0c
  • Review the Artifacts, which refer to our latest build, the Automated deployments, and click Queue
    SNAGHTMLb64fb21
  • Click on the new release that’s been created to observe the deployment
    SNAGHTMLb651deb
  • Click on Logs and verify that the DEV release is successful and notice that the BETA release waits for manual approval, as configured
    SNAGHTMLb672833
  • Click on the Approvers ❶ icon and click on Approve ❷ to approve the release to the BETA environment
    image
  • Verify that the BETA release completes successfully as well
    SNAGHTMLb66fe44
  • Open a new browser tab and go to https://marketplace.visualstudio.com/manage/publishers
  • Verify your publisher is selected and that the Countdown Sample DEV and BETA extensions have been published successfully
    SNAGHTMLb6a3340

IMPORTANT – We’re intentionally NOT implementing the PROD environment in our hands-on lab, which publishes a public version of the extension. It’s important we do not duplicate features on the marketplace, and review the extension product documentation before we flip the public switch. There are scenarios in which you cannot undo, for example uninstall, a public extension publication.

That’s it! Enjoy the new editor!!!

Create Bot for Microsoft Graph with DevOps 10: BotBuilder features – FormFlow 201 FormBuilder


In this article, I explain FormBuilder, which builds a form from a model and gives you flexible options to customize the form.

FormBuilder
In the previous article, I used the following code to build a form.

return new FormBuilder<OutlookEvent>()
 .Message("Creating an event.")
 .AddRemainingFields() // add all (remaining) fields to the form.
 .OnCompletion(processOutlookEventCreate)
 .Build();

I also get the prompt strings from the model by using the Prompt attribute. Previously I used AddRemainingFields to add all fields at once, but this time I use the Field method to add fields one by one.
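
For reference, here is a minimal sketch of what the OutlookEvent model might look like. The real class comes from the previous article in this series, so the prompt texts and property types shown here are illustrative assumptions that simply match the fields used in the snippets below.

using System;
using Microsoft.Bot.Builder.FormFlow;

// Sketch of the form model. Property names match the fields referenced in this article;
// the Prompt texts and types are assumptions, not the exact code from the earlier post.
[Serializable]
public class OutlookEvent
{
    [Prompt("What is the title?")]
    public string Subject { get; set; }

    [Prompt("What is the detail?")]
    public string Description { get; set; }

    [Prompt("When do you start? Use dd/MM/yyyy HH:mm format.")]
    public DateTime Start { get; set; }

    [Prompt("Is this all day event?{||}")]
    public bool IsAllDay { get; set; }

    [Prompt("How many hours?")]
    public double Hours { get; set; }
}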

Add a Field

If you want to add fields one by one, you can use the Field method. In this example, I add only Subject and Description to the form.

return new FormBuilder<OutlookEvent>()
    .Message("Creating an event.")
    .Field(nameof(OutlookEvent.Subject))
    .Field(nameof(OutlookEvent.Description))
    .OnCompletion(processOutlookEventCreate)
    .Build();

Add Prompt

Let’s add prompt strings.

return new FormBuilder<OutlookEvent>()
    .Message("Creating an event.")
    .Field(nameof(OutlookEvent.Subject), prompt: "What is the title?")
    .Field(nameof(OutlookEvent.Description), prompt: "What is the detail?")
    .OnCompletion(processOutlookEventCreate)
    .Build();

Control display

If you want to control whether a field should be displayed, you can use the active parameter. In this case, I decide whether the Hours field should be displayed, depending on whether the event is an all-day event.

return new FormBuilder<OutlookEvent>()
    .Message("Creating an event.")
    .Field(nameof(OutlookEvent.Subject), prompt: "What is the title?")
    .Field(nameof(OutlookEvent.Description), prompt: "What is the detail?")
    .Field(nameof(OutlookEvent.Start), prompt: "When do you start? Use dd/MM/yyyy HH:mm format.")
    .Field(nameof(OutlookEvent.IsAllDay), prompt: "Is this all day event?{||}")
    .Field(nameof(OutlookEvent.Hours), prompt: "How many hours?", active: (state) =>
    {
        // If this is all day event, then do not display hours field.
        if (state.IsAllDay)
            return false;
        else
            return true;
    })
    .OnCompletion(processOutlookEventCreate)
    .Build();

Validate the input

You can also validate the input by using the validate parameter.

return new FormBuilder<OutlookEvent>()
    .Message("Creating an event.")
    .Field(nameof(OutlookEvent.Subject), prompt: "What is the title?", validate: async (state,value) =>
    {
        var subject = (string)value;
        var result = new ValidateResult() { IsValid = true, Value = subject };
        if (subject.Contains("FormFlow"))
        {
            result.IsValid = false;
            result.Feedback = "You cannot include FormFlow as subject.";
        }
        return result;

    })
    .Field(nameof(OutlookEvent.Description), prompt: "What is the detail?")
    .Field(nameof(OutlookEvent.Start), prompt: "When do you start? Use dd/MM/yyyy HH:mm format.")
    .Field(nameof(OutlookEvent.IsAllDay), prompt: "Is this all day event?{||}")
    .Field(nameof(OutlookEvent.Hours), prompt: "How many hours?", active: (state) =>
    {
        // If this is all day event, then do not display hours field.
        if (state.IsAllDay)
            return false;
        else
            return true;
    })
    .OnCompletion(processOutlookEventCreate)
    .Build();

Try with emulator

Run the application and try it with the emulator.

image

image

Now all the tests should pass, as the bot behaves exactly the same as before.

Use current value for message

If you want to show a message with the current field value, you can do so by using the following code.

return new FormBuilder<OutlookEvent>()
    .Message("Creating an event.")
    .Field(nameof(OutlookEvent.Subject), prompt: "What is the title?", validate: async (state, value) =>
    {
        var subject = (string)value;
        var result = new ValidateResult() { IsValid = true, Value = subject };
        if (subject.Contains("FormFlow"))
        {
            result.IsValid = false;
            result.Feedback = "You cannot include FormFlow as subject.";
        }
        return result;

    })
    .Message(async (state) => { return new PromptAttribute($"The current subject is {state.Subject}"); })
    .Field(nameof(OutlookEvent.Description), prompt: "What is the detail?")
    .Field(nameof(OutlookEvent.Start), prompt: "When do you start? Use dd/MM/yyyy HH:mm format.")
    .Field(nameof(OutlookEvent.IsAllDay), prompt: "Is this all day event?{||}")
    .Field(nameof(OutlookEvent.Hours), prompt: "How many hours?", active: (state) =>
    {
        // If this is all day event, then do not display hours field.
        if (state.IsAllDay)
            return false;
        else
            return true;
    })
    .OnCompletion(processOutlookEventCreate)
    .Build();

Confirmation

If you want to add a confirmation step, you can do so by using the Confirm method. Again, you can use the current field values.

return new FormBuilder<OutlookEvent>()
    .Message("Creating an event.")
    .Field(nameof(OutlookEvent.Subject), prompt: "What is the title?", validate: async (state, value) =>
    {
        var subject = (string)value;
        var result = new ValidateResult() { IsValid = true, Value = subject };
        if (subject.Contains("FormFlow"))
        {
            result.IsValid = false;
            result.Feedback = "You cannot include FormFlow as subject.";
        }
        return result;

    })
    .Message(async (state) => { return new PromptAttribute($"The current subject is {state.Subject}"); })
    .Field(nameof(OutlookEvent.Description), prompt: "What is the detail?")
    .Field(nameof(OutlookEvent.Start), prompt: "When do you start? Use dd/MM/yyyy HH:mm format.")
    .Field(nameof(OutlookEvent.IsAllDay), prompt: "Is this all day event?{||}")
    .Field(nameof(OutlookEvent.Hours), prompt: "How many hours?", active: (state) =>
    {
        // If this is all day event, then do not display hours field.
        if (state.IsAllDay)
            return false;
        else
            return true;
    })
    .Confirm(async (state) =>
    {
        if (state.IsAllDay)
            return new PromptAttribute("Are you sure if this is all day events?");
        else
            return new PromptAttribute($"Are you sure the event is {state.Hours} hours long?");
    })
    .OnCompletion(processOutlookEventCreate)
    .Build();

Try it with the emulator yourself.
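
For completeness, this is roughly how such a form is typically wired into a Bot Builder dialog. This is only a sketch: CreateEventDialog is a hypothetical class name, and BuildOutlookEventForm stands in for the FormBuilder-based method shown above (the minimal version is repeated here to keep the sample self-contained).

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.FormFlow;

[Serializable]
public class CreateEventDialog : IDialog<object>
{
    public Task StartAsync(IDialogContext context)
    {
        // Build the form and start it; PromptInStart asks the first question immediately.
        var form = FormDialog.FromForm(BuildOutlookEventForm, FormOptions.PromptInStart);
        context.Call(form, ResumeAfterForm);
        return Task.CompletedTask;
    }

    // Stand-in for the full FormBuilder method described in this article.
    private IForm<OutlookEvent> BuildOutlookEventForm()
    {
        return new FormBuilder<OutlookEvent>()
            .Message("Creating an event.")
            .AddRemainingFields()
            .Build();
    }

    // Called once the form completes.
    private async Task ResumeAfterForm(IDialogContext context, IAwaitable<OutlookEvent> result)
    {
        var outlookEvent = await result;
        await context.PostAsync($"Created event: {outlookEvent.Subject}");
        context.Done<object>(null);
    }
}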

Summary

FormFlow is very easy to use, yet flexible. It also provides out-of-the-box features such as ‘help’ and ‘state’, so please read the documentation as you need it.

GitHub: https://github.com/kenakamu/BotWithDevOps-Blog-sample/tree/master/article10

Ken

Enhance Power BI with DAX


Power BI (PBI) is a suite of business analytics tools that deliver insights throughout your organization. Its ease of use and powerful features have earned PBI a great reception in the market since it was introduced in July 2015. You can find the latest releases, learning guide, detailed documentation as well as in-depth discussion on diverse topics, starting from here.

Data Analysis Expressions (DAX) is a functional and query language by Microsoft. It is the formula language used throughout Power BI. DAX in PBI lets you flexibly transform and manipulate your data and carry out dynamic aggregation and other operations, and mastering it will help you get the most out of your data. DAX is not a programming language, though; it has unique abstract concepts that do not exist in other programming languages, so a traditional approach to studying a programming language is not effective for learning DAX. A good starting point can be found here.

Recently, when working with PBI for visualization, we had an opportunity to use DAX to implement a feature required by a customer, as there was nothing we could use directly in PBI to fulfill it. We solved it by utilizing DAX’s capability and the way PBI incorporates DAX. Here are the two key difficulties – the need to:

  • Calculate some measures based on user’s input (PBI does not have an out-of-box feature to take a user’s input)
  • Toggle the visibility for a chart (based on user’s input) when the control criteria did not exist in, or could not be easily mapped to, any column in our existing tables.

The purpose of this blog posting is to discuss an approach that to some extent solves these problems.

Calculating measures based on a user’s input

Each issue involves taking a user’s input, so let’s look at that first. Within the PBI UI, the only component we could use to solve the problem is the built-in Slicer visual. Because a Slicer can slice a table down to a single row, this feature can be used to capture user input. The flow diagram and the list below illustrate the process.

  1. Create a standalone table containing all input values (in a column) in the data model. The first step is to create a table that contains all input values for the data model. Whether to create a new table or to duplicate an existing one depends on which option will work best for you. The minimum requirement is that one column in the table contains all possible values for selection. You might also need to include a column of description text to make it user friendly, and a column of numerical identities for each value to make the filtering calculation easier. Note that this table must be stand-alone, with no relationship with other tables.
  2. Properly design and run a Slicer instance on the report page to slice the new table. In the PBI desktop, create a Slicer instance on report page by using the table that was created in step 1. Be sure to design the slicer properly so that you can determine the value a specific user has selected. While it is best to slice to a single value, multiple identical values (likely in case you copy an existing table) are also workable with the help of the DISTINCTCOUNT function.
  3. Create a measure in data model to pick up the sliced value. Add a calculated measure, for example “myinput”, in the model and assign it the value from a suitable column (depending on how you use it in the calculation) of the standalone table. At this point, “myinput” can be used in the desired calculation. As a result, when user selects a different value, “myinput” will be re-evaluated. All measure(s) referencing “myinput” will also be re-evaluated, and propagate to the whole dependency chain. The corresponding visual is then re-rendered with new value(s).

It is important to note that after a table/column is loaded into Power BI, the data becomes “static.” Even if there is calculation logic in your model to dynamically change the value(s) in a column, the values will not be shown in your Power BI report until you reload the data. As a result, to view the change in the report on a user’s input, implement logic with “calculated measure,” and not with “calculated column”. The value of a calculated measure is evaluated at query time. This guarantees that every time a user chooses a value, the measure based on it will be re-calculated, and the corresponding visual will be rendered again.

Toggling the visibility for a chart

The second problem we encountered relates to the visibility of a chart. After input is picked up from a user, controlling the visibility of a chart can be tackled in various ways. Basically, the logic behind the visual must be implemented by “calculated measure.” That is, to perform the calculation for the measure upon user’s positive selection, or to assign a “BLANK” (the return value of BLANK() function) to the measure otherwise.

Below is a simple example that outlines the entire process – the content in the example is for illustration purposes only.

A farmer’s shop sells produce. The shop uses PBI to show volume of sales and the average unit price over the first week of 2017. Below is the data model and the visual built in PBI. Constructing this visual is straightforward, and it uses the built-in summarize function over the “Sales” column to derive total sales.

 

Now consider that the owner wants to toggle the bar charts for odd and even days’ sales in the same graph without affecting the line chart for average unit price. This would make it easier to determine if the local “odd-even-days” events have any impact on the sales. That is, when choosing odd days, the graph displays bars only for January 1st, 3rd, 5th and 7th, while no bars are shown for the 2nd, 4th, and 6th. When choosing even days, the bars display the other way around, but in each case, the line-chart is not impacted, as shown in the following images:

 

Let’s look at the process for implementing this functionality.

Step 1. Create a standalone table containing all input values (in a column) in the data model. The owner needs to choose between odd and even days, so the table should have a column of day type, and two values, one representing odd days and the other even days. Providing a numerical ID for each value will make the DAX calculation easier. The result is a 2-by-2 standalone table, called OddEven.

After adding this table, the data model will appear as shown in the following image:

Step 2. Properly design and run a Slicer instance on the report page to slice the table above. In PBI, create an instance of the built-in Slicer visual, choosing the “DateType” column in the field of the Slicer.

Step 3. Create a measure in the data model to pick up the sliced value.

DaySignature = IF(COUNT(OddEven[Signature])=1, MAX(OddEven[Signature]), 0)

This way, DaySignature holds 1 when an odd day is selected, and it holds 2 when an even day is selected. If neither or both are chosen – should you accidentally set the Slicer to the wrong mode for this purpose – DaySignature holds the value 0.

Finally, we can use the measure “DaySignature” in the calculation for the visibility control of the charts in our report.

Below is the calculation for measures for odd and even days’ sales. As discussed earlier, remember to implement visual logic with calculated measure to be refreshable upon data change, i.e. a change in a user’s selection. Because we need individual visibility control over odd and even days’ sales chart, it would be easier if we calculate them separately.

TotalSales = SUM(Fact_Sales[Sales])

TotalSalesEvenDays = IF([DaySignature]=1, BLANK(), CALCULATE([TotalSales], FILTER(Fact_Sales, NOT(ISODD(Fact_Sales[DateId])))))

TotalSalesOddDays = IF([DaySignature]=2, BLANK(), CALCULATE([TotalSales], FILTER(Fact_Sales, ISODD(Fact_Sales[DateId]))))

A better way to implement this would be with a single measure – it would be tidier and have more options when choosing visual types.

TotalSalesOddEven = SWITCH([DaySignature],
    1, CALCULATE([TotalSales], FILTER(Fact_Sales, ISODD(Fact_Sales[DateId]))),
    2, CALCULATE([TotalSales], FILTER(Fact_Sales, NOT(ISODD(Fact_Sales[DateId])))),
    [TotalSales])

Hopefully, this simple example has provided you with a solid understanding of the topic. For your reference, the data and all related code is included in a pbix file you can access here (Example).

A final point to note here is that using this approach has a limitation. The input offered to the user for selection must be pre-defined and must be discrete/scattered values. In other words, be sure to put all of them into a table. If your scenario involves arbitrary input, for example 5.25 or 1.23456, this approach will not be feasible.

Recognition

The Data Migration Team would like to thank primary contributors Bin Zhao, Andy Isley, Kasper de Jong, and Mukesh Kumar for their efforts in preparing this blog posting. The detail provided has been harvested as part of a customer engagement sponsored through the DM Jumpstart Program.

 

Using PerfView with Azure Service Fabric Event Source Messages


This post is provided by Senior App Dev Manager, Mark Eisenberg, who spotlights the use of PerfView as a handy tool for debugging Azure Service Fabric applications.


The Service Fabric tooling provides a Diagnostic Events viewer for Visual Studio that displays Event Tracing for Windows (ETW) messages generated by the event sources provided with the SDK, ServiceEventSource and ActorEventSource. When working on a project with eight actors, two of which had hundreds of instantiations, it did not take long to swamp the built-in view. It topped out at 5000 messages and then began dropping the oldest messages. In addition, it could not keep up with the message rate.

One thing I learned quickly when debugging an actor-based application is that, as with any highly concurrent architecture, traditional debuggers prove not to be useful. The limitations of the Diagnostic Events viewer also quickly made it unhelpful. A full-length run of the system generated on the order of 50000 messages in about a minute and a half. But I had to see what I had to see, and I was assured that the problem was with the viewer and not the ETW system.

A wise man pointed me to PerfView, which, despite my advanced years, I had never used. The challenges I ran into are likely laughable to those who have had the opportunity to troubleshoot real-time problems in Windows-based systems, but this post is for everyone else. BTW, there are several other ways to capture ETW traces when the cluster is running on a real cluster of machines. This article is about using a cluster on a developer’s own machine.

Step 1 – download PerfView from Download PerfView.

It’s standalone so just put the exe someplace convenient.

Step 2 – Make a note of the name(s) of your event source(s)

Open up each of the ServiceEventSource.cs and ActorEventSource.cs files and make a note of the event source name:

[EventSource(Name = "Incelligence-TestWebService-TestWebApi")]

internal sealed class ServiceEventSource : EventSource

[EventSource(Name = "Incelligence-BuildIPMLApplication-Ipml")]

internal sealed class ActorEventSource : EventSource

Another way to accomplish this is by running your app with the Diagnostics Events viewer and looking for the “ProviderName” in the JSON for the events in which you are interested:

pv1

Step 3 – Fire up PerfView

pv2

Step 4 – Collect->Collect or Alt-C and expand the advanced options

pv3

Step 5 – Untick all of the provider boxes and fill in the Additional Providers field with the Service Fabric providers you need such as “*Microsoft-ServiceFabric-Actors” and “*Incelligence-BuildIPMLApplication-Helpers”. Don’t forget the “*”. It is important. Don’t know why, but nothing happens if you leave it out.

pv4

Step 6 – Click “Start Collection”

Step 7 – Run your application

Step 8 – Click “Stop Collection”

Step 9 – Wait until the processing phase completes, which will take a while and results in this:

pv5

Step 10 – Double-click on Events. A couple of things to note here. The default maximum number of records returned is 10000. In the screenshot below I have set it to 50000, and this filter returned 81600, as shown at the bottom of the window. You can select multiple Event Types (I have two selected) and then hit Update. I have also set a filter to only show the message column. Depending on what you are looking for, the Text Filter can be invaluable.

pv6

Summary – The PerfView tool will reveal everything a developer needs to know about long-running Service Fabric applications. It will catch all log messages, whereas the integrated Diagnostic Events viewer can lose messages when the message rate gets high. Developers need to make sure they properly instrument their code, but if they do, problems cannot stay hidden for long.
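
As a rough illustration of what that instrumentation can look like, here is a minimal sketch of an actor method that writes an ETW message at each significant step. MyActor, IMyActor, and the workItemId parameter are hypothetical, and ActorEventSource is assumed to be the class generated by the Service Fabric actor project template.

using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

public interface IMyActor : IActor
{
    Task DoWorkAsync(string workItemId);
}

internal class MyActor : Actor, IMyActor
{
    public MyActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId)
    {
    }

    public Task DoWorkAsync(string workItemId)
    {
        // Each step emits an ETW event that PerfView (or the Diagnostic Events viewer) can pick up.
        ActorEventSource.Current.ActorMessage(this, "Starting work item {0}", workItemId);

        // ... actual work would go here ...

        ActorEventSource.Current.ActorMessage(this, "Finished work item {0}", workItemId);
        return Task.CompletedTask;
    }
}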

Epilogue – I have not run this application in a couple of months, and Service Fabric has been updated since then. Previously, as I mentioned in the introduction, a run would generate on the order of 50000 messages. This run took almost 15 minutes, hit over 700000 records, and logged 9000 ActorMethodThrewException events that did not use to be there. Looks like I will have to use what I just wrote about to ferret out whatever has cropped up.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.


Serverless computing with Azure Function in a DevOps Model


In a DevOps model, one of the crucial responsibilities for engineers is managing infrastructure. No matter how much we automate, this task becomes cumbersome, especially when multiple servers are involved. While Azure ARM templates help us spin up an environment on the fly, we still need to maintain and support the servers. To overcome this hurdle, the direction we are moving in is to go “serverless”: the cloud provider dynamically manages the allocation of machine resources and bills based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity. This reduces the cost incurred by an application using compute resources. In a nutshell, we do not pay for scale or for the capacity of resources, but just for the compute (how many times we run and how long we run). And of course, we don’t have to think about managing those servers.
In this post, I’m going to talk about how we can build a solution using Azure Functions (the latest version, with VS2017 Preview 3) in a DevOps fashion to run small, discrete pieces of software in the cloud.

Those who have been using the Azure Functions tooling released with VS2015 might have noticed that the model has completely changed, although the fundamentals are the same. Please watch the Part 1 and Part 2 Channel 9 videos in which Jeff Hollan explains it all in depth. The version released with VS2015 has been discontinued, as the future focus is on pre-compiled functions with the intent of supporting .NET Standard 2.0; there are dependencies that will only exist in Visual Studio 2017 Update 3 and beyond.
Let’s dive into the flow to see how we can create, build, and deploy Azure Functions.

Prerequisites

  • You must have either the “ASP.NET and web development” or “Azure development” workload installed

Create

Creating an Azure Functions project in Visual Studio is as easy as selecting the project template. Then it is a matter of what logic you need, which you can create by adding a function from the available options.

Right-click on the project to add a new function which will then walk you through the selection wizard.

Cloud -> Azure Functions

 

In the Solution Explorer, you will see that we now have an easy option to add dependent libraries from nuget.org or any NuGet store. And yes, it now builds into a class library. This might be new to people who worked with Azure Functions in VS2015.

Azure Function (VS2017) changes

 

There is a list of options available when you create a new Azure function; please refer to this MSDN article for details on Azure triggers and bindings, each of which is explained in detail.
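
For orientation, the HTTP-triggered function generated by the template looks roughly like the following sketch (the class name, route, and message text are illustrative, not the exact template code):

using System.Linq;
using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

public static class HttpTriggerCSharp
{
    // Pre-compiled function: the FunctionName and HttpTrigger attributes replace the
    // hand-written function.json of the old VS2015 tooling.
    [FunctionName("HttpTriggerCSharp")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequestMessage req,
        TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        // Read the "name" query string parameter, if present.
        string name = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
            .Value;

        return name == null
            ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string")
            : req.CreateResponse(HttpStatusCode.OK, $"Hello {name}");
    }
}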

I want to spend a minute looking at the folder structure created when we build the solution.

Inside the folder ..\ganesh\..\ServerlessComputingSample\ServerlessComputingSample\bin\Debug\net461 you will see the following

And inside HttpTriggerCSharp (the sample function that I added) we will see function.json.

So, unlike with the VS2015 template, function.json is now part of the build output rather than part of the solution.

Here is its content:

As you will have noticed, function.json includes the details of the bindings and the location of the library.

Note:

Connecting to an on-premises SQL Server from Azure Functions is possible, and it is identical to how we do it for App Services. The only thing we should change is that, when creating the function app, we should use an App Service plan instead of the Consumption plan. Once we select an App Service plan, we can click on All settings, create a hybrid connection, and follow through to connect to the on-premises SQL Server. To understand the difference between the Consumption and App Service plans, please refer to this article.

 

Build and Release Tasks in VSO

A simple set of build tasks includes the following, where we build the solution and copy the artifacts. The MSBuild arguments used to package the solution are included for the sake of clarity.

Since an Azure Functions project builds as a DLL, there is a small change I have made to my solution structure. Although the function builds as a DLL, it follows the App Service path for deployment. As such, for deployment automation, I have added another empty App Service project to my solution with a folder named Library, made that folder the output path for the Function project, and added a reference to the function DLL generated in the Library folder. We use this project during deployment to simply deploy the App Service, which in turn deploys our function app.

Coming to the release definition, we will see the following tasks (you can have additional tasks based on your project requirements):

The first task is for resource deployment. At the beginning I mentioned deploying resources through a resource template; this is where we do that for our function app. There are many resource templates available in the Azure Quickstart Templates that you can use in your resource group project for deployment.

To automate resource group creation, I have added an empty Azure Resource Group project and modified it.

Here is the sample template content that needs to be placed in azuredeploy.json:

{
   "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
   "contentVersion": "1.0.0.0",
   "parameters": {
       "appName": {
           "type": "string",
           "metadata": {
               "description": "The name of the function app that you wish to create."
           }
       },
       "sku": {
           "type": "string",
           "allowedValues": [
               "Free",
               "Shared",
               "Basic",
               "Standard"
            ],
           "defaultValue": "Standard",
           "metadata": {
               "description": "The pricing tier for the hosting plan."
           }
       },

       "workerSize": {
           "type": "string",
           "allowedValues": [
               "0",
               "1",
               "2"
           ],
           "defaultValue": "0",
           "metadata": {
               "description": "The instance size of the hosting plan (small, medium, or large)."
           }
       },

       "storageAccountType": {
           "type": "string",
           "defaultValue": "Standard_LRS",
           "allowedValues": [
               "Standard_LRS",
               "Standard_GRS",
               "Standard_ZRS",
               "Premium_LRS"
           ],
           "metadata": {
               "description": "Storage Account type"
           }
       }
   },

   "variables": {
       "functionAppName": "[parameters('appName')]",
       "hostingPlanName": "[parameters('appName')]",
       "storageAccountName": "[concat(uniquestring(resourceGroup().id), 'functions')]"
   },

   "resources": [
       {
           "type": "Microsoft.Storage/storageAccounts",
           "name": "[variables('storageAccountName')]",
           "apiVersion": "2015-06-15",
           "location": "[resourceGroup().location]",
           "properties": {
               "accountType": "[parameters('storageAccountType')]"
           }
       },
       {
           "type": "Microsoft.Web/serverfarms",
           "apiVersion": "2015-04-01",
           "name": "[variables('hostingPlanName')]",
           "location": "[resourceGroup().location]",
           "properties": {
               "name": "[variables('hostingPlanName')]",
               "sku": "[parameters('sku')]",
               "workerSize": "[parameters('workerSize')]",
               "hostingEnvironment": "",
               "numberOfWorkers": 1
           }
       },
       {
           "apiVersion": "2015-04-01",
           "type": "Microsoft.Web/sites",
           "name": "[variables('functionAppName')]",
           "location": "[resourceGroup().location]",
           "kind": "functionapp",
           "properties": {
               "name": "[variables('functionAppName')]",
               "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
               "hostingEnvironment": "",
                     "clientAffinityEnabled": false,
                     "alwaysOn": true
          },
           "dependsOn": [
               "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
               "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
           ],
           "resources": [

               {
                   "apiVersion": "2016-03-01",
                   "name": "appsettings",
                   "type": "config",
                   "dependsOn": [
                       "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
                       "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
                   ],

                   "properties": {

                       "AzureWebJobsStorage": "[concat('DefaultEndpointsProtocol=https;AccountName=',variables('storageAccountName'),';AccountKey=',listkeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2015-05-01-preview').key1,';')]",

                       "AzureWebJobsDashboard": "[concat('DefaultEndpointsProtocol=https;AccountName=',variables('storageAccountName'),';AccountKey=',listkeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2015-05-01-preview').key1,';')]",

                       "FUNCTIONS_EXTENSION_VERSION": "~1"
                   }
               }
                     ]
       }
   ]
}

 

I wanted to point out that you will need a service principal for this task, which you can create by following the details in this article.

The Azure App Service deployment task is where we deploy the App Service project that we created earlier. I won’t go into details, as this is straightforward.

Configuration setup is done through a PowerShell script. Here is the PowerShell script you can use to set up the Azure Function app settings; you can run it from a script task for deployment automation.

Param(
[string]$myResourceGroup,
[string]$mySite)
$webApp = Get-AzureRMWebAppSlot -ResourceGroupName $myResourceGroup -Name $mySite -Slot production
$appSettingList = $webApp.SiteConfig.AppSettings
$setting = @{}
ForEach ($kvp in $appSettingList) {
$setting[$kvp.Name] = $kvp.Value
}
$setting['settingName1'] = "<SettingValue1>"
$setting['settingName2'] = "<SettingValue2>"
$setting['settingName3'] = "<SettingValue3>"
Set-AzureRMWebAppSlot -ResourceGroupName $myResourceGroup -Name $mySite -AppSettings $setting -Slot production
Write-Host "Done!"

It is important to have proper test automation, as that determines how reliable our automation is.

Conclusion

Putting it all together, in this post we saw how easy it is to stitch everything together: resources are created from a resource template, build and release are automated, and application settings are created automatically through PowerShell for serverless Azure Functions. Adding a CI/CD trigger in VSO turns this into a model where each code check-in triggers a build and its corresponding release. This, in a way, helps us get our code to production much faster.

Hope this helps while working with Azure functions…

————————————————————————————————————————————————————————————–

^Ganesh Shankaran

Event ID 1085 from “Internet Explorer Zonemapping” Part 2 – ZoneMap Troubleshoot tool


In the blog post “Description of Event ID 1085 from ‘Internet Explorer Zonemapping’” we already explained that an invalid entry within the Site To Zone Assignment List policy will cause Event 1085, but it is still not easy to determine which exact entries are invalid and therefore are not converted into the intended zone mapping.

When examining a handful of those entries, it may appear appropriate to enter the URLs as Trusted Sites within the Internet Options on a client that does not receive the assignment policy, until you find the invalid entry that causes the following message:

 

But when this list exceeds a few pages in the Group Policy report, the effort becomes very high.

In order to help administrators find such invalid entries, I wrote the attached command-line utility Site2ZoneMap.exe, which interprets the entries below the two registry keys:

[HKCU\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\Zonemapkey]

[HKLM\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\Zonemapkey]

 

Command-line utility with parameters:

  • -test: Processes the keys temporarily only (deleted afterwards)
  • -erroronly: Like -test, but only entries with errors are displayed
  • -keeptest: Processes the keys to HKCU\Software\SiteToZoneAssignmentTool
  • -process: Processes the ZoneMapKey to the corresponding Policies hive (requires administrative elevation!)

EXAMPLE: Assume the following policy, which has a correct value and an invalid entry (*.com):

When you execute the tool with Parameter “-test“, you receive the following output in CMD:

C:\>site2zonemap -test

Processing [HKCU\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\Zonemapkey]

==========================================================================

Success: *.microsoft.com

Error 87 for URL: *.com

1 Errors found.

When you execute the tool with Parameter “-erroronly “, you receive the following output in CMD:

C:\>site2zonemap -erroronly

Processing [HKCU\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\Zonemapkey]

==========================================================================

Error 87 for URL: *.com

1 Errors found.


DOWNLOAD: Site2ZoneMap


This blog has been provided to you by Heiko Mayer and the IE Support team!

3 Legal Decisions Every Founder Must Face


Guest post by Nick McNamara – Paralegal at Lehman Walsh Lawyers

The recent example of Snapchat, along with countless others, demonstrates why professional legal advice is imperative for all startups. All founders are faced with crucial legal decisions that can be navigated with the assistance of a legal professional. Some of these decisions include when to form a company, how to allocate equity, and how founders can protect themselves and their product.

 

Forming a Company

Forming a company has the benefit of limiting the liability of the shareholders. Australian law views a company as a separate legal entity that can own and dispose of property, enter contracts and sue or be sued. It is distinct from the owners, managers, operators, employees and agents and thus protects their personal assets. Formation of a company should occur early in a startup’s life to safeguard the members from any liabilities that may arise in the future. Limiting a founder’s liability is critical as startups require a significant investment of time and money and have a high failure rate. 

One of the greatest challenges for a startup is raising and maintaining sufficient capital to continue the endeavour. Investors find companies as the most appealing business structure as equity can be provided for their investment. Options such as convertible notes can be used before an accurate valuation of the company is made, allowing for startups to generate capital in their earlier stages. Companies also have the benefit of a flat tax rate of 30 percent and startups may qualify for additional tax relief or financial aid via grants from the Australian Government.

 

Allocating Equity

Two decisions should be made about allocating equity. The first is how the equity will be allocated among the founders, other key members and investors. The second is when the equity will be vested. 

Founders must determine how the equity will be allocated among the current members; this should be completed with a view of what roles each member will undertake in the future. A shareholder agreement should be drafted to set out the rights, obligations and share allocations of each shareholder and provide a guide to how the company will be operated. Startups are often innovative, and the value of original ideas, sustained effort and varying tasks can be clouded over time. Therefore it is best to determine this early in the startup’s life. Snapchat’s recent lawsuit by former founder Reggie Brown is an example of why recording each member’s involvement from inception is crucial. Other examples include uBeam and Facebook, where disagreements between the founding members have resulted in lawsuits. 

When equity vests in each member is as important as how it is allocated. Vesting clauses dictate when each shareholder will receive their shares. A share subscription agreement can be drafted to determine the process of vesting the shares. Usually, this will occur over a four-year period, with the shareholder receiving 25% of their shares after each year. This provides an additional incentive for members to remain with the startup, reducing the risk of losing key resources needed to achieve the startup’s goals.

 

Protecting the Founders and their Product

Scaling a startup’s team to operate within the confines of the law is essential. While limited liability is a benefit for forming a company, there are instances where this is not afforded to certain individuals within the organisation. In Australia, some examples of where limited liability will not apply include when a company knowingly trades while insolvent, where a director acts outside their authority and commits a wrongful act and where a director enters into an agreement to prevent employees from recovering their entitlements. There are many scenarios where limited liability will become unlimited; this is a contentious area of law so it is important for members of startups to be aware of how the law is changing and what actions could make them personally liable.

A startup’s key asset is its product; therefore, the founding members should ensure that the product is adequately protected from the startup’s inception. Active protection of the company’s intangible assets should be a priority with all employees entering into a non-disclosure agreement and assigning intellectual property to the company via an intellectual property assignment agreement. This is particularly important in the early stages of the startup when intellectual property is owned by individual members and not the company. Once the company is established, intellectual property that is created by an employee of the company during their employment will be owned by the company. However, if the company uses an independent contractor instead of an employee, then the intellectual property will likely be owned by the contractor. It can be difficult at times to determine if a company has hired an employee or enlisted an independent contractor. Features such as control over work, if the tax is deducted from remuneration and whether tools and equipment were provided are used to determine if the company has hired an employee or an independent contractor. For this reason, it is paramount that the employee/employer relationship is defined in the employment contracts and that the contract dictates who owns any intellectual property created.

The examples of Snapchat, Facebook and uBeam demonstrate the necessity of professional legal advice at the time a company is founded. Startups are vulnerable in their initial stages; consulting a lawyer early on will assist in protecting their product, reducing their liability and removing ambiguity in any future disagreements.

C++ Code Editing and Navigation in Visual Studio


Original post: https://blogs.msdn.microsoft.com/vcblog/2017/04/20/c-code-editing-and-navigation-in-visual-studio/

Author: Augustin Popa

Visual Studio provides an effective set of tools that make it easy for C++ developers to read, edit, and navigate their code. In this blog post we will dive into these features and go over how they work. This post is aimed at users who are new to Visual Studio.

This post covers the following topics:

  1. Reading and understanding code

  2. Navigating your codebase

  3. Editing and refactoring code

  4. Keyboard shortcut reference

  5. Conclusion

Reading and understanding code

If you are like most developers, chances are you spend more time reading code than modifying it. With that in mind, Visual Studio provides a set of features to help you better visualize and understand your projects.

Basic editor features

Visual Studio automatically colorizes your code to distinguish between different types of symbols. Unused code (for example, code under #if 0) is more faded in color. In addition, outlines are added around code blocks to make it easy to expand or collapse them.

If there is an error in your code that will cause your build to fail, Visual Studio adds a red squiggle where the problem occurs. If Visual Studio finds an issue with your code that will not cause the build to fail, you will see a green squiggle instead. You can view any compiler-generated warnings or errors in the Error List window.

If you place the cursor on a brace, "{" or "}", Visual Studio highlights its matching counterpart.

You can zoom in or out in the editor by holding down the Ctrl key and scrolling the mouse wheel, or by selecting the zoom setting in the bottom-left corner.

The Tools > Options menu is the central location for Visual Studio options and lets you configure a wide variety of features. It is worth exploring to tailor the IDE to your unique needs.

You can add line numbers by going to Text Editor > All Languages > General, or by searching for "line numbers" with Quick Launch (Ctrl + Q). Line numbers can be set for all languages or for specific languages, including C++.

Quick Info and Parameter Info

You can hover over any variable, function, or other code symbol to get information about that symbol. For symbols that can be declared, Quick Info displays the declaration.

Parameter Info is invoked as you write out a function call, clarifying the types of the input parameters expected. If there is an error in your code, you can hover over it and Quick Info will display the error message; you can also find the error message in the Error List window.

In addition, Quick Info displays any documentation placed above the definition of the symbol you hover over, giving you an easy way to review the documentation in your code.

Scroll bar map mode

Visual Studio takes the concept of the scroll bar much further than most applications. With scroll bar map mode, you can scroll and browse through a file without leaving your current location, or click anywhere on the bar to navigate there. Even with map mode turned off, the scroll bar highlights the changes you make in green (for saved changes) and yellow (for unsaved changes). You can turn on map mode in Tools > Options > Text Editor > All Languages > Scroll Bars > Use map mode for vertical scroll bar, or by searching for "map" with Quick Launch (Ctrl + Q).

Class View

There are several ways to visualize your code. One example is Class View. You can open Class View from the View menu or by pressing Ctrl + Shift + C. Class View displays a searchable tree of all code symbols and their scope and parent/child hierarchies, organized on a per-project basis. You can configure what Class View displays from Class View Settings (click the gear icon at the top of the window).

Generate graph of include files

To understand dependency chains between files, right-click in any open document and choose Generate graph of include files.

You can also save the graph for later viewing.

View Call Hierarchy

You can right-click any function call to view a recursive list of its call hierarchy (both the functions that call it and the functions it calls). Each function in the list can be expanded in the same way. For more information, see Call Hierarchy.

Peek Definition

You can check out the definition of a variable or function at a glance by right-clicking it and choosing Peek Definition, or by pressing Alt + F12 with the cursor over the symbol. This is a quick way to learn more about a symbol without having to leave your current position in the editor.

Navigating your codebase

Visual Studio provides a suite of tools to let you navigate around your codebase quickly and efficiently.

Open Document

Right-click an #include directive in your code and choose Open Document, or press Ctrl + Shift + G with the cursor on that line, to open the corresponding document.

Toggle Header/Code File

You can switch between a header file and its corresponding source file, or vice versa, by right-clicking anywhere in the file and choosing Toggle Header/Code File, or by pressing the corresponding keyboard shortcut: Ctrl + K, Ctrl + O.

Solution Explorer

Solution Explorer is the primary means of managing and navigating between the files in your solution. You can navigate to any file by clicking it in Solution Explorer. By default, files are grouped by the project they appear in. To change this default view, click the Solutions and Folders button at the top of the window to switch to a folder-based view.

Go To Definition/Declaration

You can navigate to the definition of a code symbol by right-clicking it in the editor and choosing Go To Definition, or by pressing F12. You can navigate to a declaration similarly from the right-click context menu, or by pressing Ctrl + F12.

Find / Find in Files

You can run a text search for anything in your solution with Find (Ctrl + F) or Find in Files (Ctrl + Shift + F). Find can be scoped to a selection, the current document, all open documents, the current project, or the entire solution, and supports regular expressions. It also automatically highlights all matches in the IDE.

Find in Files is a more powerful version of Find that displays a list of results in the Find Results window. It can be configured even further than Find, for example by letting you search external code dependencies, filter by file type, and more. You can organize Find in Files results in two windows, or append results from multiple searches together in the Find Results window. Individual entries in the Find Results window can also be deleted if they are not wanted.

Find All References

Find All References displays a list of references to the selected symbol. For more information on Find All References, check out our blog post, Find All References re-designed for larger searches.

Navigation Bar

You can navigate to different symbols around your codebase using the navigation bar that sits above the editor window.

Go To

Go To (Ctrl + T) is a code navigation feature that can be used to navigate to files, code symbols, or line numbers. For more information, take a look at Introducing Go To, the successor to Navigate To.

Quick Launch

Quick Launch makes it easy to navigate to any window, tool, or setting in Visual Studio. Simply type Ctrl + Q or click the search box in the top-right corner of the IDE and search for what you are looking for.

Editing and refactoring code

Visual Studio provides a suite of tools to help you author, edit, and refactor your code.

Basic editor features

You can easily move lines of code up and down by selecting them, holding down Alt, and pressing the Up/Down arrow keys. To save a file, press the Save button at the top of the IDE, or press Ctrl + S. Generally, though, it is best to save all your changed files at once with Save All (Ctrl + Shift + S).

Change tracking

Whenever you make a change to a file, a yellow bar appears on the left to indicate that unsaved changes were made. When you save the file, the bar turns green.

The green and yellow bars are preserved as long as the document is open in the editor. They represent the changes you have made since you last opened the document.

IntelliSense

IntelliSense is a powerful code completion tool that suggests symbols and code snippets for you as you type. C++ IntelliSense in Visual Studio runs in real time, analyzing your codebase as you update it and providing contextual recommendations based on the characters of a symbol that you have typed. As you type more characters, the list of recommended results narrows down.

In addition, some symbols are omitted automatically to help you narrow down what you need. For example, when accessing the members of a class object from outside the class, you will not be able to see private members by default, or protected members if you are not in the context of a child class. After you pick the symbol you want to add from the drop-down list, you can autocomplete it with Tab, Enter, or one of the other commit characters (by default: {}[]().,:;+-*/%|!?^=@#).

Tip: If you want to change the set of characters that can commit IntelliSense suggestions, search for "IntelliSense" in Quick Launch (Ctrl + Q) and choose Text Editor -> C/C++ -> Advanced to open the IntelliSense advanced settings page. From there, edit Member List Commit Characters with the changes you want. If you find yourself accidentally committing results you didn't want, or want a new way to do so, this is your solution.

The IntelliSense section of the advanced settings page also provides many other useful customizations. The Member List Filter Mode option, for example, has a dramatic effect on the kinds of IntelliSense autocomplete suggestions you will see. By default, it is set to Fuzzy, which uses a sophisticated algorithm to find patterns in the characters you typed and match them to potential code symbols. For example, if you have a symbol called MyAwesomeClass, you can type "MAC" and find the class in your autocomplete suggestions, despite omitting many of the characters in the middle. The fuzzy algorithm sets a minimum threshold that code symbols must meet to show up in the list. If you don't like the fuzzy filtering mode, you can change it to Prefix, Smart, or None. While None won't reduce the list at all, Smart filtering displays all symbols containing substrings that match what you typed. Prefix filtering, on the other hand, purely searches for strings that begin with what you typed. These settings give you many options to define your IntelliSense experience, and they are worth experimenting with to find out what you prefer.

IntelliSense suggests more than just individual symbols. Some IntelliSense suggestions come in the form of code snippets, which provide a basic example of a code construct. Snippets are easily identified by the box icon next to them. In the screenshot below, "while" is a code snippet that automatically creates a basic while loop when it is committed. You can choose to toggle the appearance of snippets in the advanced settings page.

Visual Studio 2017 provides two new IntelliSense features to help you narrow down the total number of autocomplete suggestions: Predictive IntelliSense and IntelliSense filters. Check out our blog post, C++ IntelliSense Improvements – Predictive IntelliSense & Filtering, to learn more about how these two features can improve your productivity. If you find yourself in a situation where the list of results suggested by IntelliSense does not match what you are looking for, and you have already typed some valid characters beforehand, you can choose to unfilter the list by clicking the Show more results button in the bottom-left corner of the drop-down list – it looks like a plus sign (+) – or by pressing Ctrl + J. This refreshes the suggestions and adds some new entries. If you use Predictive IntelliSense, an optional mode that uses a stricter filtering mechanism than usual, you may find the list expansion feature even more useful.

Quick Fixes

Visual Studio sometimes suggests ways to improve or complete your code. This comes in the form of lightbulb pop-ups called Quick Fixes. For example, if you declare a class in a header file, Visual Studio will suggest that it can declare a definition for it in a separate .cpp file.

Refactoring features

Do you have a codebase you are not happy with? Have you found yourself needing to make sweeping changes, but are afraid of breaking your build, or feel it will take too long? This is where the C++ refactoring features in Visual Studio come in. We provide a suite of tools to help you change your code. Currently, Visual Studio supports the following refactoring operations for C++:

• Rename

• Extract Function

• Change Function Signature

• Create Declaration/Definition

• Move Function Definition

• Implement Pure Virtuals

• Convert to Raw String Literal

Many of these features are called out in our announcement blog post, All about C++ Refactoring in Visual Studio. Change Function Signature was added afterwards, but does exactly what you would expect – it allows you to change the signature of a function and replicate the changes throughout your codebase. You can access the various refactoring operations by right-clicking somewhere in your code or by using the Edit menu. It is also worth remembering Ctrl + R, Ctrl + R to perform a symbol rename; it is easily the most common refactoring operation. In addition, check out the C++ Quick Fixes extension, which adds a number of other tools to help you change your code more effectively. For additional information, check out our documentation on writing and refactoring code in C++.

Enforcing code styles with EditorConfig

Visual Studio 2017 comes with built-in support for EditorConfig, a popular code style enforcement mechanism. You can create .editorconfig files and place them in different folders of your codebase, applying code styles to those folders and all subfolders below them. An .editorconfig file supersedes any other .editorconfig files in parent folders and overrides any formatting settings configured via Tools > Options. You can set rules around tabs vs. spaces, indent size, and more. EditorConfig is particularly useful when you work on a project as part of a team, for example when a developer wants to check in code formatted with tabs instead of spaces while your team normally uses spaces. EditorConfig files can easily be checked in as part of your code repo to enforce your team's style.
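As an illustration, a minimal .editorconfig placed at the root of a repository could look like the following (the file masks and style values here are just an example, not a recommendation):

# Top-most EditorConfig file for this repository
root = true

# Apply these settings to C++ sources and headers
[*.{cpp,h,hpp}]
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true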

Keyboard shortcut reference

For a full set of default key bindings for C++ developers in Visual Studio, check out our Visual Studio 2017 keyboard shortcut reference.

Conclusion

Finally, you can find additional resources on how to use Visual Studio on our official documentation pages at docs.microsoft.com. In particular, for developer productivity, we have the following articles available:

  • Writing code in the code and text editor – covers more features in this area.
  • Writing and refactoring code (C++) – provides some C++ productivity tips.
  • Finding and using Visual Studio extensions – many community contributors submit free and paid extensions that can improve your development experience.

 

Experiencing Data Access Issue in Azure Portal for Availability Data Type – 07/06 – Mitigating

Update: Thursday, 06 July 2017 03:37 UTC

We are aware that Application Insights is experiencing errors while accessing data in the Azure portal and Application Analytics. The root cause has been isolated to one of our back-end services, which is responsible for data retrieval. To address this issue we are taking the necessary mitigation actions. Some customers may experience data access failures until the issue is completely mitigated.
  • Work Around: None
  • Next Update: Before 07/06 06:00 UTC

-Sapna


Writing a killer submission for Microsoft Partner awards


Lisa Lintern, Communications Strategist and Writer

Lisa Lintern, a communications strategist and writer, provided some great hints and tips.

“We’re just not sure how to start,” are the words I often hear from businesses when they contact me seeking help for award submissions. Sometimes I can hear they are feeling overwhelmed.

Writing your submission for the Microsoft Australia Partner Awards (MAPA) is your chance to tell the world you are brilliant! But for some reason when it comes to getting that stuff down on paper, it can be a struggle to find the right words or the right order to put them in.

But the good news is there is a winning formula to help you with your MAPA submissions. A formula that has been used since storytelling began thousands of years ago. It has four key elements, with a literal ‘twist’ at the end.

1. The hero with a superpower

In an award submission you are the hero – the one with the power to save the world.

So spend some time thinking: what is your superpower? Perhaps you only started your business two years ago and have already doubled your customer numbers? Perhaps you have launched the first product of its kind – your ‘secret weapon’. Perhaps you have highly talented people working for you?

Take the time to really describe who you are: where you operate, the number of employees you have, and the number of customers you serve. Or other interesting facts like, perhaps you have customers that are members of the ASX 200? Whatever it is that makes you unique.

2. The character who needs saving

In most cases it’s a customer who is facing some kind of challenge that plays this role. Perhaps it’s a customer struggling to get information quickly and safely to its workforce located in remote regions across the world? Perhaps it’s a company being held back by a slow and unresponsive IT service provider? Or perhaps the victim is a market place suffering from a lack of competitive offerings?

And again, paint the picture of this character with evidence. The amount of money that the customer is spending on outsourcing their IT needs. Numbers or statistics that show how inefficient that industry is. Or numbers or statistics that show the opportunity for that industry if only they could work more efficiently.

3. The villain (boo hiss!)

But of course, just to make things even more interesting, enter the villain. The person or thing that could stop the hero in its tracks. This could be challenges like other competitors, tough regulatory red tape, or a last-minute technical hitch. What is it that you as the hero must overcome in the inevitable struggle?

4. The hero saves the day!

And of course, like all good stories, the hero always wins. How? Well this is where you need to be very clear. What was the tangible increase in a customer’s productivity as a result of your solution? How many new customers did your client win as a result? How much money is your customer now saving every year? What other customers did you win as a result of this win?

This tangible ‘real information’ is very important. These are the proof points as to why your story is such a success story. Make sure you back up your story with as many of these proof points as possible. This is the stuff that can really make a submission stand out.

Turn your story upside down

Once you have your story, there is one final and very important thing you must do. You need to turn it on its head. So the happy ending becomes your introduction. That summary pitch that makes it an irresistible story for the judges. The hook that reels them in and encourages them to make the time to read the rest of your submission.

Know your audience

Which leads me to my final piece of advice, and it’s the most important rule I have learnt throughout my entire communications career – know your audience and write for them.

In MAPA’s case your audience is a panel of about three to four people per award who all work for Microsoft Australia. WPC’s MPN Partner of the Year awards are judged by Microsoft experts from all over the world with a minimum of three judges per award. WPC’s MPN Country Partner of the Year for Australia is judged by Pip Marlow and the Microsoft Australia Leadership Team.

Judges, like all of us, are time poor. So make it easy for them to read your submission by following these tips:

  • Write your words as though you are saying them. Writing as you would speak enables your writing to have a conversational and ‘authentic’ feel – a style that is much more convincing for the reader.
  • Use an active voice, not a passive voice. Active writing is easier to read as it puts the subject at the front of the sentence. For example, “Fred loves Angela” is active. “Angela is loved by Fred” is passive (and takes the mind a bit longer to work out!).
  • Assume the judge knows nothing. So this means avoiding acronyms and jargon like the plague!
  • Say it once and say it well. Don’t use slightly different sentences to make the same point over and over again.
  • Edit ruthlessly! Delete any words you don’t need. Unnecessary repetition is not your friend and spelling mistakes are your enemy.

But most importantly…don’t be late!

Submissions for MAPA open on 1 July at 12.01 Australian Eastern Standard Time and close at pm Australian Eastern Standard Time on 29 August 2017.

For both awards there are no exceptions or extensions so set yourself a deadline a week prior to this date to deal with any last minute technical issues.

Good luck with your submissions and don’t forget you can find a wealth of information here:

 Watch the “How to write a Killer Award Submission”

Notification Delays on Visual Studio Team Services – 07/06 – Mitigated


Final Update: Friday, July 7th 2017 04:13 UTC

We’ve confirmed that all systems are back to normal as of 02:00 UTC. Our telemetry has confirmed that the mitigation has worked and we do not see any exceptions since the time the mitigation was applied. Customers should now have no delays in receiving email notifications. Sorry for any inconvenience this may have caused.

Sincerely,
Manjunath


Update: Thursday, July 6th 2017 23:02 UTC

We’ve completed the move to our secondary email servers. We are working to validate that the issue with email notifications is resolved.

  • Next Update: Before Friday, July 7th 2017 03:15 UTC

Sincerely,
Daniel


Update: Thursday, July 6th 2017 21:10 UTC

We continue to investigate issues with email notifications. At this time we suspect a problem with our primary email servers. We’re working to route our traffic to our secondary email servers. We currently have no estimated time for resolution.

  • Next Update: Before Thursday, July 6th 2017 23:15 UTC

Sincerely,
Daniel


Initial Update: Thursday, July 6th 2017 20:10 UTC

We are investigating Notification Delays affecting users of Visual Studio Team Services. Users in multiple regions may experience delayed notification for events like Work Item Changes, @mentions and Pull Request updates.

  • Next Update: Before Thursday, July 6th 2017 22:15 UTC

Sincerely,
Manjunath


Computer Vision Made Easy with Cognitive Services


This post is provided by App Dev Manager Andrew Kanieski, who takes K9 obedience training to a new level with computer vision and Cognitive Services.


Computer vision is truly amazing technology! It can be used to distill rich information from images that can help you breathe life into your applications. Microsoft’s Cognitive Services – Computer Vision API gives developers the power to develop their software with the ability to truly see and comprehend the world around it.

When I come upon a new technology, I try to find an application for it in my own life. I ask myself: how could I see myself using this? There are many ways in which we can use computer vision to make our day-to-day lives easier!

Context: Enter the family dog

One problem we always struggle with in my home is keeping our family dog out of our bedrooms. As much as we train him, he always sneaks in to nap on our beds. How can we write software to help us train our family friend?

Problem: I need to be alerted when our dog enters a room

This is where I applied my new found knowledge of the Computer Vision API. What if every time my dog entered our bedroom I was alerted via SMS message?

Prerequisites
  • IP Camera (or other means of collecting images) – I am using an Amcrest IP2M-841B
  • .NET Core installed – install instructions here
  • VSCode (or other editor) – install instructions here
  • Twilio account (Trial or Subscriber) – more info on Twilio + Azure here

Assuming you have .NET Core installed, let’s start by creating a new .NET Core Console Application:

From console:

dotnet new console
dotnet restore

This will create our initial application with a Program.cs and a dog-watcher.csproj

In Program.cs I added the needed variables that will be passed into our application via environment variables. Other means of storing and working with configuration are also fine; I just happen to prefer environment variables.

static string IP_CAMERA_SNAPSHOT_URL = System.Environment.GetEnvironmentVariable("IP_CAMERA_SNAPSHOT_URL");
static string IP_CAMERA_USER = System.Environment.GetEnvironmentVariable("IP_CAMERA_USER");
static string IP_CAMERA_PASSWORD = System.Environment.GetEnvironmentVariable("IP_CAMERA_PASSWORD");

In this case I am using an IP based camera that provides you with a REST API for getting a snapshot from the camera via HTTP.

Now that we have our needed configuration collected, let’s begin by creating logic to fetch our snapshots. Our IP camera uses HTTP GET with basic auth.

First, let's add System.Net.Http:

dotnet add package System.Net.Http

Now you can go ahead and use the System.Net.Http namespace.
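Note that the snippets below reference a Client object that is never shown being declared in the post. A minimal sketch of what is assumed here is a single shared HttpClient field on the Program class (the field name Client matches the later snippets):

// Shared HttpClient instance, reused for both the camera and the Cognitive Services calls
static System.Net.Http.HttpClient Client = new System.Net.Http.HttpClient();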

Program.cs

static byte[] GetSnapshotFromIPCamera(string url) {
    // Build a basic-auth header from the camera credentials, then download the snapshot bytes
    Client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", System.Convert.ToBase64String(System.Text.ASCIIEncoding.ASCII.GetBytes(IP_CAMERA_USER + ":" + IP_CAMERA_PASSWORD)));
    return Client.GetByteArrayAsync(url).Result;
}

Now that we have logic to collect images, let's create some classes that match the response from the Computer Vision API. This only maps Tags, since that is the data element we are working with. In this case, tags provide us with a list of descriptors that computer vision has identified, along with the level of confidence it has in each of those descriptors.

using Newtonsoft.Json;
 
namespace Vision {
    public class Response {
        [JsonProperty("tags")]
        public Tag[] Tags {get;set;}
    }
    public class Tag {
        [JsonProperty("name")]
        public string Name {get;set;}
        [JsonProperty("confidence")]
        public float Confidence {get;set;}
    }
}

Now let’s post those images to Cognitive Services for analysis. Notice the use of the variables AZURE_VISION_URL and AZURE_VISION_KEY; these come directly from the Azure Portal after we’ve added our Cognitive Services subscription.

static string AZURE_VISION_URL = System.Environment.GetEnvironmentVariable("AZURE_VISION_URL");
static string AZURE_VISION_KEY = System.Environment.GetEnvironmentVariable("AZURE_VISION_KEY");
       
static Vision.Response GetVisionAnalysisResponse(byte[] image) {
    Client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", AZURE_VISION_KEY);
    using (var content = new System.Net.Http.ByteArrayContent(image))
    {
        content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/octet-stream");
        string res = Client.PostAsync(AZURE_VISION_URL + @"/analyze?visualFeatures=Tags, Description", content).Result.Content.ReadAsStringAsync().Result;
        return JsonConvert.DeserializeObject<Vision.Response>(res);
    }
}

Now let’s wire up the loop that watches our room.

static int INTERVAL = String.IsNullOrEmpty(System.Environment.GetEnvironmentVariable("INTERVAL")) ? 3000 : Convert.ToInt32(System.Environment.GetEnvironmentVariable("INTERVAL"));
 
static void Main(string[] args)
{
    while(true) {
        try {
            var res = GetVisionAnalysisResponse(GetSnapshotFromIPCamera(IP_CAMERA_SNAPSHOT_URL));
 
            // Only trigger alert logic when CV is 90% sure a dog has been spotted!
            if (res.Tags
                    .Where(t => t.Name.ToUpper().Equals("DOG") && t.Confidence > 0.90)
                    .Count() > 0) {
                // TODO: ADD LOGIC FOR WHEN DOG IS FOUND IN THE ROOM
            }
 
        } catch (Exception ex) {
            Console.WriteLine(ex);
        }
        Console.WriteLine("Waiting for next interval");
        System.Threading.Thread.Sleep(INTERVAL);
    }
}

With Twilio, sending SMS messages from C# is very easy. Simply set up a trial account and install the needed NuGet packages.

Note: trial Twilio accounts can only send SMS messages to verified devices.

dotnet add package Twilio

Program.cs

static string TWILIO_API_KEY = System.Environment.GetEnvironmentVariable("TWILIO_API_KEY");
static string TWILIO_API_TOKEN = System.Environment.GetEnvironmentVariable("TWILIO_API_TOKEN");
static string TWILIO_API_PHONE = System.Environment.GetEnvironmentVariable("TWILIO_API_PHONE");
// TARGET_PHONE (the number that receives the alert) is used below but not declared in the original post;
// it is assumed here to come from an environment variable as well.
static string TARGET_PHONE = System.Environment.GetEnvironmentVariable("TARGET_PHONE");
 
static void Main(string[] args)
{
    while(true) {
        try {
            var res = GetVisionAnalysisResponse(GetSnapshotFromIPCamera(IP_CAMERA_SNAPSHOT_URL));
 
            // Only trigger alert logic when CV is 90% sure a dog has been spotted!
            if (res.Tags
                    .Where(t => t.Name.ToUpper().Equals("DOG") && t.Confidence > 0.90)
                    .Count() > 0) {
                Twilio.TwilioClient.Init(TWILIO_API_KEY, TWILIO_API_TOKEN);
                Twilio.Rest.Api.V2010.Account.MessageResource.Create(
                    new Twilio.Types.PhoneNumber(TARGET_PHONE), 
                    from: new Twilio.Types.PhoneNumber(TWILIO_API_PHONE), 
                    body: "It looks like Jack has made his way into your room!"
                );
            }
 
        } catch (Exception ex) {
            Console.WriteLine(ex);
        }
        Console.WriteLine("Waiting for next interval");
        System.Threading.Thread.Sleep(INTERVAL);
    }
}

Now we can simply adjust our camera for an ideal viewing angle and run our application.

Conclusion

Working with Microsoft’s Cognitive Services is an easy way to build intelligent applications. Additionally, Azure provides a seamless hosting environment for applications that interact with Cognitive Services.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Experiencing Issue while using Service Map in West Central US region

Update: Friday, 07 July 2017 03:37 UTC
We are aware that Service Map is experiencing an issue in the West Central US region. Customers in this region may experience issues when requesting the machines list and using other features of Service Map. We are working on root-causing the issue and will provide more updates as we learn more. This issue has only impacted users in West Central US; all other regions are unaffected and services are running as expected.
  • Work Around: None
  • Next Update: Before 07/07 07:00 UTC

-Arvind

Capturing dumps on multiple instances automatically using procdumphelper


Capturing dumps for intermittent issues happening only on certain instances can be very challenging in the Azure App Service environment. Depending upon the scenario, you can capture dumps using Auto-heal by defining the triggers in the root web.config file of your web site and configuring the actions to invoke procdump when these triggers are hit. With this approach, if you have multiple instances of your web app, it will only generate memory dumps for the instance that has hit the trigger, not all instances. Here is an example of capturing dumps using the FREB customAction* fields. This approach has a 5-10% performance hit and requires you to enable FREB. This approach to dump collection works best when your scenario fits the available triggers. The triggers on which you can generate dumps are based upon attributes including: Count, timeInterval, statusCode, subStatusCode, win32StatusCode, and privateBytesInKB.

What if your w3wp.exe crashes with a different trigger, such as an exception code – for example a StackOverflow exception (0xC00000FD) or an AccessViolationException (0xC0000005)? In this scenario, you can use the Crash Diagnoser site extension; however, it will only monitor the current instance in the current Kudu site (which is a random instance).

This is where procdumphelper can be used. It runs as a continuous WebJob on all the instances of your web app and acts as a proxy for procdump.exe, automatically attaching to the right non-SCM instance of w3wp.exe. Below are the steps to install and configure the procdumphelper WebJob.

Step 1: Create a storage account and set up the Azure Dashboard storage account.

Step 2: Open the Kudu debug console of your web app and go to “D:\home\site\wwwroot\app_data\jobs\continuous“. Create these folders if they are not already present.

Step 3: Download procdumphelper.zip to a local drive and drag-drop the zip file to “D:\home\site\wwwroot\app_data\jobs\continuous”. Once done, you should have “D:\home\site\wwwroot\app_data\jobs\continuous\procdumphelper” with all the contents of the WebJob.

Step 4: As soon as the “D:\home\data\procdumphelper\params.txt” file is updated or changed, procdump.exe will be attached on all the instances.

Step 5: Reproduce the issue. Once the dump is available, download it.

Customizing the Warehousing Mobile App


Introduction

We last looked at the Warehouse Mobile Devices Portal (WMDP) in detail in a series of blog posts here, here, and here.  The last one covered how to build custom solutions and walked through building a new sample workflow for the WMDP.  This post will be updating that sample to cover some of the changes that have occurred with the Advanced Warehousing solution and the Dynamics 365 for Finance and Operations – Enterprise Edition warehousing application.

WMDP vs Dynamics 365 for Warehousing Mobile App

The Warehouse Mobile Devices Portal (WMDP) interface, which is an IIS-based HTML solution (described in detail here), is being deprecated in the July 2017 release of Dynamics 365 for Finance and Operations (see deprecated features list here). Replacing this is a native mobile application shipping on Android and Windows 10 devices.  The mobile app is a complete replacement for the WMDP and contains a superset of capabilities – all existing workflows available in the WMDP will operate in the new mobile app.  You can find more detail on the mobile app here and here.

Customizing the new Dynamics 365 for Warehousing Mobile App

The process for customizing the new mobile app is largely unchanged – you can still utilize the X++ class hierarchy discussed in the previous blog post.  However – I want to walk through some of the differences that enable customizations to exist as purely extensions.  The previous solution required a small set of overlayered code.  Moving forward this practice is being discouraged and we recommend all partners and customers create extensions for any customizations.

As before, we will be focusing on building a new workflow around scanning and weighing a container.  The inherent design concept behind the Advanced Warehousing solution is unchanged – you will still need to think and design these screens in terms of a state machine – with clear transitions between the states.  The definition of what we will build looks like this:

WHSWorkExecuteMode and WHSWorkActivity Enumerations

Just as in the previous blog post – to add a new “indirect work mode” workflow we will need to add values to the two enumerations WHSWorkExecuteMode and WHSWorkActivity.  The new enum names need to match exactly, as one will be used to instantiate the other deep inside the framework.  Note that both should be added as enumeration extensions built in a custom model.  Once this has been done it will be possible to create the menu item in the UI – since the WHSWorkActivity enumeration controls the list of available workflows in the UI:

You can see the extension enumeration values in the following screenshots:

  

WHSWorkExecuteDisplay class

The core logic will exist within a new class you will create, which will be derived from the WhsWorkExecuteDisplay base class.  This class is largely defined the same way as in the WMDP-based example; however, there is now a much easier way to introduce the mapping between the Execute Mode defined in the Menu Item and the actual class which performs the workflow logic – we can use attributes to map the two together.  This also alleviates the need to overlay the base WHSWorkExecuteDisplay class to add support for new derived classes (as the previous WHSWorkExecuteDisplay “factory method” construct forced us to do).

The new class will be defined like this:

[WHSWorkExecuteMode(WHSWorkExecuteMode::WeighContainer)]
class conWhsWorkExecuteDisplayContainerWeight extends WhsWorkExecuteDisplay
{
}

Note that all the new classes I am adding in this example will be prefixed with the “con” prefix (for Contoso).  Since there is still no namespace support it is expected partner code will still leverage this naming scheme to minimize naming conflicts moving forward.

The displayForm method is required – and acts as the primary entry point to the state machine based workflow.  This is completely unchanged from the previous example:

[WHSWorkExecuteMode(WHSWorkExecuteMode::WeighContainer)]
class conWhsWorkExecuteDisplayContainerWeight extends WhsWorkExecuteDisplay
{
    container displayForm(container _con, str _buttonClicked = '')
    {
        container    ret = connull();
        container    con = _con;

        pass = WHSRFPassthrough::create(conPeek(_con, #PassthroughInfo));

        if (this.hasError(_con))
        {
            con = conDel(con, #ControlsStart, 1);
        }

        switch (step)
        {
            case conWeighContainerStep::ScanContainerId:
                ret = this.getContainerStep(ret);
                break;

            case conWeighContainerStep::EnterWeight:
                ret = this.getWeightStep(ret, con);
                break;

            case conWeighContainerStep::ProcessWeight:
                ret = this.processWeightStep(ret, con);
                break;

            default:
                break;
        }

        ret = this.updateModeStepPass(ret, WHSWorkExecuteMode::WeighContainer, step, pass);

        return ret;
    }
}

A detailed analysis of this code can be found in the previous blog post – we will skip forward to the definition of the getContainerStep method, which is where the first screen is defined.  The two methods used to define the first screen are below:

private container getContainerStep(container _ret)
{
    _ret = this.buildGetContainerId(_ret);
    step = conWeighContainerStep::EnterWeight;

    return _ret;
}

container buildGetContainerId(container _con)
{
    container   ret = _con;

    ret += [this.buildControl(#RFLabel, #Scan, 'Scan a container', 1, '', #WHSRFUndefinedDataType, '', 0)];
    ret += [this.buildControl(#RFText, conWHSControls::ContainerId, "@WAX1422", 1, pass.lookupStr(conWHSControls::ContainerId), extendedTypeNum(WHSContainerId), '', 0)];
    ret += [this.buildControl(#RFButton, #RFOK, "@SYS5473", 1, '', #WHSRFUndefinedDataType, '', 1)];
    ret += [this.buildControl(#RFButton, #RFCancel, "@SYS50163", 1, '', #WHSRFUndefinedDataType, '', 0)];

    return ret;
}

Note that I am using a class to define any custom constants required for the Warehousing logic.  This was typically done with macros in the previous version – but these can cause some issues in extension scenarios.  So instead we are encouraging partners to define a simple class that can group all their constants together – which can then be referenced as you see in the code above.  The only area where this does not work is in attribute definitions – this will still need a Macro or String definition.  Here is mine so far for this project:

class conWHSControls
{
    public static const str ContainerId = "ContainerId";
    public static const str Weight = "Weight";
}

The other important thing to notice in the above code is that I have explicitly defined the data type of the input field (in this case extendedTypeNum(WHSContainerId)).  This is important as it tells the framework exactly what type of input field to construct – which brings us to the new classes you need to add to support the new app functionality.

New Fields

In the previous version of this blog we discussed the fact that since we are adding new fields to the warehousing flows that are not previously handled in the framework we must modify (i.e. overlayer) some code in the WHSRFControlData::processControl method.  This allows the framework to understand how to handle the ContainerId and Weight fields when they are processed by the WMDP framework.

In the new model these features are controlled through two new base classes to customize and manage the properties of fields.  The WHSField class defines the display properties of the field in the mobile app – and it is where the default input mode and display priorities are extracted when the user configures the system using the process described here.  The WhsControl class defines the logic necessary for processing the data into the field values collection.  For my sample, we need to add support for the ContainerId field – so I have added the following two new classes:

[WhsControlFactory('ContainerId')]
class conWhsControlContainerId extends WhsControl
{
    public boolean process()
    {
        if (!super())
        {
            return false;
        }

        fieldValues.insert(conWHSControls::ContainerId, this.data);

        return true;
    }
}

[WHSFieldEDT(extendedTypeStr(WHSContainerId))]
class conWHSFieldContainerId extends WHSField
{
    private const WHSFieldClassName        Name        = "@WAX1422";
    private const WHSFieldDisplayPriority  Priority    = 65;
    private const WHSFieldDisplayPriority  SubPriority = 10;
    private const WHSFieldInputMode        InputMode   = WHSFieldInputMode::Scanning;
    private const WHSFieldInputType        InputType   = WHSFieldInputType::Alpha;

    protected void initValues()
    {
        this.defaultName        = Name;
        this.defaultPriority    = Priority;
        this.defaultSubPriority = SubPriority;
        this.defaultInputMode   = InputMode;
        this.defaultInputType   = InputType;
    }
}

Obviously my conWhsControlContainerId class is not doing much – it is just taking the data from the control and placing it into the fieldValues map under the ContainerId name – which is how I will look for the data and use it later in the system.  If there were more complex validation or mapping logic, I could place that here.  For example, the following is a snapshot of the process logic in the WhsControlQty class – this manages the logic for entering quantity values from the mobile app:

public boolean process()
    {
        Qty qty = WHSWorkExecuteDisplay::str2numDisplay(data);
        if (qty <= 0)
        {
            return this.fail("@WAX1172");
        }

        if (mode == WHSWorkExecuteMode::Movement && WHSRFMenuItemTable::find(pass.lookup(#MenuItem)).RFDisplayStatus)
        {
            controlData.parmFromInventStatusId(controlData.parmInventoryStatusSelectedOnControl());
        }
        else
        {
            controlData.parmFromInventStatusId(controlData.getInventStatusId());
        }

        if (!super())
        {
            return false;
        }

        if (mode == WHSWorkExecuteMode::Movement && fieldValues.exists(#Qty))
        {
            pass.parmQty(qty ? data : '');
        }
        else
        {
            fieldValues.parmQty(qty ? data : '');
        }

        //When 'Display inventory status' flag is unchecked, need the logic for #FromInventoryStatus and #InventoryStatusId
        this.populateDataForMovementByTemplate();

        return true;
    }

The buildGetWeight method is very similar to the previous UI method – the only real difference is the Weight input data field.  Note that we don’t need to define a custom WHSField class for this field because it already exists in the July Release.

Error Display

There was another minor change that was necessary before I could get the expected behavior, and it points to a slight change in the framework itself.  In the previous version of the code when I reported that the weight was successfully saved I did so with an “addErrorLabel” call and passed in the WHSRFColorText::Error parameter to display the message at the top of the screen.  This same code in the new warehousing app will now cause the previous step to be repeated, meaning I will not get the state machine transition I expect.  Instead I need to use the WHSRFColorText::Success parameter to indicate that I want to display a status message but it should not be construed as an error condition.

container processWeightStep(container _ret, container _con)
…
ttsBegin;
containerTable = WHSContainerTable::findByContainerId(pass.lookupStr(conWHSControls::ContainerId),true);
if(containerTable)
{
    containerTable.Weight = pass.lookupNum(conWHSControls::Weight);
    containerTable.update();
    _ret = conNull();
    _ret = this.addErrorLabel(_ret, 'Weight saved', WHSRFColorText::Success);
    pass.remove(conWHSControls::ContainerId);
    _ret = this.getContainerStep(_ret);
}
else
{
    _ret = conNull();
    _ret = this.addErrorLabel(_ret, 'Invalid ContainerId', WHSRFColorText::Error);
    pass.remove(conWHSControls::ContainerId);
    _ret = this.getContainerStep(_ret);
}
ttsCommit;

 

Caching

The mobile app as well as the AOS perform a significant amount of caching, which can sometimes make it difficult to add new classes into the framework.  This is because the WHS code is heavily leveraging the SysExtension framework.  I find that having a runnable class included in the project which simply calls the  SysExtensionCache::clearAllScopes() method can help resolve some of these issues.
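For example, a small runnable class along the following lines (the class name here is just illustrative) can be added to the project and run whenever newly added extension classes do not seem to be picked up:

class conClearExtensionCaches
{
    public static void main(Args _args)
    {
        // Clear all SysExtension framework caches so newly added extension classes are discovered
        SysExtensionCache::clearAllScopes();
        info("SysExtension caches cleared.");
    }
}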

Conclusion

At this point I have a fully functional custom workflow that will display the new fields correctly in the mobile app.  You can see the container input field and weight input below.  Note that if you want to have the weight field display the “scanning” interface you can change the “preferred input mode” for the Weight EDT on the “Warehouse app field names” screen within the Dynamics 365 environment itself.

 

The Dynamics 365 for Operations project for this can be downloaded here.  This code is provided “as-is” and is meant only as a teaching sample and not meant to be used in production environments.  Note that the extension capabilities described in this blog are only available in the July release of Dynamics 365 for Finance and Operations or later.

Mind the air gap: Network separation's cost, productivity and security drawbacks


May 1, 2017 – Paul Nicholas, Senior Director, Trustworthy Computing

This post is a translation of "Mind the air gap: Network separation's cost, productivity and security drawbacks".

 

In recent discussions with policymakers, network separation – physically disconnecting sensitive networks from the Internet – is sometimes raised as an important cybersecurity tool. The reasoning is that because cyber attackers cannot cross the "air gap" to reach their target, it guarantees the ultimate security goal: 100% protection.

However, my experience has been that while network separation has a place in government cybersecurity toolkits, it also comes with significant drawbacks. Examples include implementation costs, maintenance costs, lost productivity and, counterintuitively, the weakening of important aspects of security. Overall, network separation is a poor fit for a world where cloud computing and the Internet of Things (IoT) are driving innovation on a foundation of interconnected systems. In this blog post, I want to look at these issues in a little more detail.

Network separation is an established and accepted security practice in critical areas such as classified military networks and nuclear power plants. The damage that could result if these systems were compromised would be catastrophic, so network separation is worthwhile no matter how great the drawbacks. However, if governments are considering implementing network separation more broadly, the cost-benefit calculation also needs to be revisited.

Looking at cost alone, building multiple separated networks means consuming more of a limited pool of resources and losing economies of scale. Adopting an "air gap" requires building an entirely new network with its own servers, routers, switches, management tools and so on. That network has to be built for predictable peak demand, even if that demand occurs only occasionally, and this rarely used capacity is effectively wasted. A non-separated network, by contrast, can simply "scale up" as needed using temporary cloud resources. Separation drives costs up further because physical maintenance takes more time and software maintenance cannot be performed from a central remote hub.

Network separation also harms efficiency, productivity and convenience. An "air gap" builds a barrier to the outside world that most government employees need access to in order to deliver the best services to citizens. Moving information between different devices, while keeping track of which ones are separated and which are not, wastes time at best and creates confusion at worst. And the many government services and systems intended to interact directly with citizens risk being slowed down and made more cumbersome by separation protocols. If a government gives up the benefits of cloud and IoT for the sake of network separation, the benefits of smart cities and smart nations are greatly diminished.

Finally, the security benefits of network separation are not foolproof either. For one thing, being cut off from threats often also means being cut off from everyday security tools such as patching, not to mention cybersecurity innovation. Moreover, employees and administrators may neglect important security fundamentals because they assume that everything behind the "air gap" is safe. In fact, if an organization's cybersecurity culture is immature, social engineering and human error can hand malicious actors a way into the system (for example, employees using personal email – which is usually less secure – to get around cumbersome requirements).

The "air gap" itself can also be bypassed. A single connection to the outside world creates a single point of failure that malicious actors can exploit. And there are ways to get "in" without using any direct connection at all. As Stuxnet demonstrated, removable media such as USB drives can be used to introduce malware onto physically separated hardware. Some hacks can even "jump" the air gap. Examples include USBee (a "software-only method for short-range data exfiltration using electromagnetic emissions from a USB dongle") and AirHopper (a technique that turns a computer's video card into an FM transmitter to collect data from devices inside an air-gapped environment).

For governments concerned about the growing scale, frequency, sophistication and impact of cyber attacks, there may be legitimate reasons to adopt network separation. In limited circumstances, such as protecting classified networks, it can be part of an appropriate, risk-management-based approach to cybersecurity. However, it is critical that governments understand the trade-offs of this approach in terms of cost, convenience and effectiveness. Network separation is neither the right answer nor the only answer to every cybersecurity concern.
