
How to build a VSTS assistant – The Life and Times of a Kanban bot – Part 1


Greetings everyone,

If you're working as a developer, chances are you've heard about Visual Studio Team Services, and you're a master of Kanban boards, iterations, work items, code management, continuous integration pipelines, and you probably know the Agile manifesto by heart... 😉

We live in an era of thousands of tools and workflows that we have to use and keep track of all the time (or maybe I could just do a better job of organizing myself...). Whatever the case may be, I've always wondered what it would be like to have an assistant manage tasks for me.

If you relate to anything that I wrote above, then this article is for you. You probably even had moments like these:

Source: https://boingboing.net/2013/04/07/great-do-not-disturb-statu.html

and you are constantly looking for ways to automate tasks and stay in "the zone".

Having to alt-tab constantly between Visual Studio and the task board can make you lose focus. 🙂 And before a project manager or a scrum master fills up the comment section with arguments against this, remember - this is supposed to be a fun post, and the real world may differ significantly.

As we live in a world that's getting filled up with voice assistants and artificial intelligence, I thought of building a bot that can manage tasks from the Visual Studio Team Services board for me. This will be a multi-part series where I will explain what I did and show you how you can build your own assistant too.

Visual Studio Team Services has a REST API which you can use to do lots of tasks, such as managing work items, kicking off builds, doing test runs, adding widgets to dashboards, managing repositories, creating release definitions and many more: REST API Overview for Visual Studio Team Services and Team Foundation Server

I logged in to a VSTS test project and added a sample iteration and some tasks:

Next up, I decided to build an Azure Function that would query the VSTS REST API and expose a list of my tasks.

First, I had to create an Azure AD Application that would have permissions for Visual Studio Team Services, and use ADAL to authenticate against it. A quick search would lead you to a very handy repository on GitHub: https://github.com/Microsoft/vsts-auth-samples

Because I would be writing C# code, I decided to look over the Managed Client sample, and quickly figured out how to configure the app, delegate permissions, install and configure ADAL and get an authentication token. At the top, you have to define a series of constant strings:

internal const string VstsCollectionUrl = "https://account.visualstudio.com"; //change to the URL of your VSTS account; NOTE: This must use HTTPS
internal const string ClientId = "<clientId>"; //update this with your Application ID
internal const string Username = "<username>"; //This is your AAD username (user@tenant.onmicrosoft.com or user@domain.com if you use custom domains.)
internal const string Password = "<password>"; // This is your AAD password.
internal const string Project = "<project>"; // This is the name of your project
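
The function further down also references a VstsResourceId constant that isn't in the list above. In the Microsoft vsts-auth-samples this is the Azure AD resource identifier of Visual Studio Team Services, so the missing declaration presumably looks like the line below (treat the exact value as an assumption if your setup differs):

internal const string VstsResourceId = "499b84ac-1321-427f-aa17-267ca6975798"; // Azure AD resource ID of VSTS as used in the vsts-auth-samples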

and then set up an AuthenticationContext and log in to the Azure AD app in order to get the access token:

private static AuthenticationContext GetAuthenticationContext(string tenant)
{
     AuthenticationContext ctx;
     if (tenant != null)
        ctx = new AuthenticationContext("https://login.microsoftonline.com/" + tenant);
     else
     {
        ctx = new AuthenticationContext("https://login.windows.net/common");
        if (ctx.TokenCache.Count > 0)
        {
            var homeTenant = ctx.TokenCache.ReadItems().First().TenantId;
            ctx = new AuthenticationContext("https://login.microsoftonline.com/" + homeTenant);
        }
     }
     return ctx;
}

The main function looks like this:


        [FunctionName("GetWorkItemList")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            var ctx = GetAuthenticationContext(null);
            try
            {
                var adalCredential = new UserPasswordCredential(Username, Password);

                var result = ctx.AcquireTokenAsync(VstsResourceId, ClientId, adalCredential).Result;
                var bearerAuthHeader = new AuthenticationHeaderValue("Bearer", result.AccessToken);
                var speechData = GetWorkItemsByQuery(bearerAuthHeader);
                return new HttpResponseMessage(HttpStatusCode.OK)
                {
                    Content = new StringContent(speechData, Encoding.UTF8, "application/json")
                };
            }
            catch (Exception ex)
            {
                log.Error("Something went wrong.");
                log.Error("Message: " + ex.Message + "n");
                return req.CreateErrorResponse(HttpStatusCode.InternalServerError, "Error!");
            }
        }

A pretty basic function - I'm acquiring a token, constructing an authorization header and calling a function named GetWorkItemsByQuery, to which I pass that authorization header so I can get the list in JSON format.

Here are some examples on how you can manipulate work items: https://www.visualstudio.com/en-us/docs/integrate/api/wit/work-items

Looking over the samples here: https://www.visualstudio.com/en-us/docs/integrate/api/wit/samples

I decided to make a function that would retrieve a list of the items assigned to me:


public static string GetWorkItemsByQuery(AuthenticationHeaderValue authHeader)
        {
            const string path = "Shared Queries/My Tasks"; //path to the query
            var speechJson = "{ "speech": "Sorry, an error occurred." }";
            using (var client = new HttpClient())
            {
                client.BaseAddress = new Uri(VstsCollectionUrl);
                client.DefaultRequestHeaders.Accept.Clear();
                client.DefaultRequestHeaders.Accept.Add(new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));
                client.DefaultRequestHeaders.Add("User-Agent", "VstsRestApi");
                client.DefaultRequestHeaders.Add("X-TFS-FedAuthRedirect", "Suppress");
                client.DefaultRequestHeaders.Authorization = authHeader;

                //if you already know the query id, then you can skip this step
                var queryHttpResponseMessage = client.GetAsync(Project + "/_apis/wit/queries/" + path + "?api-version=2.2").Result;

                if (queryHttpResponseMessage.IsSuccessStatusCode)
                {
                    //bind the response content to the queryResult object
                    //note: the generic type argument was stripped in the original post; QueryResult is an assumed class (defined after the Fields class below)
                    var queryResult = queryHttpResponseMessage.Content.ReadAsAsync<QueryResult>().Result;
                    var queryId = queryResult.id;

                    //using the queryId in the url, we can execute the query
                    var httpResponseMessage = client.GetAsync(Project + "/_apis/wit/wiql/" + queryId + "?api-version=2.2").Result;

                    if (httpResponseMessage.IsSuccessStatusCode)
                    {
                        //generic type argument restored; WorkItemQueryResult is an assumed class (defined after the Fields class below)
                        var workItemQueryResult = httpResponseMessage.Content.ReadAsAsync<WorkItemQueryResult>().Result;

                        //now that we have a bunch of work items, build a list of id's so we can get details
                        var builder = new System.Text.StringBuilder();
                        foreach (var item in workItemQueryResult.workItems)
                        {
                            builder.Append(item.id.ToString()).Append(",");
                        }

                        //clean up string of id's
                        var ids = builder.ToString().TrimEnd(',');

                        var getWorkItemsHttpResponse = client.GetAsync("_apis/wit/workitems?ids=" + ids + "&fields=System.Id,System.Title,System.State&asOf=" + workItemQueryResult.asOf + "&api-version=2.2").Result;

                        if (getWorkItemsHttpResponse.IsSuccessStatusCode)
                        {
                            var result = getWorkItemsHttpResponse.Content.ReadAsStringAsync().Result;

                            // the work item list is exposed as a JSON object
                            // typed deserialization; WorkItemList is an assumed class (defined after the Fields class below)
                            var myWorkItemList = JsonConvert.DeserializeObject<WorkItemList>(result);

                            // I iterate through the list of work items and get each title and state, which I concatenate so the result can be 'spoken'
                            var response = myWorkItemList.value.Aggregate("", (current, item) => current + (item.fields.SystemTitle + " - " + item.fields.SystemState + ' '));

                            // Google Assistant-specific syntax
                            speechJson = "{ "speech": "" + response + "" }";
                        }
                    }
                }
            }
            return speechJson;
        }

As you can see in the code, I'm using the path to a specific query, since VSTS provides lots of queries. In my case, it's "My Tasks".

Then, by using the authorization header I got earlier, I'm querying the API, and as the list is returned as a JSON object, I'm deserializing it - hint: you can use http://json2csharp.com/ to generate classes.

Because VSTS tasks have properties such as System.Id, System.State and System.Title, you must define the properties in the Fields class as below:


public class Fields
    {
        [JsonProperty("System.Id")]
        public int SystemId { get; set; }

        [JsonProperty("System.State")]
        public string SystemState { get; set; }

        [JsonProperty("System.Title")]
        public string SystemTitle { get; set; }
    }
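
The typed deserialization calls above (queryResult.id, workItemQueryResult.workItems, myWorkItemList.value) also need container classes that the original post doesn't show. Here is a minimal sketch with assumed class names matching the code; you can generate the exact shapes from the actual JSON with json2csharp as mentioned earlier:

public class QueryResult
    {
        public string id { get; set; }            // ID of the saved query ("Shared Queries/My Tasks")
    }

    public class WorkItemRef
    {
        public int id { get; set; }               // work item ID returned by the WIQL query
    }

    public class WorkItemQueryResult
    {
        public string asOf { get; set; }          // as-of timestamp reused in the work items request
        public List<WorkItemRef> workItems { get; set; }
    }

    public class Value
    {
        public int id { get; set; }
        public Fields fields { get; set; }        // the Fields class defined above
    }

    public class WorkItemList
    {
        public int count { get; set; }
        public List<Value> value { get; set; }
    }

(These assume using System.Collections.Generic; and Newtonsoft.Json are in scope, as in the function above.)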

After this operation, a "speech" JSON must be constructed, because you want the results to be spoken out loud by an assistant.

You can use Cortana, Alexa, Google Assistant, Siri or anything else you want, as long as it has a speech API implemented. Make sure you look up the documentation for the assistant you want to use, and build the JSON object as described in those docs.

In my case, I used a Google Home Mini, so I had to look up the syntax for Dialogflow: https://dialogflow.com/docs/fulfillment

Done - ready to be consumed. Here's what the output looked like:

{ "speech": "Create the DB diagram - Active Create classes from DB tables - New " }

I went ahead and published it to Azure Functions, and copied the Function URL: https://yourfunctionappname.azurewebsites.net/api/GetWorkItemList?code=<token>

You can get this URL from the Functions portal, by clicking on Get Function URL:

And then I went to the Dialogflow console and created the bot. In the Fulfillment section, I had to paste the URL of the function in the Webhook field:

And at the end, I created an Intent, so that when it hears "show me my work items", it triggers the Webhook we configured, plays the response it receives, and ends the conversation:

That's it.

Let's see it in action.

The code for the Function App can be found here: https://github.com/julianatanasoae/VSTSFunctionApp

That's it for Part 1 of this series. In the next article, we'll see how we can add work items, assign them to a user, change the state and more.

Cheers!


RichEdit Clipped Text


This post describes three ways RichEdit may clip text along with possible solutions. Clipping can occur due to inadequate line height, lack of font vertical padding or insufficient painting of selected text. In some cases, improved rendering code could avoid clipping. Typographic compromises can avoid clipping in other cases.

Selection clipping

Acetate selection is discussed in RichEdit Colors. The principle is that the background color of the selected text is blended with the selection background color and the text is then painted on top with the regular text color. This differs from other selection methods, such as inverting the colors of the selected text, using different selection text and background colors, or enclosing selected text in rectangles. Office applications typically use acetate selection, whereas Windows apps such as Notepad use selection text and background colors. One advantage of acetate selection for RichEdit is that partial ligature selection doesn’t get clipped by the rectangular selection background. This is illustrated in the following image where the f of an fi ligature is selected

 

Since the f’s text color doesn’t change when selected, the f’s underhang and overhang aren’t clipped. In contrast without acetate selection, RichEdit appears to select the whole fi ligature and clips the f’s underhang since RichEdit doesn’t have the code to render the ligature glyph three times with appropriate text colors. You can try out acetate selection with an Arabic ligature too, e.g., type a lam aleph (لا – gh with an Arabic keyboard) and then shift+→ to select the lam alone. You’ll see the acetate highlight go half way thru the لا. Acetate selection in RichEdit works the same way as in Word.

Without acetate selection as in Notepad, RichEdit can clip overhangs and underhangs. For example, with selection text and background colors and no ligatures, selecting the f alone renders as

Notepad displays this text without clipping. Acetate selection is used by default in RichEdit. To disable acetate selection, send EM_SETEDITSTYLEEX with wparam = lparam = SES_EX_NOACETATESELECTION.

Nevertheless, even with acetate selection, RichEdit will clip in some scenarios. If the character format of adjacent space differs from that of a character with an underhang, the selection can clip it. For example, selecting “f” in RichEdit where the f is in Times New Roman and the leading space is in Segoe UI looks like

The f overhang is also clipped if the character following the f is selected but not the f. For some scripts, RichEdit automatically formats spaces with the same font as the character that precedes it. But ideally the code should render such glyphs enough times to paint all parts of the glyph with the appropriate background. This problem doesn’t occur in Word or Notepad.

The source of the problem is that RichEdit handles one character-format run at a time, first painting the background and then the text. When the format background changes, the new background gets painted over any overhang from the preceding run. A fix would be to paint all the background colors for a line first and then paint the text on top. Alternatively, one could paint a glyph as many times as necessary to display the various parts of the glyph unclipped (as in Notepad).

Baseline alignment

In well-formed typography, the baselines between different fonts coincide. This increases the line height when fonts with different ascents and descents appear on the same line. This is particularly true when Latin and Japanese scripts appear together. No one font can cover all of Unicode; Version 10 has 136,755 characters and TrueType fonts are limited to a maximum of 65535 glyphs (16-bit glyph indices). So multiple fonts must be used in general. Notepad, too, uses multiple fonts to display such text. As such virtually any editor has to have some degree of rich-text capability as discussed more in RichEdit Plain-Text Controls.

To illustrate how the line height increases when an East Asian font is used on the same line as a Latin font, we select text with a single font and then with a combination. The selection background color reveals the resulting line height

A way to avoid baseline alignment is to send the message EM_SETEDITSTYLEEX with wparam = lparam = SES_EX_DONTALIGNBASELINE (0x00000800). This flag isn’t currently defined in MSDN, but should be. For the example above, this choice produces

which fits in the same vertical space as the Latin text alone.

Southeast Asian fonts

Some Southeast Asian scripts can have glyph size or shaping that results in a large ascent and/or descent. This can make it hard to mix with other scripts and maintain the line height. Accordingly, font designers may leave little or no vertical padding for glyph clusters that push the limits. In single-line controls or when paragraph line-spacing-exactly is active, this may result in clipping. For example, consider the Thai character SARA AI MAIMUAN ใ (U+0E43). Enlarging it and showing the top and bottom glyph boundaries compared to the Latin letter A, we see that there’s no room for roundoff on the top

RichEdit clips the top of this letter unless there’s some paragraph space-before or line-spacing exactly with enough extra vertical space. While space before/after can eliminate such clipping for a single line, inside a multiline paragraph, one has to use a large enough line-spacing-exactly value. One could argue that the font designers made a mistake by eliminating all vertical padding for these large glyphs, but text renderers now need to deal with it. This font no-padding “feature” differs from the high fonts used in mathematics and in fancy fonts, which can have ascents and descents appreciably larger than those for standard glyphs.

Conclusions

While RichEdit provides a way to disable baseline alignment and to specify a larger line height to avoid Southeast Asian large glyph clipping, it doesn’t handle all kinds of selection clipping. It would be desirable to fix the remaining selection clipping scenarios and to offer a mode where the line height is automatically increased to avoid clipping of large Southeast Asian glyphs in fonts that provide inadequate vertical padding.

 

How to connect to IoT Hub with EventProcessorHost using Java


If you want to use Java and the Event Hubs EventProcessorHost to receive data from IoT Hub, this post shows how to change the connection information in the Event Hubs documentation sample below so that it connects to IoT Hub.

 

- Receive events from Azure Event Hubs using Java

< https://docs.microsoft.com/ja-jp/azure/event-hubs/event-hubs-java-get-started-receive-eph >

 

In the document above, for IoT Hub, you can connect to IoT Hub by replacing the connection information as follows.

Rewrite each "{description}" placeholder to match your own environment.

 

-----------------

final String namespaceName = "{the Event Hub-compatible endpoint, excluding 'Endpoint=sb://' and everything from '.servicebus.windows.net/;' onward}";

final String eventHubName = "{the Event Hub-compatible name}";

final String sasKeyName = "{the IoT Hub shared access policy name; here we use iothubowner}";

final String sasKey = "{the shared access key for iothubowner}";

final String storageAccountName = "{the storage account name}";

final String storageAccountKey = "{the storage account key}";

-----------------

 

For namespaceName and eventHubName, refer to the following.

 

[Screenshot: where to find namespaceName and eventHubName in the portal]

 

For sasKeyName and sasKey, refer to the following.

 

[Screenshot: where to find sasKeyName and sasKey in the portal]

 

[Reference: About IoT Hub shared access policies]

The example above specifies iothubowner as the shared access policy, but to read D2C messages from the Event Hub-compatible endpoint, the Service Connect permission is required.

 

- Control access to IoT Hub

<https://docs.microsoft.com/ja-jp/azure/iot-hub/iot-hub-devguide-security >

 

We hope the information above is helpful.

 

Azure IoT Development Support Team, Tsuda

 

Let's bring in the New Year together – Make 2018 Epic by Learning a new technology!


With just one course, you can develop practical skills that can uncover a deep hidden passion. If you’re ready to learn how to harness Microsoft SQL Server to deliver mission-critical performance, gain faster insights on data, or drive your hybrid cloud strategy, you’re in the right place. These learning opportunities can help you get started quickly.

Architecture Big Data Analytics

Type: Technical (L300)

Audience: IT Professional / Architects

Cost: $699

Product: Microsoft Azure

Date & Locations: Canberra (February 12-14); Sydney (February 19-21); Brisbane (March 5-7); Melbourne (March 14-15)

The Azure Big Data and Analytics Bootcamp is designed to give students a clear architectural understanding of the application of big data patterns in Azure. Students will be taught basic Lambda architecture patterns in Azure, leveraging the scalability and elasticity of Azure in Big Data and IoT solutions. An introduction to data science techniques in Azure will also be covered.  Individual case studies will focus on specific real-world problems that represent common big data patterns and practices. REGISTER HERE

Next Up Exam Camp 70-473: Designing and Implementing Cloud Data Platform Solutions

Type: Technical (L300)

Audience: IT Professionals looking to earn formal qualifications

Cost: $399

Product: Data Platform

Date & Locations: Online Self Study February 12 – March 12 / In Person Exam Dates; Melbourne (March 20); Adelaide (Adelaide 20); Perth (March 21); Brisbane (March 23): Sydney (March 26)

Earning any kind of specialist certification is a great way to stand out from the crowd, whether you’re looking for a new challenge, a new job, or a way to make yourself more valuable to your current employer. With the growing importance of the cloud, Microsoft Data Platform is a must-have certification for anyone looking to prove their skills. REGISTER HERE

Next Up: 70-475 Designing and Implementing Big Data Analytics Solutions

Type: Technical (L300)

Audience: IT Professionals looking to earn formal qualifications

Cost: $399

Product: Data Platform

Date & Locations: Online Self Study February 12 – March 12 / In Person Exam Dates; Melbourne (March 20); Adelaide (Adelaide 20); Perth (March 21); Brisbane (March 23): Sydney (March 26)

Earning any kind of specialist certification is a great way to stand out from the crowd, whether you’re looking for a new challenge, a new job, or a way to make yourself more valuable to your current employer. With the growing importance of the cloud, Microsoft Data Platform is a must-have certification for anyone looking to prove their skills. REGISTER HERE

Save the date and register now for Microsoft 365 DevDays!


Please save the date for Microsoft DevDays, taking place in Beijing on March 17–18!

This free two-day event packs in topics covering Office 365, the SQL Data platform, and Microsoft interoperability. Hands-on workshops will give you a deep look at our latest technologies and show you how to build new productivity solutions.

What: Microsoft DevDays
When: March 17–18, 2018
Where: No. 5 Danling Street, Haidian District, Beijing
Who: Developers, solution providers, ISVs, IT administrators, enterprise developers, and students
Cost: Free!

We look forward to welcoming professionals of all levels to this event!
For full event details, see here.

If you have any questions, you can contact us at microsoftdevdays@microsoft.com.

*Please note: most sessions at this event will be conducted in Chinese.

What 2017 brought to Azure IaaS services


A year is a long time in the cloud. In this series of retrospectives I try to look back at the biggest news that 2017 brought. In today's installment I'll focus on the cornerstone of everything – infrastructure, i.e., compute and storage in Azure; networking and containers will get their own article.

A massive influx of new compute resources

You may refresh your own data center once every few years, but Azure doesn't work that way. It is essentially in a state of permanent redesign: adding more and more machine types, increasing the capacity of existing ones, opening new data centers within regions, and launching entirely new regions. If you were running a machine in pay-as-you-go mode last year, you can shut it down at any time and switch to one of the 2017 additions listed below. They usually bring new capabilities (processors, sizes) and/or lower prices.

A-series v2

If you're looking for a basic server, the A-series is made for you. No specific CPU type is specified, but you get the core's performance to yourself. ACU (a comparative performance rating of the series, which you can find here: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/acu) lists a value of 100 for the A-series, i.e., roughly half of the most common D-series. In January 2017 the v2 generation of this series was introduced, bringing a lower price while offering more memory per core, as well as memory-boosted sizes such as A4m v2.

L-series (an unspecified Intel E5 v3)

Some applications, specifically NoSQL databases such as MongoDB, Cassandra and others, are happy to use the raw storage performance of a local SSD and don't need storage persistence at the infrastructure level. The application itself simply takes care of storing the data multiple times: the database mirrors itself and is ready for failover. What you want, then, is a sufficiently large (several TB) local SSD disk. That is exactly how the L-series, which arrived in April 2017, is designed.

L-series v2 (AMD EPYC™ 7551) – preview

Development moves incredibly fast, and so in the same year, in December 2017, the L-series v2 arrived, built (and this is big news) on an AMD processor. The largest model, L64s v2, offers a 15 TB local SSD... quite a beast, isn't it?

D v3 and E v3 series (E5-2673 v4)

Historically, Azure was built so that for all VM types, vCPU = physical core. No sharing, no overselling of one core to ten customers. Azure still sticks to that, and it applies to these v3 series as well. However, a physical core supports hyper-threading, which turns it into two, and thanks to the pipelines you get considerably more out of it. The newest generation of D machines (v3, and the continuation of the D1x line is E v3, i.e., the models with more memory) uses hyper-threading, and vCPU = 1 thread on a physical core (so 2 vCPUs share one physical core). Those are still yours, though; a physical core is never shared across more than one customer, which is why this series only comes in even core counts (there is no 1-core D v3). The ACU of the classic D v2 is 210 (up to 250), while the D v3 delivers 160 (up to 190 with Intel Turbo). Your reward? A lower price, bigger servers (64 vCPUs), and the very memory-friendly E v3 series with up to 432 GB of RAM! In 2016 you would have had to pay half as much again for such a machine (using the G-series).

B-series (burstable E5-2673 v3 or v4)

Maybe your application occasionally needs to work hard but doesn't do all that much most of the time. The A-series doesn't suit you because it's too slow for the peaks and doesn't support SSD disks, and the D-series feels wasteful when you don't need that performance most of the time? The B-series was created exactly for you. It is the only offering in Azure where the performance is not dedicated to you, but at the same time the degree of sharing and burstability is precisely defined.

N-series in all its variants (GPU machines)

This category is developing rapidly. The era began in December 2016 with the NV machines (NVIDIA M60) for graphics workstations and NC (NVIDIA K80) for GPU compute. A year later, now in December 2017, the NCv2 series (NVIDIA P100) and ND (NVIDIA P40) are in general availability as well. In addition, at the end of 2017 a preview of NCv3 was launched, built on the massive performance of the NVIDIA V100 card.

F-series v2 (Intel Platinum 8168)

The previous F-series generation, which arrived at the end of 2016, was essentially just the D v2 series with models offering less memory per core, aimed at compute workloads. Regular D machines have a CPU:RAM ratio of 1:4, memory-optimized E machines 1:8, and the F-series goes the other way with 1:2. The new Fv2 generation, however, no longer uses a classic server Intel processor but the special scalable line codenamed Skylake. At least at the time of launch, i.e., October 2017, these were the computationally strongest machines in the public cloud (among the big three of Microsoft, Amazon and Google).

M-series, a.k.a. the mammoths (E7-8890 v3)

Do you have mammoth applications that need to scale up? An SAP HANA database, say, or a whole S4HANA system? How about a VM with 128 vCPUs and 3.8 TB of RAM? A clear job for the M-series.

Continue reading at tomaskubica.cz...

Where Did Application Insights Put my Performance Counter Data?


Premier Developer consultant Tim Omta recently shared this quick tip on his blog about where performance counter data ends up in Application Insights after Azure Diagnostics pushes it there.


I ran into an issue finding performance counter values I had pushed to Application Insights and wanted to note it to save others some time.

You can configure Azure Diagnostics (WAD) to push diagnostic and performance data to Application Insights:

https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/azure-diagnostics-configure-application-insights

Continue reading on Tim’s blog here.

About an issue where error 9004 occurs when restoring a transaction log in standby mode


 

 

This post introduces an issue that is scheduled to be fixed in a future update.
Note that this article is intended to announce the problem until a Knowledge Base (KB) article is published.

Symptom

When a transaction log is restored in standby mode, error 9004, state 6, may occur and the restore may fail.
If log shipping is configured in standby mode, error 9004 occurs in the restore job on the secondary.
(Note: even if log shipping is not configured, this issue can also occur when a transaction log is restored in standby mode (RESTORE LOG WITH STANDBY).)

Example log output
2017-07-29 12:05:02.68 spid65      Error: 9004, Severity: 16, State: 6.
2017-07-29 12:05:02.68 spid65      An error occurred while processing the log for database 'Database_name'.  If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.

 

Cause

Due to a product issue, it has been confirmed on SQL Server 2014 and later that, with rare timing, error 9004, state 6 can occur when a transaction log is restored in standby mode.

* At this time, a fix is planned for future updates to SQL Server 2014, SQL Server 2016, and SQL Server 2017.

 

What to do after the issue occurs

Remove log shipping once, restore a full backup of the current primary database, and then reconfigure log shipping.

[Reference]
Remove a Secondary Database from a Log Shipping Configuration (SQL Server)
Add a Secondary Database to a Log Shipping Configuration (SQL Server)

Workaround

If you use RESTORE LOG WITH NORECOVERY instead of standby mode (RESTORE LOG WITH STANDBY) when restoring the transaction log, this issue does not occur.

You can also reduce how often the issue occurs by doing the following:

- Set the initial size and growth increment of the transaction log file to a reasonably large size (for example, 512 MB)

 

* The information above is current as of December 2017.


Top stories from the VSTS community – 2017.12.29


Here are the top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics.

TOP STORIES

VIDEOS

  • More Database DevOps with Redgate - Robert Green and Steve Jones
    In this second of two episodes, Robert is joined by Steve Jones to discuss how you can use Redgate's DLM Automation tools to extend DevOps practices to SQL Server and Azure SQL databases.

TIP: If you want to get your VSTS news in audio form then be sure to subscribe to RadioTFS.

FEEDBACK

What do you think? How could we do this series better?
Here are some ways to connect with us:

  • Add a comment below
  • Use the #VSTS hashtag if you have articles you would like to see included

Hold music breaks up during consultative transfers on the Skype for Business for iOS client


Hello, this is the Japan Lync / Skype for Business support team.

In Japan, the Skype for Business mobile client is very widely used as a replacement for desk phones, but there are a few areas where the product's support is not fully implemented. One of these is the behavior of hold music, which we would like to report on here.

In an environment configured so that hold music plays, when a call is placed from Skype for Business for iOS to the Skype for Business rich client (PC client), the SfB rich client can put that call on hold, call another user, and then perform a consultative transfer.
(A consultative transfer is the transfer method commonly used in Japan when a departmental phone number is shared: the call broker (the person who picked up the caller's call) first talks to the transfer target, then releases the hold, and the conversation between the transfer target and the original caller begins.)

In this scenario, the hold music heard on SfB for iOS during the consultative transfer may become choppy.
However, the call itself, including the transfer, completes successfully, so the impact is limited to the period while the hold music is playing.

Because transfers are used far more heavily in the Japanese market than elsewhere, the development team has been considering a fix for this problem. Ultimately, however, it turned out that the fix would be unexpectedly complex, including coordination with server-side communication behavior, and since the hold and the transfer themselves are not actually affected, we have decided to defer the fix for now.

We sincerely apologize to customers who use consultative transfers for the poor audio experience and any inconvenience this causes.

Experiencing Data Access Issue in Azure Portal for Many Data Types – 12/29 – Resolved


Final Update: Friday, 29 December 2017 21:34 UTC

We've confirmed that all systems are back to normal with no customer impact as of 12/29, 21:32 UTC. Our logs show the incident started on 12/29, 20:54 UTC, and that during the 32 minutes it took to resolve the issue, 5% of customers experienced data access issues when accessing through the Azure portal.

  • Root Cause: The failure was due to an issue in one of our dependent platform services
  • Incident Timeline: 38 minutes - 12/29, 20:31 UTC through 12/29, 21:03 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Venkat


Initial Update: Friday, 29 December 2017 21:16 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers may experience data access issues in the Azure portal. The following data types are affected: Availability, Customer Event, Dependency, Exception, Metric, Page Load, Page View, Performance Counter, Request, Trace.

  • Work Around: None
  • Next Update: Before 12/30 00:30 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Venkat

SQL Server DBA Morning Health Checks


Introduction: As a Microsoft Premier Field Engineer, I get to work with amazing colleagues who create incredible customer solutions. Patrick Keisler (blog) is a long-time SQL Server professional who also works as a PFE supporting customers throughout Europe. He recently created a very handy "SQL Server Morning Health Check" solution on short notice for one of his customers. The solution is very helpful and can easily be added to your morning routine as a DBA to monitor your SQL Server environment. Patrick, thanks for sharing!

SQL Server Morning Health Checks (Created by Patrick Keisler)

A few weeks ago, a customer asked me to help develop a way to improve their morning routine of checking the health of each SQL Server. This daily task would typically take about an hour for the DBA to complete. The solution I provided him reduced that task to under one minute.

The DBA needed me to answer these questions for each SQL Server:

1. What is the uptime of each SQL Server?
2. What is the status of each database?
3. What is the status of each Availability Group?
4. What is the backup status of each database?
5. What is the available disk space?
6. Are there any SQL Agent failed jobs in the last 24 hours?
7. What errors appeared in the SQL errorlog in the last 24 hours?

In addition, the customer asked to display the results using the typical stop light colors (red, yellow, and green) to represent the condition of each server.

You might be thinking this sounds like a dashboard; however, they just wanted something simple that could be run by the DBA each morning, but also something that could be run by the other non-DBAs that provide backup during off-hours.

I only had a few days to get a solution up and running, so I needed something quick. To make my task a bit easier, the customer had a Central Management Server (CMS) with all servers registered. Armed with this information, I proceeded to use PowerShell to automate each of these tasks.

Step one was to loop through each registered server within CMS. Luckily for me, I had previously written a CMS function (Get-CmsServer) for the SQLPSX module. This function was originally created by Chrissy LeMaire but I had heavily modified it for the SQLPSX project.

https://github.com/MikeShepard/SQLPSX

https://blog.netnerds.net/smo-recipes/central-management-server/

To use this function, you just need to provide the name of the CMS server and the CMS group to loop through. The output is an array of SQL Servers that you can loop through.

$targetServerList = Get-CmsServer -cmsServer 'SOLOCMS' -cmsGroup 'PRODUCTION'

In the example above, we’ll connect to the CMS server ‘SOLOCMS’ and get all the SQL Servers that are registered in the ‘PRODUCTION’ folder and all subfolders that may exist beneath it.

SQL Morning Health Checks

In the picture above, if you only wanted to loop through the two servers in the SQL2012 folder, then you would specify the CMS group as -cmsGroup 'PRODUCTION\SQL2012'.

Now that we can get the list of SQL Servers, we need to use SQL Server Management Objects (SMO) to complete the other tasks. The SMO is a .NET component that allows you to perform any management task against a SQL Server. Going into the how of using SMO is beyond the scope of this article, but links at the end of this article will help guide you. For our purposes, we’ll just be creating a simple ServerConnection to SQL Server.

$srv = New-Object ('Microsoft.SqlServer.Management.Common.ServerConnection') 'MySqlServerInstance'
$srv.Connect()

In the example above, we create a new ServerConnection object and then call the Connect method. From here, it's as simple as passing T-SQL queries and executing them.

$cmd = 'SELECT sqlserver_start_time, CURRENT_TIMESTAMP AS current_timestamp FROM sys.dm_os_sys_info;'  # current_timestamp included so the New-TimeSpan call below has both columns
$results = $srv.ExecuteWithResults($cmd)

The output is stored in a dataset ($results), which we can then use to calculate the uptime of the SQL Server by using New-TimeSpan.

$upTime = New-TimeSpan -Start ($results.Tables[0].sqlserver_start_time) -End ($results.Tables[0].current_timestamp)

Now that we have our up-time value, we can proceed with determining the condition level (critical, warning, or good) and then displaying the results. Each one of the condition checks are based on what my customer requested, but they can easily be changed for your needs.

Critical = SQL uptime < 6 hours
Warning = SQL uptime < 1 day but >= 6 hours
Good = SQL uptime > 1 day

We can just use a simple if statement to determine the condition.

if ($upTime.Days -eq 0 -and $upTime.Hours -lt 6) { <# critical #> }
elseif ($upTime.Days -lt 1 -and $upTime.Hours -ge 6) { <# warning #> }
else { <# good #> }

For the “stop light” effect, we will use the Write-Host command and adjust the foreground and background colors. For example, for critical we will use a background of red and foreground of white.

Write-Host "CRITICAL:" -BackgroundColor Red -ForegroundColor White

The resulting output can be seen below.

SQL Morning Health Checks

All of the other functions in this script are called and processed in much the same way, except for Get-AppLogEvents. Request #7 required me to scan the SQL errorlog for any errors. While reading the contents of an errorlog is simple, there is no really easy or efficient way to scan for specific errors or keywords. However, every time SQL Server writes an event to the errorlog, it also writes the same event to the Windows Application log. Knowing that, we can use Get-WinEvent to look for events that are classified as errors. One of the advantages of using Get-WinEvent is that we can use the FilterHashtable to filter our results on the target server before returning the results back to our client. This greatly reduces the amount of time to return results, and it also reduces the amount of data sent across the network.

$events = Get-WinEvent -ComputerName 'MySqlServer' -FilterHashtable @{LogName='Application';Level=2;StartTime=((Get-Date).AddDays(-1));ProviderName='MSSQL$Instance'} -ErrorAction SilentlyContinue

In the example above, the FilterHashtable is used to pass four filters. The first gets the events from the Application log, the second only returns errors (Level=2), the third returns events that occurred within the past 24 hours, and the fourth filters on the source ‘MSSQL$Instance’. If you used Event Viewer, you could filter the same information by selecting these options in the picture below.

SQL Morning Health Checks

This function works only if your SQL Server does NOT use the “-n” startup option. Using that option prevents SQL Server from writing events to the Windows Application Log.

And there you have it; a simple PowerShell script to capture all that information from your environment.

SQL Morning Health Checks

The complete script can be downloaded from GitHub.

https://github.com/PatrickKeisler/SQLMorningHealthChecks

 

Additional references:

https://docs.microsoft.com/en-us/sql/relational-databases/server-management-objects-smo/sql-server-management-objects-smo-programming-guide

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/database-engine-service-startup-options

 

SQL Updates Newsletter – December 2017


Recent Releases and Announcements

 

Troubleshooting and Issue Alerts

  • Critical: Do NOT delete files from the Windows Installer folder. C:\Windows\Installer is not a temporary folder and files in it should not be deleted. If you do it on machines on which you have SQL Server installed, you may have to rebuild the operating system and reinstall SQL Server.
  • Critical: Please be aware of a critical Microsoft Visual C++ 2013 runtime pre-requisite update that may be required on machines where SQL Server 2016 will be, or has been, installed.
    • https://blogs.msdn.microsoft.com/sqlcat/2016/07/28/installing-sql-server-2016-rtm-you-must-do-this/
    • If KB3164398 or KB3138367 are installed, then no further action is necessary. To check, run the following from a command prompt:
    • powershell get-hotfix KB3164398
    • powershell get-hotfix KB3138367
    • If the version of %SystemRoot%\system32\msvcr120.dll is 12.0.40649.5 or later, then no further action is necessary. To check, run the following from a command prompt:
    • powershell "get-item %systemroot%\system32\msvcr120.dll | select versioninfo | fl"
  • Important: If the Update Cache folder or some patches are removed from this folder, you can no longer uninstall an update to your SQL Server instance and then revert to an earlier update build.
    • In that situation, Add/Remove Programs entries point to non-existing binaries, and therefore the uninstall process does not work. Therefore, Microsoft strongly encourages you to keep the folder and its contents intact.
    • https://support.microsoft.com/en-us/kb/3196535
  • Important: You must precede all Unicode strings with a prefix N when you deal with Unicode string constants in SQL Server
  • Important: Default auto statistics update threshold change for SQL Server 2016
  • Analyze Network Latency Impact on Remote Availability Group Replica
    • When network latency becomes an issue the most common symptom you will observe is sustained or growing log send queue. [To monitor...] (1) Add the Log Send Queue size (KB) column in AlwaysOn Dashboard and (2) Add the SQLServer:Database Replica:Log Send Queue Counter
    • Measure Network Latency Impact Using Performance Monitor. On the secondary replica, launch Performance Monitor and add the following counters: (1) SQLServer:Database Replica:Log Bytes Received/sec for appropriate database instance and (2) SQLServer:Database Replica:Recovery Queue for appropriate database instance
    • On the primary replica, launch Performance Monitor and add the following counters: (1) SQLServer:Databases:Log Bytes Flushed/sec for appropriate database instance and (2) Network Interface:Sent Bytes/sec for appropriate adapter instance
    • In order to better understand how fast an application can push changes to the remote server, use a third-party network bandwidth performance tool, [such as] iPerf or NTttcp.
    • https://blogs.msdn.microsoft.com/alwaysonpro/2017/12/21/analyze-network-latency-impact-on-remote-availability-group-replica/
  • Availability Group Database Reports Not Synchronizing / Recovery Pending After Database Log File Inaccessible
  • Centennial apps/desktop bridge, SQL Server and error "The data area passed to a system call is too small."
    • Problem: Launching a Centennial application may fail with the following error: The data area passed to a system call is too small.
    • Cause: This issue may be due to miscommunication between two filter drivers, namely WCNFS (the desktop bridge) and RsFxXXXX.sys driver (filestream system driver). RsFx system driver doesn't honor flags being passed by WCNFS driver appropriately, which causes startup failure of any Centennial application with the aforementioned error.
    • Status: We will provide a fix for this issue in Cumulative updates for SQL Server versions which are still being serviced.
    • Workaround: Disable Filestream feature or Move Filestream data to a different volume
    • https://blogs.msdn.microsoft.com/sql_server_team/centennial-appsdesktop-bridge-sql-server-and-error-the-data-area-passed-to-a-system-call-is-too-small/

Recent Blog Posts and Articles

Recent Training and Technical Guides

Monthly Script and Tool Tips

 

Fany Carolina Vargas | SQL Dedicated Premier Field Engineer | Microsoft Services

Let's bring in the New Year together – Make 2018 Epic by Investing in M365!


Microsoft regularly announces product updates to its Productivity, Enterprise Mobility + Security (EM+S), and Windows & Devices solutions. We have a wealth of training courses that will help you stay ahead.

Microsoft 365 Security Technical Series for Enterprise

Type: Technical (L300)

Audience: IT Professional / Developers

Cost: $99

Product: Office

Date & Locations: Perth (February 12-13): Melbourne (February 19-20); Brisbane (February 19-20); Sydney (March 19-20)

Ready to grow your Microsoft 365 Security Practice? Then join us for the Microsoft 365 Tech Series. These workshops are led by Microsoft-certified instructors and will help you get the latest insights to benefit your organization. Session Topic - Microsoft 365 Security and Compliance. We’ll help you with technical aspects of Microsoft 365 security mechanisms and compliance technology through discussion and hands-on labs. Specific topics will include Enterprise-Level Identity Protection, Windows Defender Exploit Guard, Windows Hello, Credential Guard, Azure Information Protection, Conditional Access using Health Attestation, and Ransom-Ware Protection. We’ll also address Security and Compliance Policy and discuss how it can be implemented. REGISTER HERE

Next Up Exam Camp 70-346: Managing Office 365 Identities and Requirements

Type: Technical (L300)

Audience: IT Professionals looking to earn formal qualifications

Cost: $399

Product: Office 365

Date & Locations: Online Self Study February 12 – March 12 / In Person Exam Dates; Melbourne (March 20); Adelaide (Adelaide 20); Perth (March 21); Brisbane (March 23): Sydney (March 26)

Earning any kind of specialist certification is a great way to stand out from the crowd, whether you’re looking for a new challenge, a new job, or a way to make yourself more valuable to your current employer. With the growing importance of the cloud, Microsoft Office 365 is a must-have certification for anyone looking to prove their skills. REGISTER HERE

Next Up Exam Camp 70-347: Enabling Office 365 Services

Type: Technical (L300)

Audience: IT Professionals looking to earn formal qualifications

Cost: $399

Product: Office 365

Date & Locations: Online Self Study February 12 – March 12 / In Person Exam Dates; Melbourne (March 20); Adelaide (Adelaide 20); Perth (March 21); Brisbane (March 23): Sydney (March 26)

Earning any kind of specialist certification is a great way to stand out from the crowd, whether you’re looking for a new challenge, a new job, or a way to make yourself more valuable to your current employer. With the growing importance of the cloud, Microsoft Office 365 is a must-have certification for anyone looking to prove their skills. REGISTER HERE

Master Microsoft 365. Take training to build a practice that takes your business to the next level.

  • Microsoft 365 Business Overview: Discover how Microsoft 365 Business can help your customers improve their productivity and protect their data from security threats. Learn More Today
  • Cloud Voice: See how this solution provides services, security, and support that traditional phone lines can’t match. Explore Cloud Voice
  • Security & Compliance: Learn how Microsoft 365 helps organisations with content security and data usage compliance. Explore Security & Compliance
  • Collaboration: Discover how Microsoft 365 solutions help your customers collaborate across their organisation. Explore Collaboration
  • Microsoft 365 powered device: Find out how you make device security top priority, while easing IT transition to cloud-based management. Explore Microsoft 365 powered device

Calculate Pi to measure processor performance


We know that computers can calculate very quickly, but how do we compare performance between code? I know that processors have been improving immensely since my first processor in 1971 (see https://blogs.msdn.microsoft.com/calvin_hsia/2005/10/30/my-toys-over-the-years/ ). As improvements come to processors, not all programs take advantage of them. As processor manufacturers come up with new improvements (such as 64 bit, SIMD SSE and AVX), programs must be recompiled to take advantage of the new available instructions. However, once so changed, the programs won’t work on existing computers that do not have these features.

I needed a math-intensive calculation that takes a while, so I used the Leibniz series to calculate Pi. The goal of the code below is to measure the time to calculate Pi using various options. The series is
Pi = 4/1 - 4/3 + 4/5 - 4/7 + 4/9 - ...
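
Written out (the error bound below is the standard alternating-series estimate, not something from the original post):

\pi = 4\sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}, \qquad \left|\pi - 4\sum_{k=0}^{n-1} \frac{(-1)^k}{2k+1}\right| \le \frac{4}{2n+1}

So reaching 10 digits of accuracy, which is the stopping check in the code below, takes on the order of 2 x 10^10 terms - exactly why this series makes a decent sustained-throughput benchmark.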

Which settings yield the fastest program for you? 32 bit? 64 bit? AVX? 

(as an aside, does your program run slower when running under the debugger?)

Start Visual Studio (most versions will do).
Use Visual Studio Online to create a free online account which can host your programming projects. It can store your source code using either Git or Team Foundation Version Control, letting you keep multiple versions of your code and letting the various computers you use sync and coordinate changes. You can also invite others to be a part of your team project.
I like to start this way because it creates a local repo that allows local commits and is easy to push to VSOnline, but it doesn't require a push.
 
From Team Explorer control bar, choose Projects->New Repository. Name it “CalcPi”
 
Then click on New… Solution->C++->Win32 Project->”CalcPi”. Click on Finish to accept all the defaults for the Win32 Project Wizard.
Paste the code below to replace the contents of “CalcPi.cpp”

Let’s add a menu item “Run” with the shortcut key “Ctrl+R”. Double-click the CalcPi.rc file in Solution Explorer to open the resource editor. Drill into CalcPi.rc->Menu->IDC_CALCPI
Right click the Exit item->Insert New->”Run”   
 image

This generates a “ID_FILE_RUN” definition in the resource.h file
Drill into CalcPi.rc->Accelerator->IDC_CALCPI
Create a line for “ID_FILE_RUN” that specifies a Ctrl-R accelerator key:
 image


Experiment with the processor math functions. For example: Project->Properties->C/C++->Code Generation->Enable Enhanced Instruction Set->”Advanced Vector Extensions 2 (/arch:AVX2)”
Make a 64 bit version of this C++ program: Build->Configuration Manager->Active Solution Platform-><New>

You can observe the assembly output by hitting breakpoints and viewing disassembly (Ctrl-F11). Or you can create an assembly listing: Project->Properties->C/C++->Output Files->Assembler Output->”Assembly, Machine Code and Source (/FAcs)”. Open the generated .COD file in the VS editor and examine it. Observe how the release version has optimized the code and inlined a lot of the functions.

 

 

<code>

// CalcPi.cpp : Defines the entry point for the application.
//

#include "stdafx.h"
#include "CalcPi.h"
#include "string"

using namespace std;

#define MAX_LOADSTRING 100


// Forward declarations of functions included in this code module:
LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam);

class CCalcPi;
CCalcPi* g_pCCalcPi;

#define M_PI 3.14159265358979323846
// Message handler for about box.
INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam)
{
    UNREFERENCED_PARAMETER(lParam);
    switch (message)
    {
    case WM_INITDIALOG:
        return (INT_PTR)TRUE;

    case WM_COMMAND:
        if (LOWORD(wParam) == IDOK || LOWORD(wParam) == IDCANCEL)
        {
            EndDialog(hDlg, LOWORD(wParam));
            return (INT_PTR)TRUE;
        }
        break;
    }
    return (INT_PTR)FALSE;
}

class CCalcPi
{
    HWND _hWnd;
    HINSTANCE _hInst;                                // current instance
    WCHAR _szTitle[MAX_LOADSTRING];                  // The title bar text
    WCHAR _szWindowClass[MAX_LOADSTRING];            // the main window class name

    POINTS _frameSize; // size of drawing area
    bool _fIsRunning = false;
    bool _fCancelRequest = false;
    int _nDelayMsecs = 0;
    long long _nIterations = 0;
    DWORD _timeStartmSecs;
    void ShowMsg(int line, LPCWSTR pText, ...)
    {
        va_list args;
        va_start(args, pText);
        wstring strtemp(1000, '\0');
        _vsnwprintf_s(&strtemp[0], 1000, _TRUNCATE, pText, args);
        va_end(args);
        auto len = wcslen(strtemp.c_str());
        strtemp.resize(len);
        HDC hDC = GetDC(_hWnd);
        HFONT hFont = (HFONT)GetStockObject(ANSI_FIXED_FONT);
        HFONT hOldFont = (HFONT)SelectObject(hDC, hFont);
        TextOut(hDC, 0, line * 20, strtemp.c_str(), (int)strtemp.size());
        SelectObject(hDC, hOldFont);
        ReleaseDC(_hWnd, hDC);
    }
    static DWORD WINAPI ThreadRoutine(void *parm)
    {
        CCalcPi *pCCalcPi = (CCalcPi*)parm;
        return pCCalcPi->DoRun();
    }

    void CancelRunning()
    {
        if (_fIsRunning)
        {
            _fCancelRequest = true;
            _fIsRunning = false;
            while (_fCancelRequest)
            {
                Sleep(_nDelayMsecs + 10);
            }
        }
    }
    void EraseBkGnd()
    {
        HDC hDC = GetDC(_hWnd);
        RECT rect;
        GetClientRect(_hWnd, &rect);
        FillRect(hDC, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));
        ReleaseDC(_hWnd, hDC);
    }

    DWORD DoRun()
    {
        EraseBkGnd();
        ShowMsg(0, L"Calculating Pi  %18.16f", M_PI);
        _nIterations = 0;
        double sum = 0;
        _timeStartmSecs = GetTickCount();
        while (_fIsRunning && !_fCancelRequest)
        {
            // let's calculate pi by infinite series:
            // pi = 4/1 - 4/3 + 4/5 - 4/7 ....
            auto term = 4.0 / (_nIterations * 2 + 1);
            if (_nIterations % 2 == 0)
            {
                sum += term;
            }
            else
            {
                sum -= term;
            }
            if (++_nIterations % 1000000 == 0)
            {
                int nCalcsPerSecond = 0;
                DWORD nTicks = GetTickCount() - _timeStartmSecs;
                if (_fIsRunning)
                {
                    nCalcsPerSecond = (int)(_nIterations / (nTicks / 1000.0));
                }
                auto diff = abs(M_PI - sum);
                auto lg = -log10(diff);
                ShowMsg(2, L"Iter(Mil) = %-10lld  calc(Mil)/sec =%6d %18.15f  %18.15f %8.3f  ", _nIterations / 1000000, nCalcsPerSecond / 1000000, sum, diff, lg);
                if (lg >= 10)
                {
                    ShowMsg(3, L"Reached 10 digits accuracy in %d seconds", nTicks / 1000);
                    break;
                }
            }
        }
        _fCancelRequest = false;
        _fIsRunning = false;
        return 0;
    }

public:
    LRESULT CALLBACK MyWndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
    {
        switch (message)
        {
        case WM_ACTIVATEAPP:
            break;
        case WM_SIZE:
        {
            _frameSize = MAKEPOINTS(lParam);
        }
        break;
        case WM_COMMAND:
        {
            int wmId = LOWORD(wParam);
            // Parse the menu selections:
            switch (wmId)
            {
            case ID_FILE_RUN:
                _fIsRunning = !_fIsRunning;
                if (_fIsRunning)
                {
                    auto hThread = CreateThread(
                        nullptr, // sec attr
                        0,  // stack size
                        ThreadRoutine,
                        this, // param
                        0, // creationflags
                        nullptr // threadid
                    );
                }
                else
                {
                    CancelRunning();
                }
                break;
            case IDM_ABOUT:
                DialogBox(_hInst, MAKEINTRESOURCE(IDD_ABOUTBOX), hWnd, About);
                break;
            case IDM_EXIT:
                DestroyWindow(hWnd);
                break;
            default:
                return DefWindowProc(hWnd, message, wParam, lParam);
            }
        }
        break;
        case WM_PAINT:
        {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hWnd, &ps);
            // TODO: Add any drawing code that uses hdc here...
            EndPaint(hWnd, &ps);
        }
        break;
        case WM_DESTROY:
            CancelRunning();
            PostQuitMessage(0);
            break;
        default:
            return DefWindowProc(hWnd, message, wParam, lParam);
        }
        return 0;
    }
    //
    //  FUNCTION: MyRegisterClass()
    //
    //  PURPOSE: Registers the window class.
    //
    ATOM MyRegisterClass(HINSTANCE hInstance)
    {
        WNDCLASSEXW wcex;

        wcex.cbSize = sizeof(WNDCLASSEX);

        wcex.style = CS_HREDRAW | CS_VREDRAW;
        wcex.lpfnWndProc = WndProc;
        wcex.cbClsExtra = 0;
        wcex.cbWndExtra = 0;
        wcex.hInstance = hInstance;
        wcex.hIcon = LoadIcon(hInstance, MAKEINTRESOURCE(IDI_CALCPI));
        wcex.hCursor = LoadCursor(nullptr, IDC_ARROW);
        wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
        wcex.lpszMenuName = MAKEINTRESOURCEW(IDC_CALCPI);
        wcex.lpszClassName = _szWindowClass;
        wcex.hIconSm = LoadIcon(wcex.hInstance, MAKEINTRESOURCE(IDI_SMALL));

        return RegisterClassExW(&wcex);
    }

    //
    //   FUNCTION: InitInstance(HINSTANCE, int)
    //
    //   PURPOSE: Saves instance handle and creates main window
    //
    //   COMMENTS:
    //
    //        In this function, we save the instance handle in a global variable and
    //        create and display the main program window.
    //
    BOOL InitInstance(HINSTANCE hInstance, int nCmdShow)
    {
        _hInst = hInstance; // Store instance handle in our global variable
                           // Initialize global strings
        LoadStringW(hInstance, IDS_APP_TITLE, _szTitle, MAX_LOADSTRING);
        LoadStringW(hInstance, IDC_CALCPI, _szWindowClass, MAX_LOADSTRING);
        MyRegisterClass(hInstance);

        HWND hWnd = CreateWindowW(_szWindowClass, _szTitle, WS_OVERLAPPEDWINDOW,
            CW_USEDEFAULT, 0, CW_USEDEFAULT, 0, nullptr, nullptr, hInstance, nullptr);

        if (!hWnd)
        {
            return FALSE;
        }
        _hWnd = hWnd;

        ShowWindow(hWnd, nCmdShow);
        UpdateWindow(hWnd);

        return TRUE;
    }
    int DoMessageLoop()
    {
        HACCEL hAccelTable = LoadAccelerators(_hInst, MAKEINTRESOURCE(IDC_CALCPI));

        MSG msg;

        // Main message loop:
        while (GetMessage(&msg, nullptr, 0, 0))
        {
            if (!TranslateAccelerator(msg.hwnd, hAccelTable, &msg))
            {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }

        return (int)msg.wParam;

    }

};

int APIENTRY wWinMain(_In_ HINSTANCE hInstance,
    _In_opt_ HINSTANCE hPrevInstance,
    _In_ LPWSTR    lpCmdLine,
    _In_ int       nCmdShow)
{
    UNREFERENCED_PARAMETER(hPrevInstance);
    UNREFERENCED_PARAMETER(lpCmdLine);

    CCalcPi calcpi;
    g_pCCalcPi = &calcpi;

    // Perform application initialization:
    if (!g_pCCalcPi->InitInstance(hInstance, nCmdShow))
    {
        return FALSE;
    }
    return g_pCCalcPi->DoMessageLoop();
}

LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    return g_pCCalcPi->MyWndProc(hWnd, message, wParam, lParam);
}

</code>


Cryptocurrency’s liquidity problem


It was only a week ago I wrote How do I know which cryptocurrencies will go up in value. It's not easy, and is a lot of work.

My personal view of cryptocurrency and blockchains is that their social value lies in their utility - what problems do they solve, and how effectively do they solve them? At the moment, a lot of cryptocurrencies' main proposition is speculation - the promise of buying something and then praying it goes up in value so you can cash out. But the speculative part is also part of blockchain's value proposition: if it truly is revolutionary, then it should accrue value to its inventors and developers.

Yet I, as a developer, only have so much time to devote to any particular platform. I spend most of my professional time fighting spam, malware, and phishing. And even though I have a primitive method of researching cryptocurrencies, I still consider it inadequate.

For example, on Dec 20, 2017, I published a screenshot of the top 12 cryptocurrencies on CoinMarketCap. Here's what they are today, with the ones that have moved up highlighted in blue:

If I were to speculate in cryptocurrency, looking only at the top 12 would be insufficient. While four of the top 12 have moved up in value since Dec 20, six have moved down significantly (the other two also moved down, but by too little to count as a real move).

I've read somewhere that there are ~1300 cryptocurrencies today, and 95% will fail. That means that 65 of them will succeed. How can I identify which of those 65 will stick around? Are they currently in the top 10? Top 25? Top 100? Are the ones in the Top 25 today going to be around for the long-term?

It's hard to know.

What I've discovered in my research is that it's hard to diversify your holdings if you want to speculate in cryptocurrency. For example, the most popular cryptocurrency exchange in the US is Coinbase, and you can only trade four currencies on it - Bitcoin, Ethereum, Litecoin, and Bitcoin Cash. Yet from CoinMarketCap, you can see that there are far more cryptocurrencies than just those four, and the majority of them are far more interesting (and more risky and speculative) than any of the four on Coinbase.

So if you want to diversify your holdings, you have to diversify how you acquire them, and that means going through different exchanges.

Whenever I want to buy stocks or bonds or ETFs, I can do it through my retail brokerage account. I can buy almost anything I want to buy in the US stock market, and many times I can get foreign stocks as well if they are part of a fund, or have a US-exchange equivalent. Cryptocurrency exchanges are not quite that user-friendly, no doubt because it is so new.

There are some exchanges that let you deposit US dollars, and then use that to buy cryptocurrencies. But there are tons of exchanges out there - how do I know which ones are legitimate, and which ones are scammy or prone to getting hacked?

The US-dollar method is the most intuitive way for a noob like me to speculate in cryptocurrency. Unfortunately, not every cryptocurrency lets you purchase with US dollars. Many of them only allow you to do exchanges, that is, exchange one cryptocurrency for another, usually Bitcoin or Ethereum. For example, if I wanted to get into Cardano, I'd have to first buy some Bitcoin and pay a fee, and then exchange Bitcoin for Cardano (and pay another fee? I'm not sure because I haven't done that yet but I assume I would). The fees on Bitcoin purchases using USD are high, and one of the things you need to do when speculating is minimize friction (fees, bid/ask spreads, slippage between the order price and execution prices, and taxes).

Or, if I wanted to stick to US dollar purchases [1], I'd have to find an exchange that trades the cryptocurrency that I want, and then find another exchange that trades a second cryptocurrency that I want. Depending on how many I want to hold onto, I'd have a lot of research to do in order to minimize the number of cryptocurrency exchanges I have to maintain. This is the opposite of how I buy stocks (that is, I only need a single brokerage to purchase traditional securities).

What a pain.

Thus, at the moment, the lack of liquidity for alt-coins is a problem waiting to be solved.


[1] I've recently become aware of a cryptocurrency called Tether which tries to address the liquidity problem. Basically, Tether is a company that lets you convert fiat currency like US dollars or euros to Tether, and then convert Tether to another cryptocurrency. The Tether cryptocurrency is backed by actual US dollar and euro reserves. Thus, some exchanges list USDT (Tether's US dollar token) as an acceptable form of payment.

This sure sounds great because it lets people get into and out of alt-coins more easily, but I don't know how much on the up-and-up this all is. Are they running ahead of financial regulation? Will my money be safe there? Does it alleviate my need to hold cryptocurrency in a wallet?

I don't know.

But it seems to me like this is the "picks-and-shovels" approach to cryptocurrency. During the California gold rush of the late 1840s and 1850s, most of the money was made not by the miners (other than the earliest ones at the very start of the gold rush) but by the people selling tools to the miners who were trying to cash in on the gold.

 

 


Sharing assistive technology through the Microsoft Store


This post describes how a free tool to help people with low vision was made available at the Microsoft Store. In this case the tool was not built as a traditional Microsoft Store app, and so leveraged the Desktop Bridge for making desktop apps available at the Store. The tool is now available at Herbi HocusFocus at the Microsoft Store.

Apology up-front: When I uploaded this post to the blog site, the images did not get uploaded with the alt text that I'd set on them. So any images are followed by a title.

 

Introduction

Hey devs, here's a question for you…

If you felt that a software tool running on Windows could help someone you know, and that tool isn't available today, could you build it?

 

There are certainly valid reasons why the answer might be "No". For example, perhaps the technology available to you today just doesn't support what you need. Or maybe it does, but you really don't have time to build the tool, given everything else you need to get done during the day. But perhaps in some cases, the answer might be "Yes". Perhaps today's technology does support what you need, and by grabbing a few hours here and there, you could build a first version of the tool and try it out. And if over time you build on that, you might end up with a tool that's helpful to a friend or family member, and one that you could ultimately make available to everyone.

So I do think it's important for devs to consider whether there's an opportunity to plug gaps themselves in what's available to people today.

Such an opportunity came my way last year when I was asked if I could build a tool that helped people to follow keyboard focus and the text insertion point while they were working with apps in Windows. By the time I got this request, I'd already built some related functionality in earlier explorations into leveraging the Windows UI Automation (UIA) API, and so it was fairly straightforward for me to build a first version of the new tool. I described some of that work at A real-world example of quickly building a simple assistive technology app with Windows UI Automation.
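
To give a feel for what that involves (this is a minimal sketch under my own assumptions, not the app's actual code), the managed UI Automation client API lets you subscribe to focus-changed events across the system and read the bounding rectangle of whichever element just received focus:

using System;
using System.Windows.Automation; // requires UIAutomationClient and UIAutomationTypes references

class FocusTrackerSketch
{
    static void Main()
    {
        // Get notified whenever keyboard focus moves to a new element, in any app.
        Automation.AddAutomationFocusChangedEventHandler(OnFocusChanged);

        Console.WriteLine("Tracking focus changes. Press Enter to stop.");
        Console.ReadLine();

        Automation.RemoveAllEventHandlers();
    }

    static void OnFocusChanged(object sender, AutomationFocusChangedEventArgs e)
    {
        var element = sender as AutomationElement;
        if (element == null)
        {
            return;
        }

        // The bounding rectangle is what a highlighter window would be positioned around.
        var rect = element.Current.BoundingRectangle;
        Console.WriteLine($"Focus: '{element.Current.Name}' at {rect}");
    }
}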

The app certainly has some constraints, and I think that's fine. I'd much prefer to release an app that works well in many situations, and not in others, rather than not releasing an app at all. (That's assuming I make people aware of where it doesn't work well.) For example, the app can't highlight things on the Start menu, and sometimes it struggles to highlight things in regular menus. But in many other places in many other apps, it works well, and is worth getting in front of people who might find it helpful.

And that brings me to another important question. What's the best way to get a Windows app in front of people? For me, the answer is the Microsoft Store. I'd already made the app available through my own site at Herbi HocusFocus, but if I can make the app available at the Store, then I don't have to carry on maintaining my own installation stuff. Instead I can build it all and release an update to the Store directly from inside Visual Studio. And a lot more people around the world who might find it helpful get to learn about the app.

My understanding is that today, it's not possible for a traditional Microsoft Store app to leverage the UIA Client API. My app relies heavily on that API, and so the next question is – how did I make the app available at the Store?

Below I've described what it took to get my WinForms desktop app up at the Store. Overall it was a pretty smooth process. And I'd say that's an impressive thing made possible by the Microsoft Store team, given that there was no way I was going to read any instructions if I could avoid it.

 

Figure 1: The Herbi HocusFocus app highlighting a Checkbox in its own UI.

 

Hold on a moment - why does the app have to be for someone else?

That's a good point. I asked the question above about you building a tool that could help someone you know. So that would include a friend, a family member, and yourself. All technology should be accessible, including the tools used to build software products. For example, Visual Studio is becoming more accessible with each release, and so should be usable by developers who also use assistive technology tools such as on-screen keyboards, magnifiers and screen readers. If you're aware of issues with the accessibility of the latest version of Visual Studio, send the team feedback – they want to hear from you.

 

Making the app available at the Microsoft Store

Ok, I must confess, I did read some of the instructions. The steps listed at Package an app by using Visual Studio (Desktop Bridge) got me off to a great start. I did have to upgrade the free version of Visual Studio that I had on my machine in order to be able to add a Packaging Project to my Visual Studio solution, but once I had the required version, that worked fine. And I'd reserved the name of my app at the Store in the same way that I would for any app at the Microsoft Store.

There were only two things I had to react to when trying to rebuild my WinForms app for the Store. Visual Studio spotlighted these for me as it hit them, so they were quick to resolve.

1. My original app had a "More information" link, which when invoked would call Process.Start() with the URL for my web site. The Process.Start() call is not available in the Store version of the app, and so I removed the link. This isn't a concern for me, as the listing of the app at the Store has the link to my site. (There's a quick sketch of what that handler looked like after this list.)

2. I'd not replaced the placeholder Store assets with my own assets. It was really handy for me that Visual Studio checked for that. It would have been rather unimpressive if a bunch of placeholder images later appeared for my app. So I duly added my own images for such things as the app logo.
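
For context, the "More information" handler mentioned in point 1 was nothing more exotic than the usual WinForms pattern sketched below. This is an illustration rather than the app's exact code, and the URL is a placeholder:

using System.Diagnostics;
using System.Windows.Forms;

// Sketch of the removed "More information" link handler (placeholder URL, not the app's actual site).
// In a plain desktop app this opens the default browser; it wasn't an option in the
// Store-packaged build, so the link was removed instead.
private void moreInformationLink_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
{
    Process.Start("https://example.com/herbi-hocusfocus");
}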

 

With the above changes made, I filled in the form at Have you already converted your app with the Desktop Bridge? and a while later I was contacted by someone from the Microsoft Store team who could help with the next phase. They asked me to supply the appx package for the app, so that they could run it.

This is where I did have an "Oh no…" moment. The person from the Store team got back to me saying that after I'd supplied the appx package, the app wouldn't start. Uh-oh.

 

And this is where I felt I really should have been paying a little more attention to what I was doing. When the app starts, it dynamically selects an image to load. It does this so that it can show an image that accounts for whether a high contrast theme is active. (I mentioned this action when I turned the app into a case study on accessibility, at Considerations around the accessibility of a WinForms Store app.) However, the way I'd built the WinForms app, the image files got copied out to wherever the exe was running, and got picked up from there. I doubt this was a deliberate action on my part when I first built the app. In fact, if the app ran ok after I first built it, I probably didn't even consider where the images were. Since I hadn't taken any action to ensure the images were available after the app was installed via the new appx package I'd built for the Store, the images couldn't be found when the app started, and so the app didn't start.

So to address this, I updated the app to embed the images as resources in the exe. This meant the app ran just fine when installed via the appx package. Hopefully I'll not make that mistake again.
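
For anyone curious, the fix amounts to something like the sketch below. It assumes the images were added to the project with a Build Action of Embedded Resource; the resource names and namespace here are hypothetical, and the high contrast check uses WinForms' SystemInformation:

using System.Drawing;
using System.Reflection;
using System.Windows.Forms;

// Load an image that was embedded in the exe as a resource, picking a
// high contrast variant when a high contrast theme is active.
// The resource names below are hypothetical examples.
private Image LoadHighlightImage()
{
    string resourceName = SystemInformation.HighContrast
        ? "HerbiHocusFocus.Images.HighlightHighContrast.png"
        : "HerbiHocusFocus.Images.Highlight.png";

    var assembly = Assembly.GetExecutingAssembly();
    var stream = assembly.GetManifestResourceStream(resourceName);

    // GDI+ needs the stream to stay open for the lifetime of the Image,
    // so it is intentionally not disposed here.
    return stream != null ? Image.FromStream(stream) : null;
}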

 

Following this, the person from the Microsoft Store team could get familiar with the app, and approved it for uploading to the Store. Sure enough, I could then upload the appx package to the Store in the same way as a traditional Store app. I also went through the rest of the traditional steps for making an app available at the Store, such as uploading screenshots. In my case I chose a screenshot of keyboard focus being highlighted by the app in Edge, and the text insertion point being highlighted in Word 2016.

 

Figure 2: The Herbi HocusFocus app highlighting where keyboard focus is in the Edge browser.

 

Figure 3: The Herbi HocusFocus app highlighting where the text insertion point is in Word 2016.

 

Next steps

One of my goals in making the app available at the Store was that I'd get more feedback on how the app can be enhanced to become more helpful in practice. As it happens, as soon as the app was available at the Store and downloaded to a device that it had not run on before, I learned that my custom color picker can sometimes not show all the colors available. On one device, only 12 of the 16 expected colors appeared. I don't know if some mix of that device's display resolution and scaling is related to the issue. And I also learned that on one device the app's highlighting in Edge didn't appear as expected after the page had been scrolled.

So I've some fun investigations and updates to be making over the coming weeks. I'd say this sort of thing isn't too surprising for early versions of apps once they become available to a wide range of devices. I'll look forward to getting more feedback on where the app can be made more helpful.

 

Summary

Overall the process for getting my WinForms app up at the Microsoft Store was straightforward, through the helpful instructions at Package an app by using Visual Studio (Desktop Bridge), and by Visual Studio drawing my attention to things I needed to address. This means I can now reach more people around the world who might find the tool useful, and I can update it based on their feedback. The tool is freely available at Herbi HocusFocus at the Microsoft Store.

I really do feel privileged to be in a position to do this sort of thing. I'm familiar enough with, and have access to, certain technologies such as Visual Studio and UI Automation, and it would be wrong for me to not at least consider whether it'd be practical for me to build a tool that might plug a gap in what's available to someone. Sometimes I can't help, but sometimes I can. Also, I put the source for one version of the app out at Herbi HocusFocus 2.0 source code, for anyone interested in learning how the app leverages the UIA API.

So please do consider whether you could build a tool that a friend or family member, or yourself, might find helpful. And if you do build it, consider sharing it with everyone through the Microsoft Store.

Guy

Let's bring in the New Year together – Make 2018 epic with technical presales assistance and training benefits


“Customers first” is the mantra of any successful business, and as Microsoft partners, you are the connection between customers and Microsoft. The Microsoft Partner Network offers support benefits and paid support options to help you delight your customers with the Microsoft-based solutions you build, sell, and deliver.

Partners with a Microsoft Action Pack subscription or a Microsoft Partner Network competency receive technical benefits, customised presales assistance, and training as a benefit. Take advantage of the technical resources available to you to differentiate your business and accelerate sales, deployments, and customer usage of Microsoft cloud and hybrid solutions.

PREPARE TO SELL WITH PERSONALISED EXPERT TECHNICAL GUIDANCE

Partners with a silver or gold competency in the Microsoft Partner Network can request one-on-one consultations with Microsoft experts and receive technical guidance for these activities that take place prior to finalizing a sale:

  • Proof of Concept
  • Business value proposition
  • Product feature overview and comparison
  • Request for Proposal (RFP)
  • Technical licensing recommendations

Partners that have used these services have found that they gain a competitive edge in Microsoft cloud solution sales and can shorten both the overall sales cycle and deployment times after the sale.

BUILD YOUR TECHNICAL TEAM'S SKILLS AND KNOWLEDGE

Whether you are getting started with a new product or technology or focusing on customer scenarios, use your technical training benefit to build your team's presales and deployment skills. Partners with an Action Pack subscription or a silver or gold competency have access to unlimited, instructor-led technical training webcasts.

The technical training webcasts can help you:

  • Build a repeatable, scalable deployment practice
  • Manage the most common customer presales and deployment scenarios
  • Use your Internal Use Rights (IUR) benefit to grow and manage your business

Learn more about technical presales and deployment services

Watch Microsoft Partner Technical Services videos

What 2017 brought for containers in Azure


A year is a long time in the cloud. In this series of look-backs I try to review the biggest news that 2017 brought. What happened in what is perhaps my favorite area: the world of containers?

Containers at the start of 2017

How did Azure enter 2017 from a container point of view?

In 2016, Azure Container Service reached general availability - a solution for automated, "one-click" provisioning of container orchestrators. At the time it was not at all clear which of the big three orchestrators to bet on, and customers themselves were quite confused by that. DC/OS was relatively traditional, very robust, and interesting wherever containers are combined with Big Data. Docker Swarm was the youngest, a bit behind on features, but offered an API almost identical to the classic single-host Docker API. And then there was Kubernetes, which did a lot of things right from the start and was the most promising, but was a bit new for customers and harder to grasp. ACS therefore supported (and still supports) all three.

At the end of 2016, Azure Container Registry also arrived - your private repository for container images.

Perhaps most importantly, Brendan Burns joined Microsoft, one of the three founders of Kubernetes (back then at Google, of course; the other two, Joe Beda and Craig McLuckie, founded Heptio, a company specializing in Kubernetes, after leaving Google). Brendan became the main architect of Azure's container strategy, and that showed throughout 2017.

The container ride of 2017

The Deis acquisition and the shift toward Kubernetes

In April 2017, Microsoft acquired the open source company Deis. Deis specialized in building open source tooling on top of Kubernetes, including Helm, now the de facto standard for application templates (other tools such as Draft come from the same workshop, and they also have a hand in service brokering). Microsoft thereby made it clear that it is very serious about Kubernetes, and the Deis people became a showpiece of the company.

Azure Container Instances – containers without servers

In all the major clouds you always had to allocate VMs first and only then work with containers inside them. It was not possible to run a container directly in the cloud itself, without pre-allocated resources - to run a container and pay only for it, not for the supporting infrastructure. That changed in June 2017, when Microsoft was the first to introduce this concept, more than six months ahead of AWS (Fargate).

Continue reading at tomaskubica.cz...
