
Why Microsoft?


The Situation

  • The cycle is accelerating; we have entered the fourth industrial revolution. Today it is urgent to reinvent yourself in order to survive. Simply improving what already exists is not enough.
  • But you cannot build an ambitious future without a foundation: a solid technical, functional, economic, and societal infrastructure.

Microsoft offers you an infrastructure ready for the fourth industrial revolution

  • Infrastructure for the fourth revolution, the requirements:
    • Control risk by reducing the unknowns
    • Projection capability, by offering and democratizing all of the technologies taking part in this revolution
  • Which can be summed up as: "Opportunity & Responsibility" (Satya Nadella)

 

Controlling Industrial Risk - An extremely solid industrial infrastructure

  • IP management – learn to get value from your IP and not have it confiscated
  • Data – be sure it will not be lost, will remain accessible, and will not be used without authorization
  • Scalability – global capacity to support exponential growth
  • Regulation – to comply with the various legislations (we have 40+ regions)
  • Geopolitics – to avoid being at risk even if conflicts occur (a hybrid cloud and 50+ regions)
  • Economic failures – strong resilience thanks to our business model and our size
  • The ability to deploy and manage quality – we have proven it over time; a large part of society runs on Microsoft technologies
  • Finding resources – probably one of the largest hardware, software, and know-how ecosystems in the world
  • Staying free – open source, respect for standards, ...
  • Responsible innovation – our actions and investments are guided by very strong ethics, particularly around AI
  • Long-term investment – fundamental research, quantum computing, ...
  • Supplier model or partner model – a balance of power, or shared value flows
  • Environmental impact – what is the environmental cost of the chosen solution
  • An 80/15/5 industrial mix

Projection Capability - An infrastructure offering the "enabler" services of the fourth industrial revolution

  • Proximity, ambient, and/or centralized intelligence
  • Artificial Intelligence
  • The Cloud
  • IoT
  • Blockchain
  • Graphs
  • APIs
  • *Reality
  • Business Data/Process Model
  • DevOps – Business Factory
  • ...

In summary: Why, What, How - Microsoft

  • Why (the culture):
    • Give every individual and every organization the means to achieve their ambitions
  • What (the strategy):
    • Provide the INFRASTRUCTURE for the society of the fourth industrial revolution – a 90%+ pure-IT business model – with an "industrial" mix
  • How (the tools to deliver):
    • Risk control: IP, scalability, data, business model, geopolitics, environment, ethics, ...
    • Provide the infrastructure building blocks: Intelligent Edge/Cloud, AI, IoT, Blockchain, Social Framework, API economy/graph, *Reality, Business core data/functions, Factory
    • In keeping with the values/culture and the strategy

Join the Azure Gov DC Meetup July 25: IT Governance for Cloud – Gov Best Practices


One of the biggest concerns slowing down government digital transformation is the lack of effective IT governance. Unresolved concerns, including privacy, security, and organizational silos that limit data sharing and analysis, continue to pose challenges for agencies.

To discuss ways to overcome these challenges, we invite you to RSVP and join us for the Microsoft Azure Government DC Meetup, IT Governance for Cloud - Gov Best Practices, on Wednesday, July 25 from 6 – 8:30 p.m. at 1776 Crystal City, Virginia.*

During this Meetup you’ll hear from industry-leading experts who will share insights and strategies on achieving effective IT governance in areas including identity, portfolio and records management.

Featured speakers:

  • John Peluso, CTO, AvePoint Public Sector
  • Karen Britton, Director, Advisory Services, LMI, and former EOP CIO
  • Kent Cunningham, CTO, Enterprise Cloud Services, Microsoft

Be sure to reserve your spot today to join us for an evening of engaging presentations, discussions, excellent networking opportunities and refreshments. As always, this event is free and open to the public – please invite your colleagues and connections to join, too!

*IMPORTANT: Due to construction at our usual 1776 DC location, the July and August Meetups will be in 1776’s Crystal City location.

The Microsoft Azure Government DC User Community, a growing community of 1,800+ members, hosts monthly Meetups that include industry and government professionals sharing best practices, lessons learned, and insights on government cloud innovation. Please join us!

 

Data imports and other asynchronous jobs are slow in all regions – 07/13 – Mitigated


Final Update: Friday, July 13th 2018 20:02 UTC

The team has implemented a short-term mitigation that has allowed Data Import jobs to continue. After we stopped the deployment and added additional capacity, the issue was mitigated. We apologize for any inconvenience this may have caused.

Sincerely,
Daniel


Update: Friday, July 13th 2018 17:42 UTC

Our DevOps team continues to investigate issues with slow job processing. The root cause is not fully understood at this time, though it is suspected to be a regression in some steps performed as part of a deployment. The problem began at 2018-07-12 22:40 UTC. The long-running Data Import jobs have completed now, and we are pending verification for other potentially active jobs. We are also in the process of adding additional capacity to process additional jobs.

Sincerely,
Daniel


Initial notification: Friday, July 13th 2018 16:30 UTC

  • We're investigating slowness in some asynchronous jobs across all regions
  • Data Import jobs are known to be taking longer than usual to complete
  • Other jobs may be impacted; the full impact is still being determined

Next Update: Before Friday, July 13th 2018 17:05 UTC

Sincerely,
Dexter

Archived Webcast – Modern Workplace Fire Away Friday 07-13-2018


A big thanks to all those who were able to join this week's live webcast for Modern Workplace Fire Away Friday. Covered this week were:

  • Introductions
  • Modern Workplace Top 3 – Coverage of the top three announcements in Microsoft Modern Workplace over the last two weeks.
  • Modern Workplace Tips – Each broadcast I deliver real-world tips, with demos, on how to increase collaboration and productivity in the Modern Workplace. This week:
    • Demo - SharePoint Online Records Center and Microsoft Teams... like a hand and a glove for Enterprise Productivity.
    • Demo - Need an Enterprise-ready solution for Frontline workers, with built-in industry-leading safeguards around compliance and security? Learn more about how Microsoft StaffHub and Office 365 Groups work seamlessly together to deliver flexible, intuitive, on-the-go secure scheduling and collaboration.
  • Fire Away Friday Open Q&A Forum – Ask me your questions about all things collaboration with Microsoft 365!
  • Wrap Up/Sign Off

You can grab the PDF version of the broadcast slides by clicking here: Modern Workplace Fire Away Friday 7-13-2018 Slides

Links from today's webcast:

Additional Links:

Have requests for future content coverage? Sync up with Michael on LinkedIn here

Michael Gannotti

Top Stories from the Microsoft DevOps Community – 2018.07.13


It's been another busy week for VSTS and DevOps, and we're excited to see some interesting articles and podcasts about DevOps on Azure.

Moving a Git Repo from Bitbucket to VSTS
One of the great things about Git is that it's easy to move your repository from one hosting provider to another - and maintain the full history along the way. Gerald Versluis shows you how simple it is to migrate your repositories from Bitbucket to VSTS.

Create VSTS Work Items or GitHub Issues for Application Insights from Azure Portal
Application Insights provides great monitoring of your web applications - but how do you get visibility into the problems it finds? Abhijit Jana shows you how to create VSTS work items (or GitHub issues) based on exceptions, using the Azure portal.

Back to basics: Building .NET Core app in VSTS
Since creating build and deployment pipelines is so easy and flexible, we sometimes lose sight of how easy it is to handle the basics. Utkarsh Shigihalli shows how to build a .NET Core application with VSTS.

Microsoft Cloud Show: Office 365, Azure and VSTS News + Moving Hyperfish to AKS
In this episode of the Microsoft Cloud Show, Andrew Connell and Chris Johnson cover the latest news from Azure and Visual Studio Team Services, including a discussion of the new UI improvements to VSTS.

How to deploy and host a Jekyll website in Azure blob storage using a VSTS continuous deployment pipeline
Jekyll is an amazing static site generation tool, popularized by GitHub Pages. But if you need to serve serious traffic, you might need to upgrade to Azure Blob Storage and Azure's CDN. Carl-Hugo Marcotte shows you how to scale your Jekyll installation up using VSTS and Azure.

A case study in investigating why Narrator’s not announcing a change in UI state


This post describes the approach taken when I recently investigated why the Narrator screen reader wasn't announcing a change in checked state of menu item UI in an in-development product.

Apology up-front: When I uploaded this post to the blog site, the images did not get uploaded with the alt text that I'd set on them. So any images are followed by a title.

 

Introduction

A few days ago I was contacted by a developer who had a bug assigned to them, relating to Narrator not announcing a change in state of their checkable menu item. I found the subsequent investigation really interesting, as I don't think I've encountered a situation quite like this before. So I thought it'd be worth sharing the steps I took, in case anyone else discovers a similar bug with their UI.

Spoiler alert: The root cause seems to be due to the behavior of a UI library being hosted by the product. If that really is the case, then it would seem possible that any product hosting this library will exhibit the same accessibility bug.

 

The UI in question related to a menu in a dropdown. At any given time only one menu item could be checked, and when it was checked, its background changed from white to grey. The following images are my own simulation of the UI. The first image shows a menu containing three items, and the middle item has a grey background, indicating that it's checked, and a black border, indicating that it has keyboard focus.

 

Figure 1: A dropdown menu containing the three menu items of "Chickadee", "Towhee", and "Grosbeak". The menu descends from a button showing a bird icon.

 

When the down arrow key is pressed while the menu is in the above state, keyboard focus moves to the next item in the menu, as shown in the following image.

Figure 2: The second menu item remains grey, indicating that it's checked, and keyboard focus has moved to the third menu item.

 

And finally, if the spacebar is then pressed, the third item becomes checked and the second item unchecked, as shown in the following image.

 

Figure 3: The second menu item is now unchecked, and the third menu item is now checked and has keyboard focus.

 

The bug is that when the menu item becomes checked, Narrator says nothing (other than "Space" if the keyboard echo setting is on). The change in checked state is not announced by Narrator when the state of the menu item changes, and so the customer is not made aware of the change. They'll no longer know what state the UI is in, and that's not acceptable.

So the steps below describe one approach to investigating this bug. It's important to remember that Narrator only cares about the programmatic representation of the UI, as exposed through the UI Automation (UIA) API. The fact that the background of some menu item happens to be grey means nothing to me in this investigation.

Tip: For a very quick introduction to UIA, check out UIA at a glance. For a more detailed introduction, check out Introduction to UIA: Microsoft's Accessibility API.

 

 

Step 1: What state is the UI really in after it was meant to change?

The first thing I'm interested in is whether the checked menu item is really checked from a UIA perspective. Semantically, if a UI element can have a state of checked or unchecked, then it would support the UIA Toggle pattern. By using the Inspect SDK tool, I can verify whether the Toggle pattern's ToggleState value is "On" for the checked menu item, and that it's "Off" for the other items in the menu. Sure enough, the ToggleState values were all as I'd expect them to be, so that's good.

Note that in some cases, we might find a bug like this is due to the checkable UI not supporting the UIA Toggle pattern at all, and we'd need to figure out why it doesn't support the pattern. But that's not the case here.

Figure 4: The Inspect SDK tool reporting that the second menu item has a UIA Value property of "Towhee", and a ToggleState property of "On".

 

And for completeness here, now that I know that the UI seems to be in the expected programmatic state, I can arrow away from the checked menu item, and back to it, and verify that Narrator does announce the new checked state. When I did that, Narrator's announcement included the new state of the menu item just fine.

 

Step 2: Remove Narrator from the equation

So, if the Narrator experience is not as it should be, then it must be a Narrator bug, right?

Wrong.

Sure, Narrator can be improved, just like any product can, and occasionally some customer experience issue crops up which is caused by Narrator itself. But Narrator's only one of many components that are involved with delivering the customer experience.

This is the stack that I usually care about:

  • Narrator, the UIA client app.
  • UI Automation itself.
  • The UI framework, which implements the UIA provider API on behalf of the product.
  • The UI implemented by the product developer.

 

My first step when investigating the problem with the menu item interaction is to try to remove Narrator from the equation. If I can do that, then that helps pinpoint where the root cause of the bug may lie.

Narrator is a UIA client app, and so interacts with the product UI of interest through the UIA client API. So if I point another UIA client app at the product UI, and that other UIA client app behaves exactly as expected, then that might suggest the problem lies with Narrator. But if instead, the other UIA client app also struggles at the product UI, then that might suggest I should focus on the product UI, rather than on Narrator.

The bug relates to how Narrator reacts to a change in state of the product UI. In order for Narrator to react to a change in the state of the UI, the UI must raise a UIA event to make Narrator aware of the change. If no event is raised, then Narrator isn't made aware of the change, and can't make your customer aware of the change. For a one-minute introduction to UIA events, check out Part 4: UIA Change Notifications.

So the next question is: Did the appropriate UIA event get raised when the menu item was checked?

The AccEvent SDK tool is a UIA client app, and can report details of UIA events being raised by UIA. So I'll point the tool at the UI, and check whether a ToggleStatePropertyChanged event is being raised when I select the last item in the menu.

The following image shows what I would have expected to have been reported in AccEvent if everything was working as it should.

 

Figure 5: The AccEvent SDK tool reporting a UIA ToggleStatePropertyChanged event being raised by a menu item.

 

However, it turned out that there was no property changed event reported by AccEvent when the menu item was checked. As such, this seemed to suggest that the bug isn't caused by Narrator; rather, the UI isn't making Narrator aware of the change.

 

Step 3: Why isn't the UIA event being raised?

Now, this is where my investigation became a real learning experience for me.

Traditionally I've found that if AccEvent doesn't report a UIA event, then the event wasn't being raised. So I then ask the product teams for details on exactly how the UI is implemented. In desktop UI, if a standard control is being used from the Win32, WinForms, WPF or UWP XAML frameworks, then a UIA ToggleStatePropertyChanged event would be raised by the UI framework as required. And for web UI hosted in Edge, if the UI is defined using semantic, industry-standard HTML, (including in this case making sure it has a role of "menuitemradio" and uses "aria-checked"), then when an element is checked, Edge would raise the expected UIA ToggleStatePropertyChanged event.

And often during a bug investigation such as this with web UI, we'd examine how the UI was defined and find it didn't use semantic, industry-standard HTML. Rather the UI was built to react visually to customer interaction, but not programmatically. Consequently, the product team would update their UI to use semantic, industry-standard HTML, and the bug would be resolved. Jolly good.

In this particular case the UI was defined in HTML, but by a UI library not created by the product team. So it wasn't quite as straightforward for us to investigate what might be happening.

 

Step 4: Did the UI change far more than it seemed when its state changed?

I'm going to confess that at this point, I was stuck. I flailed around somewhat, trying to think of any way to make progress on this. But after a while, I did discover one critically important piece of information, which at least unblocked the product team. So I'll jump to the useful ending of the story here…

In my experience, if AccEvent doesn't report a UIA event, then the event wasn't raised. (That's assuming I set AccEvent to report the events of the appropriate type and scope, in its Settings UI.) But what if AccEvent did receive the event from the UI, yet didn't report it? When AccEvent receives an event, it goes back to the sender of the event to learn more about the sender. AccEvent will gather up details of the sender and display those details in its UI. Perhaps if AccEvent were to have problems gathering up the sender's details after receiving the event, the event wouldn't get reported?

So, say the menu item raised the expected ToggleStatePropertyChanged event. Having done that, the product then destroys that menu item. It then creates a new menu item, in the checked state, and inserts the new menu item into the menu, in the same place that the previous menu item was. While this is happening, a UIA client app listening for events (such as AccEvent or Narrator) receives the event, and returns to the source element to gather details about it. The attempt to get those details fails, because the sender's been destroyed. As such, AccEvent and Narrator both discontinue the attempt to react to the event. And given that the new menu item was always checked once inserted into the UIA tree, no ToggleStatePropertyChanged event was raised by that new menu item.

That hypothesis seemed rather unlikely to say the least, but it would match the results that we were experiencing.

So to pursue that line of thought further, we needed to know whether the UIA element representing the menu item of interest was the same UIA element both before and after the change in checked state. Comparing UIA properties such as the Name or AutomationId wouldn't help here, as they're very likely to remain unchanged throughout. But if the UIA RuntimeId property has changed, then it's not the same element. (I mentioned something about RuntimeIds a while back, at Don't use the UIA RuntimeId property in your Find condition.)

So I pointed the Inspect SDK tool at the menu item when the item was not checked, and then again once it had become checked. And having done so, I found the RuntimeIds were different. For example, in one run-through, the RuntimeId changed from "2A.80A44.4.816C", to "2A.80A44.4.81C5".

 

Figure 6: The Inspect SDK tool reporting the UIA RuntimeId property for a menu item.
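For anyone who'd rather script these Inspect-style checks, here's a minimal sketch assuming Python's pywinauto package and its UIA backend; the window and menu item names are placeholders from my simulated UI. The iface_toggle property resolves the Toggle pattern from Step 1, and element_info.runtime_id exposes the RuntimeId we just compared.

from pywinauto import Desktop

# Attach to the app hosting the menu (the title is a placeholder).
window = Desktop(backend="uia").window(title="Bird Browser")
item = window.child_window(title="Towhee", control_type="MenuItem").wrapper_object()

# Step 1's check: the Toggle pattern state (0 = Off, 1 = On, 2 = Indeterminate).
print("ToggleState:", item.iface_toggle.CurrentToggleState)

# Step 4's check: capture the RuntimeId before and after checking the item;
# if the two values differ, the element was destroyed and recreated.
print("RuntimeId:", item.element_info.runtime_id)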

 

At this point, it seemed that we knew enough to follow up with the owners of the library which created the menu UI. I strongly suspect that the action being taken to replace the menu item UI during the interaction is leading to UIA clients like Narrator not being able to make the customer aware of the change in state at the time the change happens. Whether the ultimate resolution is to have the library updated to not recreate UI in this way during customer interaction, or to replace the use of the library with semantic, industry-standard HTML which simply checks or unchecks an item which lives through the interaction, I don't know. But at least the product team is unblocked.

 

Summary

This investigation has been a reminder of a couple of things:

  • The visual representation of UI is no indication of the programmatic representation. Our customers require both representations to be a good match for the meaning of the UI, but depending on how the UI is implemented, that requirement might not be met by default, particularly for web UI. So be sure you're familiar with both representations.

     

  • Before considering leveraging any library to present UI in your product, where the action taken by the library is outside of your control, always be sure that the UI that the library provides is fully accessible. Whether that UI is a menu item or a full-blown complex chart, you don't want to risk shipping UI that's inaccessible. Your customers won't care that it's some library UI hosted in your product that's blocking them. All they care about is that they can't use your product.

     

And by the way, while this particular investigation has involved web UI, the principles apply to any UI. For example, say you want your UWP XAML app to make your customers that are using a screen reader aware of some important change in the app's UI. You call FrameworkElementAutomationPeer.FromElement() to get the AutomationPeer associated with a control, then call that AutomationPeer's RaisePropertyChangedEvent(), and then for some reason destroy the control. If Narrator or any other UIA client quickly comes back to the app to learn more about the UI that raised the event, it's not going to be able to do much that's helpful to your customer if that UI's already been destroyed.

 

'Til next time.

Guy

Investigating performance degradation issue South Central US – 07/14 – Investigating


Initial notification: Saturday, July 14th 2018 11:10 UTC

  • We're investigating a performance degradation issue in South Central US.

  • Next Update: Before Saturday, July 14th 2018 11:45 UTC

Sincerely,
Pedro

Salmon fishing Washington with some of the Power BI MVPs


In a couple of days a bunch of the Power BI MVPs will be in town for the Microsoft Business Applications Summit. While they're here I offered to take some of them (Mike Carlo, Seth Bauer, and Phil Seamark) out salmon fishing, and a couple of weeks ago Phil asked me how we will be fishing.

While there are a ton of books on the topic of salmon fishing, we will be using one of the following four techniques, fished from either a diver or a downrigger:

  1. Hoochies
  2. Spoons
  3. Whole Herring
  4. Whole Anchovies

I have listed these in what seems to be the order of most productive/popular...

(to be finished when I get back from the beach!)

Hoochies:

Hoochies are smallish (3.5-4") hollow vinyl skirts that typically have a plastic body with tinsel inserted into it, and are fished with a small piece of herring.

Unlike the other lures, hoochies have no action of their own and are typically fished behind a flasher, which imparts flashing/darting movements. Below is a picture of a hoochie, and another image of a flasher with a spoon.

...and since you cannot fish a flasher behind the Saltydawg/Seastriker divers I use, this relegates hoochies to downrigger-only fishing on my boat.

Spoons

On your table these lures, made from thin stamped brass/steel then chromed and powder coated, really do not look like something a salmon could mistake for a meal, but at ~3 mph in the water their flashing and darting is amazingly similar to a bait fish. Unlike hoochies they do not need bait, they self-clear weeds, and they can be fished without a flasher, making them amazingly productive and a common "go to" technique. The picture below shows the typical patterns we fish and a couple of the divers I use.

Whole Herring

Arguably my favorite technique. While spoons work just as well behind a flasher or "naked", herring tend to work better naked, fished directly behind a diver, often quite shallow and close to the boat.

I think the reason for this is that the flasher's darting motion tends to wash the herring out and beat them up. Below are standard rigging techniques; maybe worth noting that I do not rig my herring like this, nor do I fish them as cut plugs. I will post a picture on my next trip.

 

Whole Anchovy

Deadly when the salmon are keying on small bait fish - which is often the case! Unlike herring, these are fished with helmets to keep them from washing out. Coupled with the fact that the ones you get from the bait receiver tend to be small enough to need smaller hooks, this makes them a little trickier to fish than herring.

Downriggers and divers

We will be trolling around 3 mph and fishing over 300' deep. To get your gear down to the fish you need some assistance, and this can be either a diver or a downrigger. A downrigger is an electric reel, a steel boom, and a 15 lb lead weight that drags your gear to the correct depth. When a fish hits, a release lets them separate, enabling you to fight the fish directly. This enables fishing from the surface down to ~400'. Alternatively, and my preference, you can use a small planing surface, called a diver, to drag your spoon/herring down. While this is much simpler and faster, it won't fish as deep, and you cannot use flashers, as they will trip the diver, causing it to plane back to the surface. Also, divers put a lot of pressure on the rod, possibly pushing your rod selection to something heavier than you would fish with a downrigger.

 


OSD Video Tutorial: Part 3 – Task Sequencing


Part 3 of the series just went live – check it out here.

This is session three of a series that details the Operating System Deployment feature of Configuration Manager. This session focuses on the task sequence. A detailed discussion of a basic image deployment task sequence is provided, along with a review of all default task sequence steps.

Microsoft vs Work or School Accounts


Most Microsoft services ask for an account to sign in, giving us the choice between using a Microsoft Account or a Work or School Account (some services even allow accounts from other providers such as Facebook, GitHub, etc.). Many people think it is simply one more email address that will never be used, but there is actually much more involved.

Work or School Accounts

These accounts are backed by a directory in Azure and have advanced management mechanisms; the users or credentials are governed by organizations, and it is possible to assign permissions or subscriptions to services such as Office 365 and Power BI, among others.
All directories come with the default form domain.onmicrosoft.com, but they can also have custom domains, like any email address.

Microsoft Account

People usually associate these accounts with Hotmail or Outlook email accounts, but an email address from those domains is not required; a Microsoft Account is a credential associated with an email address, which can be a corporate address or one from a public provider such as Gmail or Yahoo.

Multiple accounts for one email address

Something that confuses many people is the fact that it is possible to have both a work or school account and a Microsoft Account using the same email address; they are different credentials, with different passwords, subscriptions, and service memberships. When signing in with an email address that has multiple accounts, a dialog box appears asking which account we want to use. They are easy to tell apart by their icons: work or school accounts are represented by a picture of a badge, like those used in companies.

This is a simple article, but an important foundation for others to come. Until next time!
--Rp

Lift off this coming academic year with superb training free from Microsoft




Mark Anderson is a former teacher and school leader, and is now an award-winning author, blogger, speaker, thought leader and trainer around all things to do with teaching, learning and the effective use of technology in the classroom.

Mark firmly believes that education is a force for good, and under his moniker of the ICT Evangelist he strives to demonstrate how technology can help make a big difference to the lives of learners and teachers alike.

He's taking over as our guest editor over the summer with a series of blog posts highlighting the great things you can do with technology so that it can have the impact it rightly should!



Imagine if you went to the doctor with an ailment and they told you that they hadn't updated their medical knowledge and skills since they left university. You'd be horrified, right?

Continuing Professional Development is key to ensuring that all professionals are able to make the most of cutting-edge research, knowledge and skills in their various disciplines. The same is true for us as teachers. Given the nature of our work, we have a moral imperative to ensure that we do the best we can by our learners, and part of that is updating our skills in aspects of our professional work.

When it comes to professional learning opportunities for teachers, the courses we undertake to update our skills aren't always accredited; not with Microsoft, though! The Microsoft Educator Community (MEC) is a fantastic place where you can explore lots of different types of learning activity to support your innovative practice as a teacher, and gain accreditation for it too!


How do you get started?


The first thing you'll need to do is sign up for the Microsoft Educator Community at education.microsoft.com. Many choose to use their school email address to help identify themselves within the community (you may wish to do the same). There are more than a quarter of a million educators on there, not just learning with you but also sharing their ideas and resources, asking and answering questions and much more.

Once you're signed up to the community all you then have to do is choose which courses you'd like to complete and follow the simple instructions. The community gives you access to rich content to help embed Microsoft solutions into your curriculum and now is a great time to start thinking about doing that as we roll towards the start of the new academic year.


What can I learn?

Access to all of the courses available to you can be found in the MEC. You can choose from many different learning foci and classroom activity types, with opportunities exploring Project Based Learning, Collaboration, Creativity, Critical Thinking, Pedagogy and more. Embedding Microsoft tools in the classroom has never been better supported, with courses on teaching with technology, teaching with Minecraft in the classroom, and many more.

Each course is rated by its difficulty, ranging from Beginner to Advanced with various Intermediate levels in between. Each course is also worth a number of points, which are awarded upon successful completion.

Once you have achieved a score of 1,000 points or more you will be awarded your Microsoft Innovative Educator (MIE) badge. A proud achievement and a great way of benchmarking your skills too. Alongside the MIE badge there are lots of other great badges you can earn for undertaking different activities through the MEC.


For those of you who are really keen to develop your skills further, beyond those shown above, there are lots of other badges you can earn linked to things such as Pedagogy, Minecraft, use of OneNote, sharing lesson plans and more. For the full lowdown on the badges you can earn, check out the badges page on the MEC here.

Join the MEC today and really lift off in your innovative classroom this coming academic year!


Follow Mark Anderson on social media now!

Twitter > @ICTEvangelist

Instagram > @ICTEvangelist

Facebook > /theictevangelist

LinkedIn > /themarkanderson

Blog > ictevangelist.com

 



Next on the Menu – Ring Buffer Target Deep Dive


In this SQL Snacks™ we will examine the Ring Buffer Extended Event target. Arguably the most flexible of the targets (despite a few shortfalls), it is a good starting point for examining the suite of targets available to us.

In a future SQL Snacks™ we will examine the file target, which is very powerful. We will look at programmatic ways to consume the file target, and we will also dive deep into the capabilities of the SSMS UI for manipulating XEvent target data.

Go Back and Go Forward buttons in Microsoft Outlook to navigate items history like in a browser


If, like me, you often jump from one mail item to another mail item, or to a calendar item, and then need to get back to the previous item, the Go Back and Go Forward buttons in Microsoft Outlook can be a real time saver.

To do so, go to File > Options > Quick Access Toolbar, as in the following picture:

The "Go Back" and "Go Forward" buttons will then appear in the upper left corner (the Quick Access Toolbar). Now you can easily navigate back and forth through your Outlook items history.

Supercharging the Git Commit Graph IV: Bloom Filters


We've been discussing the commit-graph feature in Git 2.18 and how we can use generation numbers to accelerate commit walks. One area where we can get significant speedup is when presenting output in topological order. This allows us to walk a much smaller list of commits than before. One place where this breaks down is when we apply a filter to our results.

For instance, if we filter by a path, then we are asking for a set of commits that modified that path. The following Git command returns the commits that changed "The/Path/To/My/File.txt" in the master branch:

git log master -- The/Path/To/My/File.txt

If the path is not modified very often, then we need to walk more of the commit graph in order to return the results. What's more, we need to walk a number of trees to determine if the path was modified. In the example above, we need to walk five trees: the root tree and four nested trees for the directories above "File.txt".

These behaviors combine to make some history calls slow. But in Visual Studio Team Services (VSTS) we speed up file history calls using some powerful tech: Bloom filters.

Today, I'll describe how we use Bloom filters to accelerate file history calls in VSTS and how we plan to extend the commit-graph feature in Git to use them, too.

File History in Git

Before we get into the details of Bloom filters, we first need to describe how file history works in Git.

Each commit in Git points to a complete description of the working directory at that point in time. In this way, Git is a Merkle tree. Files are represented by blobs, and directories are represented by trees (which reference blobs and other trees), and commits reference their parent commits and their root tree object. A diagram below shows a simple repo with three commits and a few trees and blobs.

Listed above the commits are the paths that are added or modified by that commit. Note that trees and blobs are shared between commits if they have the same contents. This is what allows the Git object database to have each commit store a snapshot of the entire working directory without storage growing out of control. This also means that when we are performing a checkout or file history operation, we are walking the object graph, a supergraph of the commit graph.

This makes Git very fast when you want to do a git checkout, but can make file history more difficult. In order to determine if a commit "modified" a path, we actually need to determine if that path is different from the path in a parent. To compare a path between two commits, we need to walk two lists of tree objects, parsing each one before finding the next. This is more expensive the deeper the path.

This is further complicated in that Git uses file history simplification by default, and the most important condition is whether the first parent is different. If a path is the same on the first-parent, then we say the commit did not modify the path and we only walk to the first-parent. This is very important to our discussion below, so I recommend you read this article about file history simplification if you want to follow along in full detail.

This computation to determine if a path is different between a commit and a parent is very expensive, especially as the number of subfolders increases. Each folder in the path is another tree that needs to be found, parsed, and examined. We can short-circuit if two trees at the same level of the path are equal, but if our full path doesn't change very often and a sibling path does change often, that is not enough to help us.

For example, suppose we are performing file history for a path /The/Path/To/My/File.txt. The animation below shows a line of eight commits and the chains of trees that we need to walk as we check each commit to see if it changed the blob at that path. Only three commits fit our file history filter, as they have a different blob at that path than their parent; the last commit is a root commit containing the path, so it is an "add".

Instead of walking these trees, we would like to have an oracle that can tell us "These two commits have the same content at this path" or "These two commits have different content at this path". One such oracle could be a list of all commit-path pairs where that path changed at that commit, and we could look up each commit-path pair for our path.

That solution requires a lot of space. There was a point in time where VSTS stored that list in SQL and that table was over 60 GB just for the Linux kernel repository. That's more than the data you get when you clone that repo! (We have since deleted the table, since we don't need it anymore!)

Instead, we will settle for an oracle that can provide these two answers:

  1. This path is the same between this commit and its first parent.
  2. This path is probably different between this commit and its first parent.

The "probably" in that second answer is the reason we can use Bloom filters to implement this oracle. If we get a "probably" then we can compare the two commits and find out the real answer. We normally did that work anyway, but now we will skip comparing the commits if we get the first answer. In the figure above, the light-colored trees are objects we could avoid walking if we had an oracle like this one (and it was always right). In real-world repos, the density of "skippable" trees is much higher than in this toy example.

We'll talk about the full application, but first let's learn about Bloom filters.

What is a Bloom filter?

A Bloom filter is a probabilistic set. To use one, we create a memory region of a certain size (relative to the expected number of elements), then "add" elements to the set by flipping some bits in that region to "on". We then ask the set

Do you contain the element X?

A regular set would provide two answers: "yes" or "no". A probabilistic set relaxes these answers by allowing "maybe". For a Bloom filter, we specifically allow the following responses:

  1. X is definitely not in the set.
  2. X is probably in the set.

The power of a Bloom filter is that we can avoid doing some hard work when we get the answer "definitely not". To be correct, we need to check -- using other means -- that the "probably" answer is actually "yes", but if we've done it right the false positives are very rare.

To create a Bloom filter, we need two magic constants. Recommended values of these constants are 10 and 7, so I'll just use the concrete numbers instead of constants. These numbers roughly correspond to the size and density of the filter.

Size: If we expect the Bloom filter to contain N elements, reserve at least 10N bits. These bits all start in the "off" position.

Density: For each element X we add to the filter, we will set 7 bits to the "on" position. We will use seven hash values based on X to determine these positions. It turns out that we don't need seven independent hash functions; instead, we can take two mostly-independent hash functions (such as the .NET hashcode and Murmur3) and combine them into an arbitrary number of hash values.

Here is where the magic happens. To check if the Bloom filter contains an element, we see if the 7 bits corresponding to the 7 hash values are on. If any are missing, then we definitely did not add that element. If they are all on, then we probably added that element, but it is possible that we had enough hash collisions that this is a false positive. The trick is setting the magic constants to balance the expected false positive rate with the storage cost. The values 10 and 7 give roughly a 1% false-positive rate.

In the animation below, we create a Bloom filter and fill it with three elements, x1, x2, and x3.

After adding the elements, we then test that x1, x2, and x3 are in the set, which succeeds. We then test elements y1 and y2. The first, y1, is not in the set as it is missing a bit for one of the hash values. However, y2 reports being in the set. Since the image is colored, we can see that the bits were set by different additions, but we only store single bits. Thus, we must say that y2 is probably in the set.

With the magic constants, we expect less than 7/10 of the bits being on (we expect some collisions when adding the elements). If the hash algorithms are sufficiently distributed, then we expect a probability less than 7/10 that any single hash value has its bit on. To have all seven values on then multiplies across this probability. For a full description of how to calculate a false-positive rate, go read a paper on it. Whatever values you use in your implementation, it may be good to generate random inputs and measure the false-positive rate yourself.
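To make the constants concrete, here is a minimal Python sketch of such a Bloom filter, using the double-hashing trick described above. The choice of MD5 and SHA-1 as the two source hashes is mine for illustration; any pair of mostly-independent hash functions will do.

import hashlib

class BloomFilter:
    """Probabilistic set using the size/density constants 10 and 7 from above."""

    def __init__(self, expected_elements):
        self.num_bits = max(1, 10 * expected_elements)   # "size": at least 10N bits
        self.num_hashes = 7                              # "density": 7 bits per element
        self.bits = bytearray((self.num_bits + 7) // 8)  # all bits start "off"

    def _positions(self, item):
        # Derive 7 positions from two mostly-independent hashes:
        # position_i = (h1 + i * h2) mod num_bits.
        data = item.encode("utf-8")
        h1 = int.from_bytes(hashlib.md5(data).digest()[:8], "little")
        h2 = int.from_bytes(hashlib.sha1(data).digest()[:8], "little")
        return [(h1 + i * h2) % self.num_bits for i in range(self.num_hashes)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        # False: definitely not in the set. True: probably in the set.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

Querying a filter seeded with other values returns False for most inputs; the occasional True is exactly the ~1% false positive discussed above.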

How does a Bloom filter help with file history?

For each commit, we can compute the list of paths that change relative to the first parent of that commit. For single-parent commits, this is usually the changes introduced by a single developer in a small unit of time. For merge commits, this is usually the list of changes introduced by a pull request. Remember: if a file changed, then every parent folder also changed.

If the number of changed paths is not too large (we use 512 as a limit in VSTS) then create a Bloom filter and seed it with the values for those paths. If there are more than 512 changes, then we mark the commit as "Bloom filter too large" and check every path. We selected this limit after finding the number of changes in a commit against its first parent roughly follows a log-normal distribution and commits with more than 512 changes are very rare. These commits are usually large, automated refactoring changes that affect most paths in the repository.
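As a sketch of that seeding step, using the BloomFilter class from earlier: every parent folder of a changed file counts as changed too, and commits over the limit get no filter at all. (Whether VSTS applies the 512 limit before or after adding parent folders is a detail I'm guessing at here.)

def filter_for_commit(changed_files, limit=512):
    """Build the per-commit filter over paths changed vs. the first parent.

    Returns None to mean "Bloom filter too large": check every path instead.
    """
    if len(changed_files) > limit:
        return None
    paths = set()
    for f in changed_files:
        parts = f.split("/")
        # A changed file implies every parent folder changed as well.
        for i in range(1, len(parts) + 1):
            paths.add("/".join(parts[:i]))
    bf = BloomFilter(len(paths))
    for p in paths:
        bf.add(p)
    return bf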

In the animation below, we repeat the file history query from the previous animation. This time, we have Bloom filters for the changed paths at each commit. If the Bloom filter returns with "definitely not", then we do not need to walk trees to check if the commits have the same file at that path. Otherwise, we walk trees to check equality. In one case, the Bloom filter provides a false positive.

In reality, there are many more "same" commits than "edit" commits for deep paths, so the filters save a lot more walking than in this small example.
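Putting the oracle into the simplified first-parent walk looks roughly like this. The commit objects and the trees_differ/contains_path helpers are hypothetical stand-ins for the real tree-walking machinery:

def file_history(head, path, filters, trees_differ, contains_path):
    """First-parent file history, consulting per-commit Bloom filters first."""
    results = []
    commit = head
    while commit.first_parent is not None:
        bf = filters.get(commit)
        # "Definitely not changed" lets us skip the tree comparison entirely.
        # A missing filter ("too large") or a "probably" answer forces the
        # expensive check, which also weeds out the rare false positives.
        if bf is None or path in bf:
            if trees_differ(commit, commit.first_parent, path):
                results.append(commit)
        commit = commit.first_parent
    if contains_path(commit, path):
        results.append(commit)  # the root commit introducing the path is an "add"
    return results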

With a false-positive rate of 1%, this means that we can theoretically speed up our file history algorithm by 100x by avoiding 99% of the computation. In reality this is closer to a 6x speedup for a random sample of paths. However, we have observed speedups as high as 20x for rarely-changed paths, which was the main application of this effort. The best part is that this requires less than 100MB of extra storage for a repository the size of the Linux kernel.

I personally believe this application of Bloom filters is quite novel, since we are testing one "key" against many small filters. Normal applications check many keys against a single, very large filter.

What does this mean for Git?

The Bloom filter feature already exists on the VSTS Git server, helping file history questions since November 2016. Our goal now is to deliver this for all Git customers!

Earlier, we discussed the commit-graph file format. This file is organized into a list of "chunks", sorted by a table of contents at the beginning of the file. We can add an optional Bloom filter chunk, so users who want to do some extra computation during Git maintenance can create a commit-graph file containing this Bloom filter information. We could also send the commit-graph file as a new verb in Git Protocol V2. We discussed this addition to the file format and protocol at the Git Merge 2018 contributor's summit and on the Git mailing list.

Work to implement the feature is planned, so keep an eye out for it in a future version of Git!

Convert Macros to Constexpr


[Original post address] Convert Macros to Constexpr

[Original post author] Augustin Popa

Visual Studio 2017 version 15.8 Preview 3 is now available for download. In this release we have made many improvements to the developer experience. One of the major themes in 15.8 is code modernization, and macros play a key role in it. In 15.8 Preview 1 we made it possible to expand macros in Quick Info tooltips; now, in Preview 3, we are pleased to announce that Visual Studio can convert them to modern C++ constexpr expressions, and provides a corresponding option to perform the conversion. This can help you clean up and modernize your code. Like other editor features, it is configurable and can be turned on or off to suit your needs.

Quick fix to convert a macro to a constant

When viewing your code in the editor, you may notice "..." marks under certain macros in #define directives. These "..." marks are called suggestions, a separate classification from errors (red squiggles, for most code problems) and warnings (green squiggles, for simpler code problems). A "suggestion" covers low-risk code issues.

When the option is selected, a preview window appears showing the change that will be made:

Once the change is applied, the converted constexpr expression appears in the code editor:

This feature is useful for constants, and can also be applied to basic expressions in function-like macros.

You may notice that there is no "..." under "Max" in the code above. For function-like macros, we do not yet do enough preprocessing up front to guarantee that the conversion will succeed; this is to preserve the performance of the Visual Studio IDE, since we only want to show the "suggestion" indicator for conversions we consider sound. Even though we do not show the "..." indicator, you can still find the conversion option in the light bulb menu; the macro is only fully processed for conversion when you choose to apply the change in the preview window. The macro is then converted to the following template:

In general, you can try converting a macro to constexpr yourself, but do not expect it to always work when you do not see the "...". Not all macros can be converted to constexpr, because macros are used in very broad ways, many of which have nothing to do with constexpr constants or expressions.

Tools > Options configuration

You can configure the feature under Tools > Options > Text Editor > C/C++ > View > Macros Convertible to constexpr. There you can choose whether to display it as a suggestion (the default), a warning (green squiggle), an error (build break, red squiggle), or none (hides the text editor indicator).

Give us feedback!

This is the first release of this feature, and we would greatly appreciate your feedback in the comments below on how we can make it better. If you run into any product issues, please let us know via Help > Send Feedback > Report A Problem in the IDE.


Are you not receiving release notifications for a group?


Today a customer reported that VSTS does not send the approval pending email when the approver is a VSTS group. The customer also mentioned that the approval email is sent when the approver is an individual user.

I was able to reproduce the problem on my test account, and here are the steps I took to resolve it:

1. Looked at the account-level notification admin settings and observed that the default delivery option for all groups is set to "Do not deliver".

2. Looked at the notification admin settings for my test group and observed that its default delivery option is also set to "Do not deliver".

3. Changed the default setting for my test group to "Deliver to individual member" and it worked.

Enjoy !!

When you cannot use Azure IoT Device Provisioning Service


Hi!

You may know already about the Azure IoT Device Provisioning service, if not, head to https://docs.microsoft.com/en-us/azure/iot-dps/ for more information.

The idea behind DPS is short and simple: imagine you're the manufacturer of an IoT device and you want to enable your device to "just work" once it's delivered to your customer. The customer unpacks the device, plugs in Ethernet, and the device lights up, starts talking to the cloud, and receives device configuration information. So far, so easy, you think: in production, let's just provision an Azure IoT Hub device connection string or an x.509 certificate and that's all.

So why would you need DPS? Let's say your device was configured at the factory, but it's been sitting on a shelf for years. And then it gets exported to a country you never thought you would sell devices to back when you created your service. That's when you decided that you would use an x.509 certificate and a single IoT hub. But now, a couple of years later, you have ten IoT hubs in different geographies, and the initial certificates you installed in the device have expired since you set their validity to three years. This is where DPS comes in. Your device can now go to the global DPS endpoint and ask: "Is there a new configuration for me?" DPS then looks through its database to find a matching configuration. If it finds one, it encrypts it in a way that only this particular device can decrypt and sends it back to the device. The device then decrypts the configuration information using its built-in hardware security module, and in it finds the configuration for an IoT hub. It then connects with the obtained credentials, receives additional configuration information such as OS and application update instructions, and suddenly works. Your customer is delighted, and you are too, since you now have another device talking to your backend service.

So why wouldn't you do this all the time? Well, there is one important prerequisite for using DPS: you need to add a device-individual public/private key pair to the device at production time, and you need to record the public key in a secure way at that time (or, to be precise, install a certificate used for group registration). Doing so isn't very hard (e.g. if your devices have a built-in TPM 2.0, you can just use the TPM's built-in "EK" for this), but the HSM adds BOM cost, and reading out the information adds time to the manufacturing process.

Now imagine you have a very simple device, such as the teXXmo IoT Button (http://www.iot-button.eu/), which does not have an HSM but needs an Azure IoT Hub device connection string. You could go back to the initial approach and provision every device with an individual connection string at production time. But maybe you don't want to give your manufacturer full access to your production IoT hub for provisioning devices, while you still need an automated way to generate these connection strings.

This is where my simple Quick Device Registration Service sample comes in handy. (Or did somebody say Quick & Dirty Registration Service?) Instead of handing over the keys to the castle (i.e. the IoT hub owner connection string), you install this service as an Azure Function and provide your manufacturer with the access codes to this service, and they can produce device-individual IoT hub connection strings while producing devices. The sample client included in the solution calls the service with a given device serial number and gets back the IoT hub connection string for that device. The service also checks that no serial number is used twice and that each serial number is valid. (In the sample, it just checks whether a serial number is divisible by 7, but you can test in whatever way you can imagine, e.g. CRCs, min/max, etc.) Once the manufacturer has obtained the device connection string, they can write it into the device. Once the device is connected to the Internet, it has all the information necessary to talk to your Azure IoT Hub.
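To illustrate the shape of that exchange, here is a hedged Python sketch of a client calling such a registration service. The endpoint, payload fields, and function-key authentication are hypothetical stand-ins; the real sample (and its actual API) lives at https://github.com/holgerkenn/qdrs. Only the divisible-by-7 validity rule comes from the description above.

import requests

def request_connection_string(serial_number, service_url, access_code):
    """Ask the registration service for a device-individual connection string."""
    # Client-side copy of the sample's toy validity rule (divisible by 7);
    # the service performs the authoritative check.
    if serial_number % 7 != 0:
        raise ValueError("invalid serial number")
    resp = requests.post(
        service_url,                       # hypothetical Azure Function endpoint
        params={"code": access_code},      # function key, not IoT hub credentials
        json={"serial": serial_number},    # hypothetical payload field
    )
    resp.raise_for_status()                # the service rejects reused serials
    return resp.json()["connectionString"] # hypothetical response field

The manufacturer's provisioning station can then write the returned string into each device as it comes off the line.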

But there is another use for this. Imagine you want to use IoT Hub in a software-only product, let's say a digital signage solution that is "just" an application an end user can install on an existing PC. Or it's a driver package that comes with your device, and the device is a PC peripheral that does not talk to the cloud directly. In both cases, no factory provisioning is possible. And while there may be a device-individual identity (e.g. a device serial number), you may not have any place to store (let alone secure) a per-device secret. But you still want to use IoT Hub, since it's such a neat way to get information from your device, send information from the cloud to individual devices, and manage your devices using the device twin.

So you add code to your application or driver stack that calls the registration service during installation. The code reads out the serial number information (or even asks the user to enter it?) and then calls the service to obtain the device-individual connection string. It then stores the string locally (let's say in a configuration file on disk or in the Windows registry) and then uses it to connect to an IoT Hub.

But here, a word of warning is required: if you implement such a solution, the credentials used to call your registration service need to be in your client application or driver stack. So, since the information is there, it can be found using reverse engineering. Still, this is better than leaving the IoT hub owner credentials in the client application or, even worse, leaving behind a signing key that would be able to create valid group registration certificates for DPS. Nevertheless, you should implement some protection against reverse engineering and monitor your registration service carefully to identify potential attackers who may have found the credentials and try to compromise your service.

If you need a secure solution, use DPS (and a hardware security module!). If you can't, then have a look at https://github.com/holgerkenn/qdrs and see if it fits your needs.

Hope this helps,

H.

Office 365 Live Events Use Cases – Michael on the Go


In this episode of Michael on the Go, Microsoft’s Michael Gannotti discusses three separate use cases around the newly announced Live Events for Office 365. Use cases discussed include Corporate Communications, Training, and Employee Engagement. Additionally, Michael discusses extending Live Events usage with custom pages and more.

To learn more about Live Events see:

Michael Gannotti

Michael Gannotti, Principal Technology Specialist
Microsoft, Health & Life Sciences
https://www.linkedin.com/in/mikegannotti

Azure -Windows VM RDP Port got disabled on MS Firewall


Hello All,

In today's scenario I will demonstrate how to recover from a mistake where you have blocked the RDP port in the local MS Firewall of your Azure VM.

In my demo I used a Windows Server 2016 Datacenter VM.

For this demo I created a rule to block RDP requests.

Once I pressed Finish, I was thrown out of the session.

Below we can see the connection was cut and we are no longer able to reach the VM via RDP.

 

Recovery Steps:

  • Navigate to the Azure Portal
  • Go to the VM facing the issue
  • Select Extensions and press +Add
  • On your desktop, create a file and insert the line below

PS C:\Users\tzachie\Desktop> cat .\disable_MSFW_All_Profiles.ps1

Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False

Save the file with any name you like; I named it disable_MSFW_All_Profiles.ps1.

Upload the .ps1 file you have created.

Verify that the extension provisioning succeeded.

Testing the port from PowerShell (for example, with Test-NetConnection -ComputerName <VM IP> -Port 3389):

The port is responsive.

You should be able to RDP now.

 

This is the current MS Firewall state (the VM is at risk):

Fix your firewall rules as soon as possible and turn the MS Firewall back on.

A healthy MS Firewall state.

If you get stuck during the process, or if this debugging is not for you, please raise a ticket with our support and an engineer will help you mitigate the issue.

 

Thank You,

Tzachi Elkabatz

Azure Container Registry Build Supports All Windows Versions (Preview of Preview)


ACR Build, a cloud-native container build capability of the Azure Container Registry, now supports all supported versions of Windows Containers.

In May 2018, Azure announced ACR Build (Preview), a component of Azure's OS & Framework Container Life-cycle Management. Today, we wanted to give customers early access to Windows builds, in addition to the Linux and ARM builds customers have been able to do since May.

ACR Build Enables All Supported Versions of Windows Containers

The most significant element of Windows support is support for all supported versions of Windows, including ltsc2016, 1709 and 1803. In addition to Windows Server Core, Nano Server is also supported by ACR Build.

A suite of tests for Windows containers is available here: https://github.com/AzureCR/acr-builder-os-tests/tree/master/Windows

What is (Preview of Preview)?

Updates to the az CLI will not be publicly available until July 31. There are two ways to get ACR Build with Windows Container support today (July 16th, 2018):

  1. Create a build task with the public az CLI.
    build-task already supports the --os parameter, so you're good to go if you just want build tasks.
  2. To use az acr build, the equivalent of docker build, use a preview build of the az CLI, which can be:
    1. installed directly: https://github.com/Azure/azure-cli#edge-builds
    2. or run from a docker image: https://github.com/Azure/azure-cli#docker
      docker run -v ${HOME}:/root -it azuresdk/azure-cli-python:dev

A quick test

We'll use the most consistent way: a docker container running the dev builds of the az CLI.

  • On Windows or Mac, start a docker container running the latest dev build, including the updates to az acr build:
    docker run -v ${HOME}:/root -it azuresdk/azure-cli-python:dev
  • Note: ACR Build is currently supported in:
    • East US
    • West Europe
    • Central US
    • South Central US
    More regions are coming online weekly, with full global coverage coming "soon".
  • Build a Windows Server Core image, with a .NET Framework app:
    az acr build \
      -t helloworld-windowsservercore:multi-arch-{{.Build.ID}} \
      -f helloworld-windowsservercore/multi-arch.Dockerfile \
      --os windows \
      https://github.com/AzureCR/acr-builder-os-tests.git#master:Windows/Servercore

How this works:

  • az acr build is the functional equivalent of docker build. Rather than having to install the docker client, az acr build sends the context to an ACR Build server in Azure.
  • -t uses the standard docker tag, with the added ability to get a unique build ID.
  • -f uses the standard Dockerfile syntax to reference a Dockerfile that isn't in the root of the context.
  • --os tells ACR Build to use a Windows host. As many framework and platform tags are multi-arch, this parameter is required to specify which OS you require for docker build.
  • https://github... is a remote context for the build. While . is supported for a local directory, acr build and build-task can also take remote git repos, avoiding the need to git clone first.

Since this references the multi-arch tag for Windows and the .NET Framework, this will produce an 1803 version of Windows Server Core. To get a specific Windows version, use the following:

  • Windows Server 1709, using the 1709.Dockerfile:
    az acr build \
      -t helloworld-windowsservercore:1709-{{.Build.ID}} \
      -f helloworld-windowsservercore/1709.Dockerfile \
      --os windows \
      https://github.com/AzureCR/acr-builder-os-tests.git#master:Windows/Servercore
  • Windows Nano Server 1709, using the 1709.Dockerfile for Nano Server and .NET Core:
    az acr build \
      -t helloworld-nanoserver:1709-{{.Build.ID}} \
      -f helloworld-nanoserver/1709.Dockerfile \
      --os windows \
      https://github.com/AzureCR/acr-builder-os-tests.git#master:Windows/Nanoserver

Automated Build Tasks, Based on Git Commits or Base Image Updates

The primary scenario for ACR Build revolves around container OS & framework patching. Just as your builds kick off based on git-commit changes, they can also be triggered as base images are patched.

See Overview of ACR Build, targeting OS & Framework Patching for more details.

Creating automated builds

Note: this can be done directly with the publicly released az CLI; however, we'll use the az CLI docker image for consistency.

  • Fork the https://github.com/AzureCR/acr-builder-os-tests repo
  • Create a Personal Access Token.
    For more info, see the ACR Build docs for creating a build task.
  • Set the PAT to an environment variable:
    PAT=[pastePAT]
  • Create an ACR build task:
    az acr build-task create \
      -n HelloworldWinServercoreLtsc2016 \
      -t helloworld-windowsservercore:ltsc2016-{{.Build.ID}} \
      -f helloworld-windowsservercore/ltsc2016.Dockerfile \
      -c https://github.com/AzureCR/acr-builder-os-tests.git#master:Windows/Servercore \
      --os windows \
      --git-access-token $PAT
  • Trigger a build manually:
    az acr build-task run HelloworldWinServercoreLtsc2016

For more information

For any other feedback, feel free to leave comments here, on UserVoice, or on Twitter (@SteveLasker).

Thanks,
Steve and the great members of the ACR team
