
How to check the version of .NET Core on an Azure App Service


If you would like to check the version of .NET Core installed on the Azure App Service platform, you can execute the following command from KUDU/SCM, which I discuss further here.  This is also shown in Figure 1.

dotnet --version


Figure 1, how to check .NET Core version, which .NET Core version is running on Azure App Service

You can also check the release notes for .NET Core on GitHub here.
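If you also want to see, from inside the application itself, which runtime your code is actually executing on, a minimal C# sketch using RuntimeInformation can complement the Kudu command above:

using System;
using System.Runtime.InteropServices;

public class RuntimeVersionInfo
{
    public static void Main()
    {
        // Reports the runtime the process is actually running on; the exact
        // text of the description string varies by runtime version.
        Console.WriteLine(RuntimeInformation.FrameworkDescription);
    }
}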


How to delete your Azure App Service deployment credential


I wrote this article “Resetting FTP password, using Publish Profile credentials, Azure App Service” here, which may be of interest to you as well.  In this article, if your goal is to delete the deployment credential, then you have landed on the right page.  If you access the Deployment credentials blade shown in Figure 1, you will see this text:

  • “Git and FTP can't authenticate using the account you’re signed in with, so create a new user name and password to use with those technologies”
  • “Use this user name and password to deploy to any apps for all subscriptions associated with your Microsoft Azure account”


Figure 1, Azure App Service deployment credentials

There are two ways to delete the deployment credential:

  • Using the Azure Resource Explorer
  • Using Azure PowerShell

Read through the below for details on both possibilities.

Using the Azure Resource Explorer

I have written a few articles on the Azure Resource Explorer, accessible here; read through them to get a better understanding of what else is possible.

Once logged into the Azure Resource Explorer, navigate to:

https://resources.azure.com/providers/Microsoft.Web/publishingUsers/web

And set both the publishingUserName and publishingPassword to “” as seen in Figure 2.  You need to select the EDIT button and change the mode from Read Only to Read/Write first.


Figure 2, how to delete Azure App Service deployment credentials

Then press the PUT button and, once successful, access the Deployment credentials blade in the portal (I had to log out and then back in) and all the fields will be empty, as seen in Figure 3.

Using Azure PowerShell

I have written many articles on Azure PowerShell, perhaps some of these will help give you a better understanding of other actions you can perform:

The first thing you need to do, after you “Configure PowerShell for running Azure Cmdlets”, is to log in to your Azure account.  You can use the cmdlets below to log in and then, if you have multiple subscriptions, put focus on the one you want.

Login-AzureRmAccount
Set-AzureRmContext -SubscriptionId "########-####-####-####-############"

Once logged in, execute this PowerShell script:

$newUserName = ""
$newPassword = "" 
$PropertiesObject = @{"publishingUserName" = $newUserName; "publishingPassword" = $newPassword;}
Set-AzureRmResource -PropertyObject $PropertiesObject -ResourceId /providers/Microsoft.Web/publishingUsers/web -ApiVersion 2016-03-01 -Force 

After the script executed successfully and after logging out and back in, I saw the credentials were gone, as seen in Figure 3.


Figure 3, delete Azure App Service deployment credentials

Xamarin Update Days Praha, February 20-21, 2018

Starting tomorrow, the Update Days: Xamarin, Praha 2018 conference takes place.

You will see a total of 12 talks that take you through the complete world of mobile application development in .NET.
You will be able to ask questions of experts who have long-term, practical experience with building, debugging and deploying mobile applications.

About the Xamarin Update Days Praha conference

Quite some time has passed since Microsoft acquired Xamarin and made its product available to all developers for free. Xamarin has long since stopped being just a Visual Studio add-in that compiles your C# code so that it runs on iOS and Android phones.
We will introduce you to the capabilities and tools the Xamarin platform offers. You will see how to get device emulators running, how to debug on your own phone, and what you need to compile an application for iOS. You will learn what Visual Studio App Center is for and how to set up continuous integration and automated testing during development.
We will also show you the best practices you should follow when developing real-world applications, such as using an MVVM framework or dependency injection. You can also look forward to two talks on how to create a REST API for a mobile application, how to version it, and how to generate client classes for it using Swagger.

As a closing taster, you will also have the chance to take a look into the world of React Native, an alternative approach that can be used for developing cross-platform mobile applications.

The event is organized by Update Conference s.r.o., which, in cooperation with DotNetCollege s.r.o., organizes developer conferences across the Czech Republic and provides consulting on application development on the Microsoft .NET platform.

Jiří Burian

What is the current .NET version on an Azure App Service


I have an App Service here (currentdotnetversion) that shows the output of the current (when I wrote this article) .NET version using C# code.  Here is a partial code snippet:

protected void Page_Load(object sender, EventArgs e)
{
  const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";
  using (RegistryKey ndpKey = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine,
         RegistryView.Registry32).OpenSubKey(subkey))
  {
   if (ndpKey != null && ndpKey.GetValue("Release") != null)
   {
     LabelPublished.Text = ".NET Framework Version: " +
       $"{CheckFor45PlusVersion((int)ndpKey.GetValue("Release"))}";
   }
   else
   {
     LabelPublished.Text = ".NET Framework Version 4.5 or later is not detected.";
   }
 }
 LabelTargeted.Text = "The targeted .NET Framework version is: " +
    $"{System.Web.HttpRuntime.TargetFramework.ToString()}";
}
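The snippet calls a CheckFor45PlusVersion helper that is not shown above.  A minimal sketch of that helper, assuming the documented minimum "Release" DWORD values for each .NET Framework version (verify the thresholds against the official versioning documentation), could look like this:

// Hedged sketch: maps the registry "Release" DWORD to a .NET Framework version.
// The threshold values are assumptions based on the published minimum release keys.
private static string CheckFor45PlusVersion(int releaseKey)
{
    if (releaseKey >= 461308) return "4.7.1 or later";
    if (releaseKey >= 460798) return "4.7";
    if (releaseKey >= 394802) return "4.6.2";
    if (releaseKey >= 394254) return "4.6.1";
    if (releaseKey >= 393295) return "4.6";
    if (releaseKey >= 379893) return "4.5.2";
    if (releaseKey >= 378675) return "4.5.1";
    if (releaseKey >= 378389) return "4.5";
    // Release keys below 378389 indicate no 4.5+ installation.
    return "No 4.5 or later version detected";
}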

I took the code from here and I got the Microsoft.Win32 assembly via a NuGet package, see Figure 1.


Figure 1, where to get the Microsoft.Win32 assembly, how to find the .NET versions

The code basically checks for the release version in the registry, see Figure 2.  Sure, you can also dump it out using REG QUERY; I discuss more ways to use this command in these other articles:


Figure 2, How to find the .NET versions on an Azure App Service

You can also check it out in the Azure Portal on the App Service –> Application Settings blade, see Figure 3.


Figure 3, How to find the .NET versions on an Azure App Service

I thought about coding the same to get the version which is targeted by my ASP.NET app, because it can be different from the most current version installed on the platform.  I immediately jumped to this static property, which is right:

System.Web.HttpRuntime.TargetFramework.ToString();

Then I saw that there were targetFramework attributes in 2 places within my web.config file and needed to find out which one had precedence and what this meant.

<configuration>
 <system.web>
 <compilation debug="true" targetFramework="4.7.1">
 </compilation>
 <httpRuntime targetFramework="4.7.1"/>
 </system.web>
</configuration>

I found this article very helpful “All about <httpRuntime targetFramework>” which states:

Use <httpRuntime targetFramework="#.#" /> to opt-in to new 4.5 behaviors.

<compilation targetFramework="#.#" /> identifies which reference assemblies are used when performing compilation.

I interpret that to mean that httpRuntime opts your application into the 4.5-or-greater behaviors of the CLR, while compilation points the ASP.NET app at the reference assemblies of a specific version of the .NET Framework.
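One way to see both values at runtime is to compare the framework the assembly was compiled against (driven by the compilation setting at build time) with the httpRuntime target.  This is only a small sketch that I would expect to work in an ASP.NET 4.5+ web application; the class and method names are my own:

using System.Reflection;
using System.Runtime.Versioning;
using System.Web;

public static class FrameworkInfo
{
    public static string Summarize()
    {
        // The TargetFrameworkAttribute records the framework the assembly was built against.
        var compiledAgainst = Assembly.GetExecutingAssembly()
            .GetCustomAttribute<TargetFrameworkAttribute>()?.FrameworkName ?? "unknown";

        // HttpRuntime.TargetFramework reflects the <httpRuntime targetFramework> setting.
        var runtimeTarget = HttpRuntime.TargetFramework.ToString();

        return "Compiled against: " + compiledAgainst + "; httpRuntime target: " + runtimeTarget;
    }
}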

I discuss the difference between side-by-side and in-place upgrade here “Lab 5: Basic and Advanced Application Pool Settings” which you should also understand to get the total context.  There is also a link to this article as well “.NET Versioning and Multi-Targeting - .NET 4.5 is an in-place upgrade to .NET 4.0”.


Microsoft Windows Education Leadership Summit 2018


Digital education in focus:
The Microsoft Windows Education Leadership Summit 2018

On Friday, 23 February, the Microsoft Windows Education Leadership Summit 2018 takes place as part of the didacta 2018 education fair. At this expert gathering in Hanover, attendees will get an exciting overview of the latest possibilities and solutions in digital education. All educators from Germany and Western Europe, as well as providers of education apps (ISVs), are warmly invited to take part in this innovative summit.

Modern teaching and learning tools from Microsoft and ISV partners

In the “Berlin” room of the convention centre, from 10:00 to 14:00, you can learn how education apps can improve everyday teaching at schools and educational institutions. Among other things, you will learn a great deal about the latest Windows 10 solutions for education. The focus is in particular on apps for science, technology, engineering and mathematics. In addition, there are modern software products for mixed reality, accessible learning, languages and social studies. You will get a comprehensive overview of the possibilities and advantages of digital teaching and learning tools in the classroom. We will also present Microsoft's vision for the digital classroom of the 21st century.

The agenda at a glance

  • Check-in (10:00 – 10:15)
  • Customer Session (10:15 – 11:30)
  • Partner App Showcase (11:30 – 12:30)
  • Partner Session (12:30 – 14:00)

Microsoft Windows Education Leadership Summit:
Book your seat now!

Seats for the Microsoft Windows Education Leadership Summit 2018 are limited, and anyone interested has until 20 February to register. Further information and booking options can be found here.

VSTS Gems - Identify stale branches


VSTS is a great platform, but did you know about its gems?

Follow this short post series where we uncover some of its coolest features.

Gem

Identify stale branches




Area

Code

TFS Availability

TFS 2018

Value

  • Maintaining a clean repository when there are many branches and contributors

Full Description

Keeping your repository clean by deleting branches you no longer need enables teams to find branches they care about and set favorites at the right granularity. However, if you have lots of branches in your repo, it can be hard to figure out which are inactive and can be deleted. Now it is easier to identify “stale” branches (branches that point to commits older than 3 months).
To see your stale branches, go to the Stale pivot on the Branches page.


Find more gems on the Rangers blog

Find more gems on Twitter

DfE and Microsoft bring Open Data to life with Power BI


Microsoft and the Department for Education (DfE) have recently collaborated to showcase how open data published by the UK government can be modelled and visualised with Power BI to bring their data to life and make it more engaging.

School Workforce Census

Since 2010, the DfE has published their annual School Workforce Census data. This includes statistics on the number and characteristics of teachers, teaching assistants and other non-classroom-based school support staff in all schools in England. The data is published as part of their Open Data Strategy and includes a copy of the raw data as spreadsheets in ODS or Microsoft Excel format. This is accompanied with a static PDF report, prepared by the DfE's statisticians, that summarises and describes the key findings.

To understand why we undertook this project and the benefits we wanted to achieve, let's briefly consider the current state of Open Government Data in the UK.

What is Open Government Data?

The UK government is a strong advocate and practitioner of making public sector information available to the public as open data. This means that it is: accessible to all at no more than the cost of reproduction; available in a digital, machine readable format; and free of restriction on its use or redistribution (under the Open Government Licence terms).

A key driver for publishing this data is transparency. This includes making data available to allow members of the public to make informed choices about public services, holding central and local government accountable for spending and outcomes, and enabling the private sector to develop innovative new services.

In their 2012 Open Data Whitepaper, the government "committed to making open data an effective engine of economic growth, social wellbeing, political accountability and public service improvement". To date, they have published over 40,000 datasets on the Data.Gov.UK site, adopted the principles of the Open Data Charter, and the UK is now ranked first in the world by The Open Data Barometer for readiness, implementation and impact, and second in the Global Open Data Index.

Who uses Open Government Data?

The government provides statistics on the number of views and downloads for published datasets and views for the publishers of that data. This provides us with some insights, but it cannot tell us anything about the profiles of the persons or organisations accessing the data or what they did with it.

The government's open data principles focus on the quality, consistency, availability and open access of data. Open data should be in a re-usable format, which means it is structured and stored in a machine-readable form based on open standards. This is typically one or more CSV files depending on the volume of the data, which restricts its usage to those with the tools and skills to analyse the data. Alternatively, it is made available through Applications Programming Interfaces (APIs). It is reasonable to assume, therefore, that the primary users are predominately government departments themselves, researchers, journalists, commercial organisations, or software developers.

Most individuals and organisations are dependent on the private sector to develop services and applications on top of open data to make it easy to use and relevant for their needs. There is a growing industry of organisations and start-ups building these solutions and the government will actively promote apps that utilise open data on its open data portal. Transport for London is a great example of the impact that open data can have, while also recognising that this success has been built on a determined effort over 10 years.

What were the objectives of this project?

In the words of Isi Avbulimen, Statistical Officer at the DfE, who worked on the project…

"The School Workforce Power BI dashboard is a joint project between Teachers Analysis Division at the Department for Education and Microsoft. It is an interactive dashboard that will allow users to visualise National, Regional and LA Level data in a colourful, interactive, and graphic form utilising a user-friendly format.

The dashboard is based entirely on published data from the School Workforce Census (SWC). The data is currently published within a large, complex non-machine readable Excel file comprising of over 20,000 rows, making the data largely inaccessible to the general public who do not have expertise in data manipulation. This dashboard will allow users to pull out the information they are interested in and perform their own analysis.

The dashboard will allow users to examine time series data for a range of different measures at the click of a button, e.g. users can easily examine how the headcount of teachers within the schools of York has varied over time."

How did we prepare the data for analysis?

We performed several steps in Power BI to get the data into the correct shape before we created the visualisations. These included loading and consolidating multiple year files into single tables, joining it with additional school data from 'Get Information About Schools', conforming the data to ensure consistency, converting the flat files back into a data model with dimension and fact tables, and adding some calculated measures and shape map files.

Screenshot of one of the source files downloaded from Data.Gov.UK

We also scaled back the scope of the project. A decision was taken to exclude all data for 2010 because there were too many structural differences with other years. We also removed the school level data because it is subject to data suppression to prevent the identification of individuals. This meant that school level aggregations could be slightly different from local authority level aggregations and might lead to confusion. An extension of the project could be to republish this data separately.

Screenshot of one of the Power BI report pages

What lessons did we learn?

The opportunity to analyse trends and changes over time provided key insights that would not have been possible without going through the process of consolidating multiple files.

Combining data from multiple sources (e.g. Get Information About Schools) would not have been practical without transforming the data into a related data model. Conversely, building data models provides the opportunity to combine data sources from multiple public sector bodies, a key objective of the Government Transformation Strategy.

The latest UK assessment by the European Data Portal identified issues with users' awareness of open data and understanding the value of these datasets. This project took a single dataset and reimagined how it could have been presented (alongside the open data) to lower the barriers to its use.

There are real opportunities to make open data easier to find, access and use. These include: predesigned reports that allow people to get started immediately and interactively query the data; agile tools that empower analysts to explore the data and build their own reports; and natural language interfaces and AI-powered agents that will query the data on your behalf.

The latest generation of self-service visualisation and analytics tools, such as Power BI, will provide all these capabilities. For many requirements this will be a more cost-effective and viable option than directly analysing the open data files or developing an application to do it.

Finally…

Click here to try the report for yourself

Release Notes for Field Service and Project Service Automation Update Release 4


Applies to: Field Service for Dynamics 365, Project Service Automation for Dynamics 365, and Universal Resource Scheduling (URS) solution on Dynamics 365 9.0.x

 

We’re pleased to announce the latest update to the Field Service and Project Service Automation applications for Dynamics 365. This release includes improvements to quality, performance, and usability, and is based on your feedback and requests.

This release is compatible with Dynamics 365 9.0.x. To update to this release, visit the Admin Center for Dynamics 365 online, Solutions page, to install the update. For details, refer to How to Install, Update a Preferred Solution

Field Service enhancements (v7.4.0.74)

Bugs

  • Fixed: Error message editing any field on Product Receipt when changing PO status to Products Received
  • Fixed: Purchase Order Product Price calculation wrong by factor of 10000 when using standard Euro currency format settings
  • Fixed: End Date on Agreement is incorrectly updated when start date is in DST but end date is not
  • Fixed: Address Suggestion dialog shows up after save of User record on D365 online org
  • Fixed: Address Not Found error when selecting address from suggestion popup
  • Fixed: [French/iPhone] [Mobile Offline]: Duration field is not getting auto-populated after filling the Start time and End time fields on Resource booking form
  • Fixed: [iPhone] [Mobile Offline]: Pricing tab shows blank form while creating a work order product
  • Fixed: [Offline only] - "Estimation Duration" is not set to default value "zero" on Work Order Incident form
  • Fixed: [Unified Client] Documents not available under related even when entity is document management-enabled

Project Service Automation (v2.4.0.48) Enhancements

Bugs

  • Fixed: [Unified Client Only] Not able to Add Products to Invoice when all solutions are installed
  • Fixed: Upgrading the Project Service solution to version 2.2.0.76 on a D365 9.0 org left the three attributes of the msdyn_approval entity listed below incorrect
    • regardingobjectypecode,
    • regardingobjectidname,
    • regardingobjectidyominame
  • Fixed: Contract confirmation is not correctly processing unbilled actuals coming from journals; the expected behavior is that when an actual is from a journal line, there should be no reversal of the actual when the contract is confirmed
  • Fixed: Contract confirmation is not correctly processing unbilled actuals created after a credit note is processed
  • Fixed: Estimated cost on project entity page is not taking expense estimates into account
  • Fixed: Resource Requirement created from the Generate Project team with tasks with decimal hours have off-by-one end date
  • Fixed: [DKK currency issue] The Total Sales Price does not appear accordingly with the decimal precision

Universal Resource Scheduling Enhancements

NOTE: Improvements and bug fixes for Universal Resource Scheduling apply to Field Service and Project Service Automation, as well as to other schedulable entities in the Sales or Service applications.

Improvements 

  • New User Experience for Resource Sorting on Schedule Board
  • Added Day of week to horizontal hourly Schedule Board
  • Added date to hourly vertical Schedule Board
  • Substitute onsite intraday bookings
  • Option to remove hatch design from partially available cells
  • Automatically slide up booking requirement grid when user clicks book on schedule board toolbar
  • Inform user when there are overlapping bookings when using Schedule Assistant
  • Automatically calculate travel using mathematical calculation if default mapping service (Bing) cannot be reached
  • Specify Pattern UI now uses the same design as the demand panel on schedule board
  • UI improvements in top section of specify pattern experience
  • Change the column width in specify pattern experience

 

 Bugs 

 

  • Fixed: Lack of format validation on the time window start and time window end fields on the resource requirement and work order entities
  • Fixed: Issues when switching between horizontal and vertical hourly schedule boards, when a resource name is populated in the schedule board resource search box
  • Fixed: Schedule Board error messages displayed on Schedule Assistant toolbar now fade if there are multiple errors
  • Fixed: Issues creating booking alerts if user is under a child business unit
  • Fixed: When creating bookings from booking form, if a booking rule is triggered and the user opts not to create booking, booking is no longer created
  • Fixed: Schedule Board colors now fallback to use the colors of the default Schedule Board if there is no color set
  • Fixed: Issues with loading default saved filters on Schedule Board.
  • Fixed: Expand all and collapse all resource button no longer remains in a selected state after being clicked
  • Fixed: When creating a time off record in the calendar for a few days, and then adding a one-day schedule with working hours on top of one of those days, resource availability will now be properly reflected
  • Fixed: When in Specify Pattern experience, if user changes the first or last requirement detail to be earlier or later, but it still falls within the original requirement dates, the requirement from and to dates are now properly updated
  • Fixed: Multiple localization fixes
  • Fixed: In the booking requirements grid, users can now filter to show requirements that do not have a value in the territory field
  • Fixed: Issues opening booking requirements panel on Schedule Board when using touchscreen devices on Firefox
  • Performance Improvement: Improved performance when expanding multiple resources at once on the Schedule Board
  • Usability Improvement: Tooltip no longer shows if the entire error message fits in the error notification bar

 

For more information:

 

Feifei Qiu

Program Manager

Dynamics 365, Field & Project Service Team


Working with the WordWaiter class in Microsoft Dynamics NAV


Since Dynamics NAV 2017 Cumulative Update 13 – and also in Dynamics NAV 2018 – the Office integration experience has been enriched by a new interop assembly to handle non-modal calls to Microsoft Word and retrieve document close events.

The .NET types added to the Microsoft.Dynamics.Nav.Integration.Office DLL are the following:

Microsoft.Dynamics.Nav.Integration.Office.Word.WordDocumentWaiter
Microsoft.Dynamics.Nav.Integration.Office.Word.WordDocumentWaiter+DocumentCloseHandlerEventArgs

A typical example where this could be used is when multiple documents need to be open at the same time to be checked and/or reviewed (revision) by users. This task might be interrupted by, for example, a call or other activities in between. If used modally, or in any other context scenario, users would be forced to close the document(s) to go back to client activity or run another client session in parallel.

The WordDocumentWaiter paradigm helps register an open document for the close event and expose it to the client in order to take action accordingly. Below is a practical and simple proof of concept.

  1. Create 2 NEW Word document files in, e.g., the C:\TEMP folder and name them:
    - ALPHA.docx
    - BETA.docx
  2. Import the page object that we attached as content to this blog post. Save, compile, and run it.
  3. In the FileName field, type the path of the first file (C:\TEMP\ALPHA.docx) and click Register File. The Word document specified will open. NOTE: the page must not be closed.
  4. In the FileName field, type the path of the second file (C:\TEMP\BETA.docx) and click the Register File action. The Word document specified will open. NOTE: the page must not be closed.
  5. Now you can go back to the client and perform whatever activity you like, such as searching for a customer, browsing the item list, etc.
  6. Go back to one of the 2 open documents (e.g. ALPHA.docx), make some modifications to the document (this is not mandatory), and then save and close the document. The Test Word Close Events page will trigger the close event and report the following
  7. Back in the remaining document, BETA.DOCX, make some modifications to the document (this is not mandatory) and then save the document with a different name (Save As), e.g. C:\TEMP\GAMMA.docx, and close it. The Test Word Close Events page will trigger the close event and report the following

WORDWAITER USAGE CONSTRAINTS

  • The DotNet data types that are used with Microsoft Word must be declared as GLOBAL variables in the page with [RUNONCLIENT]. Therefore, it is not possible to move or centralize their declaration within a codeunit.
  • Class instance must also enable the events [WITHEVENTS].
  • Any WordDocument variable declared in the local function scope will be disposed when the function returns, and the application cannot keep the function running, as the client would be inaccessible while the action is running. In other words, the WordDocument object(s) in AL must be local variables, as they will be invalidated the next time you register a document. A global cannot be used within a multiple-document context.
  • The OriginalFileName member contains the name that was registered. Each COM Word document has an internal, immutable DocID which you can link to a file name. WordWaiter uses a dictionary with the registered original file name values and DocID as the key. The event handler is designed to handle registered files only and will remove the keys when their associated files are closed (see the conceptual sketch after this list).
    NOTE: DocID is unique PER DOCUMENT. This means that if you make a COPY of a document, the DocID is MAINTAINED; hence if you have document ALPHA and a copy of document ALPHA, these have the same DocID and using OriginalFileName might become difficult. This is a constraint that must be taken into consideration during development.
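The following is a purely conceptual C# sketch of that dictionary bookkeeping, with illustrative names only; it does not use the actual Microsoft.Dynamics.Nav.Integration.Office types or the Word COM API:

using System;
using System.Collections.Generic;

// Illustrative only: models the "DocID -> original file name" bookkeeping described above.
public class DocumentCloseWatcher
{
    private readonly Dictionary<string, string> registered = new Dictionary<string, string>();

    // Raised with the originally registered file name when a registered document closes.
    public event Action<string> DocumentClosed;

    public void Register(string docId, string originalFileName)
    {
        registered[docId] = originalFileName;
    }

    // Invoked when a close notification arrives; unregistered documents are ignored,
    // and the key is removed once its document has closed.
    public void OnClose(string docId)
    {
        if (registered.TryGetValue(docId, out string originalFileName))
        {
            registered.Remove(docId);
            DocumentClosed?.Invoke(originalFileName);
        }
    }
}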

NAV2017.Test-Word-Close-Events

NAV2018.Test-Word-Close-Events

These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.

Duilio Tacconi (dtacconi)
Microsoft Dynamics Italy
Microsoft Customer Service and Support (CSS) EMEA
Thanks to Niels-Henrik Sejthen (Server Backend team MDCC Lyngby)

This maths teacher’s mistakes help high-achieving students to learn


The Microsoft Innovative Educator (MIE) Expert program connects high-potential teachers across Australia – and from around the world – to share their vision and expertise in applying technology to the classroom. We caught up with Jarrad Strain, Maths Teacher at Perth Modern School, to hear about how digital technology has helped extend his gifted and talented pupils – and prepare them for a high-performing future in STEM.

Jarrad, how have you used technology to engage your students in the maths curriculum?

Perth Modern is an academically selective school, so most of our students already have a pretty high level of motivation when it comes to their studies. When I introduce digital technology to my classes, I try to do so in a way that’s both creative and open-ended. Last year, for example, we designed sets of “misleading graphs” using Excel, which gave the kids a hands-on experience of how certain charts or visuals in the media can be designed to give a less-than-honest impression of the facts. The students also got together in small teams to create a class mascot using (3D design software) TinkerCAD, then vote on their favourite submissions through a combination of Forms and Excel. A lot of the time, I’ll give them a project and they’ll take it to lengths that surprise even us teachers!

It sounds like your lessons teach more than just mathematical principles as well.

I’d have to agree. Our students tend to excel in their academic abilities, but they need training in collaboration and critical thinking as much as anyone of their age. With our mascot-designing project, for example, each group presented using either PowerPoints or videos that they recorded, then shared with the rest of the class via URL. Not only did it encourage them to receive feedback on, and improve, their designs, it also meant far less “dead time” spent passively listening to everyone else’s presentations! You could just click on the link, watch, and comment in your own time. At the same time, I have visibility over all the feedback between groups, which lets me guide them on how to provide constructive feedback…and why some suggestions may not be so appropriate.

And in terms of critical thinking?

I try to align my lessons with the real world as much as possible. A lot of our students are doing maths at levels beyond their current year, so more advanced tools like 3D modelling come in handy for testing skills. With one class, I challenged their trigonometric skills to find the height of the belltower in one of our old buildings: you couldn’t just do it by measuring the base and angle of inclination. They ended up using isometric and orthogonal drawings of the structure, modelling the building, and finally demonstrating their thought process through a combination of PowerPoint and Morph. In fact, 3D models had just become available in Morph, and the kids jumped on the update as soon as they realised what it could do.

Do you ever find it challenging to keep up with such avid learners?

I think the secret is admitting that you don’t know everything. In fact, I often use my mistakes as a teaching tool! In my OneNote notebooks, I share various exercises with my senior students that not only show my work, but also errors that I’ve made, as well as the corrections and how I’ve improved on my own processes. My “whiteboard” now is my screen-casted Surface, which lets the kids see everything I write down in real time, including any uncensored slips that I make. Sometimes I stage these mistakes so students have an example to learn from, but at other times I’ve genuinely overlooked something.

When as a teacher, you admit you’re not infallible, you also give your students a healthier perspective on failure. Showing them my mistakes helps them see that what matters isn’t so much getting the right answer every time, but constantly learning and refining our approach to the problem at hand. And none of us will ever know everything! Ironically enough, I discovered OneNote after several Year 10 students introduced it to me, and now I’m one of its biggest proponents in the school.

 

How are you using OneNote in the classroom?

I’m encouraging other teachers to adopt class-based notebooks to share their notes and work more closely together. Not just in Perth Modern, but throughout Western Australia. Being part of the MIE Expert program has proven to me the value of close digital collaboration–bringing the keenest, most enthusiastic teachers together to pool their expertise. For example, I’ve been trying to encourage the girls in our school to take up STEM projects, but it’s been hard getting equal gender representation in projects.

I’m also extremely excited about using OneNote to help our students build up their electronic portfolios. We’ve been encouraging them to not only capture their final designs or innovations, but the entire process of problem solving and prototyping that goes on behind the scenes.

Why is it valuable? This sounds like it has implications well beyond high school exams.

The real goal is to prepare them for future STEM study. Some of our highest-performing students, they apply for scholarships at international universities and run up against requests for a portfolio of work. If we can use OneNote to prepare them now, they’ll have no issues later. Even in Australia, some universities now say that if you miss your ATAR cut-off, show your portfolio and you’ll still be considered. But the key to a successful portfolio isn’t the work you capture, it’s the thinking you show behind it. OneNote has made capturing those processes much simpler for me, and I’m hoping my experience will help many of our kids find their highest calling in STEM as well.

 

 

Connect with Jarrad Strain on Twitter | LinkedIn

For more information on Perth Modern School, visit http://www.perthmodern.wa.edu.au/

 

 

 

Our mission at Microsoft is to equip and empower educators to shape and assure the success of every student. Any teacher can join our community and effort with free Office 365 Education, find affordable Windows devices and connect with others on the Educator Community for free training and classroom resources. Follow us on Facebook and Twitter for our latest updates.

Reimagining Chemistry with Minecraft: Education Edition


Aussie Science Teachers - it's your time to shine in Minecraft: Education Edition!

With the recent introduction of the Chemistry Update for Minecraft: Education Edition, there has been no better time to expose your Chemistry students to a new dimension of learning. Discover the building blocks of matter, combine elements into useful compounds and Minecraft items, and conduct amazing experiments with new lessons and downloadable worlds from the Minecraft: Education Edition Community.

With a global focus on Science, Technology, Engineering and Mathematics (STEM) in schools, the Minecraft Education Team have been inspired to integrate Chemistry features and functions into Minecraft: Education Edition. In this update, students can experiment with materials using the Lab Table, Compound Creator and Element Constructor. These features offer a new space to inquire, through questioning and predicting, planning and conducting their own safe chemistry investigations in Minecraft.

"The Minecraft Education Team utilised advisors from Secondary and University Chemistry Educators and Chemical Engineers to represent accurate science while also creating engaging in-game experiences for learners." Neal Manegold, Director, Minecraft Education

So how does this stack up in the Australian Context?

In Australian Science classrooms, students will navigate, discover and learn science through Science Inquiry, Science Understanding and Science as a Human Endeavour. Minecraft: Education Edition and the new Chemistry Update can support Australian Science teachers and engage students across these three key ideas of the Australian Science Curriculum.

Chemistry in Minecraft allows teachers to introduce chemistry concepts without the costs of lab equipment, in the engaging Minecraft world that will inspire more girls and boys to explore the subject. The Chemical Science Resource Pack found at the Minecraft Educator Community is a great place to start. It provides a number of predesigned lesson plans and downloadable worlds to support students in developing their understanding of key Chemistry ideas that can be applied to the real world.

Science Inquiry

Safety in the Science Lab:

In Australian schools students are typically introduced to 'real' chemistry labs when they start Secondary school. When inquiring in science and planning/conducting investigations, safety and risk assessments are of the highest priority. Students in a Chemistry lesson are at a higher risk of danger and we certainly don't want our students learning from their mistakes.

Immerse your students in a fun and engaging Minecraft world for learning about science safety and safe lab practices and protocols. The Minecraft Science Lab presents students with a lab filled with dangers (from water on the floor, to poisons left on surfaces, to edible materials out in the open). Working collaboratively in the Chemistry Lab, students can use Chalk Boards to design safety protocols and create safe workspaces before conducting their science experiments.

10 x chemistry lessons in one Minecraft World - starting with creating a safe science lab. Download the world and start creating a safe chemistry lab here.

Planning & Conducting Scientific Investigations:

When planning and conducting scientific investigations, from Year 3 to Year 10, students decide which variables are to be changed and measured in fair tests, and observe, measure and record data using digital technologies (ACSIS055). Students can use the new Minecraft Lab Table to design fair tests and observe, measure and collect data. Have your students use Minecraft blocks to create visual representations of their findings, take a snapshot using the in-game camera/portfolio tools to process and analyse their data, evaluate their evidence, and communicate their findings using a range of text types and modes (ACSIS110).

Scientific Method:

Exploration in Minecraft can lead students to develop big questions around Chemistry, Biology and Physics. When creating in Minecraft, students build a broad set of integrated thinking skills through experimentation and play, trial and error, testing and hypothesising, analysis and evaluation. The scientific method is a continuous cycle, and through The Scientific Method lesson, students will design experiments in Minecraft and keep accurate records.

Science Understanding:

In Chemistry, we help students make connections between the composition and behaviour of substances and how new substances are produced. They build an understanding that all natural and synthetic materials are composed of chemicals, be they elements, molecules or compounds. Matter in our world can be categorised into states of matter, and changes in state can be reversible or irreversible.

States of Matter:

In Year 5, students understand how solids, liquids and gases have different observable properties and behave in different ways (ACSSU077). What is the World Made of? is a Minecraft lesson that can help students understand that natural and synthetic objects are all made of elements and that chemicals can be made from natural resources. Through Minecraft, students can explore liquids, solids and gases and use the Material Reducer to expose what Minecraft materials and matter are made of. The Material Reducer reduces a block to its component elements, providing a useful way to explore the elements that make up our environment.

Chemical Compounds and Mixtures:

In Year 8, students explore the properties of different states of matter (ACSSU151) and explain the differences between elements, compounds and mixtures (ACSSU152). The Compound Creator, together with the outline in the Properties of Matter lesson found at http://education.minecraft.net, encourages students to experiment by combining elements into over 30 compounds.

Elements and Isotopes:

The Element Constructor is one of the new crafting tools included in the Chemistry Resource Pack. In Year 9, students develop a deeper understanding of atomic structure and that atoms are composed of protons, neutrons and electrons (ACSSU177). In Minecraft, students can use the Element Constructor to build 118 elements based on the number of protons, neutrons and electrons.

For a full list of elements and isotopes available in Minecraft: Education Edition click here.

The Periodic Table:

The Chemistry Update allows students to build their own periodic table or use their imagination to come up with a new way to arrange the elements based on the elements' characteristics.

Endless Experiments and a world of science ready to be created:

Minecraft: Education Edition is not just a game. It's a platform that can ignite curiosity and have students think critically to solve complex problems and make connections between what they learn and the real world. The only limit is your imagination.

Your Windows 10 version of Minecraft: Education Edition will automatically update to include the chemistry features; for macOS, please head to Education.minecraft.net to get the latest update.

Get started and try out the new Chemistry Update for Minecraft: Education Edition now!

Download your free copy of the Minecraft: Education Edition LAB guide: https://education.minecraft.net/wp-content/uploads/ChemistryLab_Journal.pdf

Targeting a specific version of the .NET Framework


I find numerous opinions and understandings about side-by-side and in-place installations of the .NET Framework.  Here is my favorite explanation of this “.NET Versioning and Multi-Targeting - .NET 4.5 is an in-place upgrade to .NET 4.0”.  I also link to that same article in one of my IIS labs here, “Lab 5: Basic and Advanced Application Pool Settings”.  Check out these other articles I wrote as well about .NET Framework versions.

Here is my hypothesis: “if the .NET version you want to target exists in the following directory structure and you target it in your web.config file, your application will run using those binaries.”  What I mean is, regardless of what you select from a drop down, whether in the Azure portal or in the IIS Management console, the targeted version in your web.config is the code base in which your code is compiled and executed.

C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework


Figure 1, .NET Framework reference assemblies, how to target a specific version

On an Azure App Service, the same are found in this directory.

D:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework


Figure 2, .NET Framework reference assemblies, how to target a specific version

The way I will test my hypothesis is by creating an ASP.NET application that targets the .NET Framework 4.7.1, since this exists on both my client machine, Figure 1 and on the Azure App Service, Figure 2.  I will code something available only in >= 4.7, test it, then change the target to a version which does not support that feature.  Notice in Figure 3 that when creating my ASP.NET application I am targeting the .NET Framework version 4.7.1.


Figure 3, .NET Framework reference assemblies, how to target a specific version

I used pattern matching which is a new C# 7 feature available only in .NET version 4.7 or greater.  I wrote about that here “How to enable C# 7 in Visual Studio 2017” and I used the same method.  I passed the method a Circle class which fell through to the default: switch.

public class Circle
{
    public double Radius { get; }
    public Circle(double radius)
    {
        Radius = radius;
    }
}
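For reference, the kind of C# 7 type-pattern switch involved looks roughly like the sketch below. Square and Rectangle are hypothetical types added here for illustration; only Circle comes from the snippet above, and passing a Circle matches none of the cases, so it falls through to default:

public class Square { public double Side { get; set; } }
public class Rectangle { public double Width { get; set; } public double Height { get; set; } }

public static string Describe(object shape)
{
    switch (shape)
    {
        case Square s:
            return $"Square with side {s.Side}";
        case Rectangle r:
            return $"Rectangle {r.Width} x {r.Height}";
        case null:
            return "No shape supplied";
        default:
            // A Circle (or any other unmatched type) ends up here.
            return $"Unrecognized shape: {shape.GetType().Name}";
    }
}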

I compiled it and it worked when I targeted 4.7.1, but the odd thing was that it also worked when I targeted 4.5.2.  So that blew my hypothesis out of the window, for a moment.  You can see here (http://foursevenone.azurewebsites.net/) that I target 4.5.2 and the code, which was supposed to work only in 4.7+, worked as well.

Scratching my head and poking my brain, I quickly came to this conclusion.  In Figure 3, I targeted my ASP.NET application at .NET Framework 4.7.1, and by doing that, Visual Studio correctly created my default application bundle and loaded all the 4.7.1 assemblies and dependencies into the Bin folder.  So regardless of my targetFramework change to 4.5.2, the code still ran against the 4.7.1 version and worked.  I conclude that changing the targetFramework to a lower .NET Framework version doesn’t break the application.

When I then created another ASP.NET application that targeted 4.5.2, Figure 4, as expected I could not code and use the pattern matching feature.  And I even got a great error when I tried compiling it, Figure 5.

I also compared the assembly and compiler versions in the Bin folder of each project and they were indeed different versions.


Figure 4, .NET Framework reference assemblies, how to target a specific version


Figure 5, .NET Framework reference assemblies, how to target a specific version

The hypothesis is true because I targeted 4.7.1 in my first application and it ran.  In the second application I targeted 4.5.2 and it ran.

What I then did was deploy my 4.5.2 ASP.NET application here (https://fourfivetwo.azurewebsites.net/), and without the 4.7.1 code it ran perfectly.  However, when I copied the pattern matching code from the 4.7.1 project into the code on the 4.5.2 web app, it crashed, because the contents of my Bin directory contained 4.5.2 assemblies and compilers, Figure 6.


Figure 6, .NET Framework reference assemblies, how to target a specific version

Simply setting the targetFramework to 4.7.1 in my web.config did not resolve the execution problem.  I would need to update the assemblies and dependencies in my application Bin directory to make the code compile and work as expected.  However, this did prove, at least to me that you can target a specific version other than the most current or version shown in the drop-down on an Azure App Service or from the drop-down in the Application Pool settings in IIS.

In conclusion and summary, you can target whatever version of the .NET Framework you want so long as that version exists on the server where the application is deployed.

The Microsoft Artificial Intelligence Landscape – And What to use When


Artificial Intelligence (AI), at its broadest definition, is simply “a machine that can act using human-style reasoning or perception”. Of course, the technologies used to enable that definition are far from simple themselves.

Artificial Intelligence isn’t new – I worked with “Good Old Fashioned AI” (that’s a real thing) back in the late 70’s and early 80’s. The promise of AI was higher than the technology allowed in those days, and so the AI field went dormant in the public’s mind. It had a resurgence a few years later, and now has returned with far greater options than we have ever had before. Breakthrough algorithms, understandings, and technologies now allow a far deeper use of AI – I’ll cover a few of these uses in another article. The hype this time, however, is (mostly) real. You’re using AI every day, and you can embed it in your applications for your users as well.

For the general public, AI is somewhat of a “black box”, with image recognition being the same level of difficulty to understand as language translation or even interpretation. But for the technical professional, some problems are already solved, and you can simply leverage them, and others you have to write yourself. In general, the more customization you need, the more you’ll need to write for your solution.

Define the Opportunity

The first thing to do of course is to define what opportunity you are solving for.  For instance, you might have a need to estimate the repair costs of a car accident, using only a photo uploaded from your client from the web. Or perhaps you want to translate between languages so that your firm can read a document sent from another country. You may have a need to analyze that document to find its sentiment or general meaning, or you may want to generate an automated response from the facts and data it contains. You may want to do route prediction for expense mileage tracking, or fill out a set of values using only an example. (You can find a few real-world examples for AI and Machine Learning here)

From No-Code to Deep Learning

Once you’ve defined the opportunity, the next step is to start at the very highest level of leveraging that you can. For instance, Microsoft Excel also has a feature (called Flash Fill) that will transform columns of data based only on an example.  In the case of mileage and route tracking, that’s already built into an application – so you could simply use that.

However, if you want to recognize road signs along the way, you could move up to an image recognition service. If you need to further process that image to read the text, there’s a service for that as well. If you want to determine if the sign is overhead or on the side of the road, you may need to customize that service further. Taking this example even further, if you want to determine the speed of the vehicle using only images of road signs as you pass by them by analyzing the angle of the edge of the sign, you may need to write some Machine Learning code to do that.

The point is to follow the progression of customization that you need. If the problem is solved, use that solution. If you need to customize the solution, that’s where you start to further leverage more tools.

Ensemble Approaches

Another point to keep in mind is that you can combine applications, services and custom-developed AI into a much larger and more complex solution. The “Intelligent Kiosk” is one such example, combining image recognition, handwriting and voice interactions, and even bots to form a complete solution.

The AI Landscape

With that background set, let’s take a look at a few examples of the options you have for working with AI in a solution.

Embedded Artificial Intelligence

You may already be using AI in your day to day work. Office 365 applications such as Microsoft Word and PowerPoint have adaptable suggestions and critiques to make your communications clearer.

PowerPoint also has a Presentation Translator, an add-in that can bring real-time captioning to your presentations, and even allows those captions to be translated to another language. Flash-Fill in Excel is an algorithm that examines an example of what you want and makes those conversions for you. Microsoft Dynamics 365 has built-in insights to Sales, Finance, HR and Customer Relationship Management embedded within the solution.

So what is the technical professional’s responsibility here? Education. I can’t tell you how many times I’ve shown companies the features listed here when they describe what they want to accomplish – only to have them say “We had no idea it already did this”.  No, you probably don’t have time to create or deliver that training, but we’ve done that for you. There are hundreds of training options built into the products, in the documentation, and in free training seminars from Microsoft – a quick web search will find them. Speaking of which, Microsoft Bing also has AI built in for searches, predictions, Autosuggest, image searches and more, and you can add those features to your applications using Bing Search as an AI API call.

Leveraging Artificial Intelligence in a Solution

Your next option is to use various Cognitive Services in your solution. These are cloud-based services that you simply call in an application (no AI algorithms to write, just use the service) using Python, Java, C#, even cURL. Remember, you can leverage these together, using Vision to feed Language to feed Search and so on. The services span the vision, speech, language, knowledge, and search categories.
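As a rough illustration, calling one of these services usually amounts to a single authenticated HTTP request. In this C# sketch the region, API version, and image URL are placeholders/assumptions; check the current Cognitive Services documentation for the real endpoint and request shape:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class VisionDemo
{
    static async Task Main()
    {
        // Placeholder endpoint and key; both depend on your own Cognitive Services resource.
        var endpoint = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze?visualFeatures=Description";
        var key = Environment.GetEnvironmentVariable("VISION_KEY");

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
            var body = new StringContent("{\"url\":\"https://example.com/road-sign.jpg\"}",
                                         Encoding.UTF8, "application/json");
            var response = await client.PostAsync(endpoint, body);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}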

Customizing Artificial Intelligence for a Solution

While services like the Vision API do most of what organizations need, there are times when you need to go a bit further and train or customize a service, again without having to know a lot about various Machine Learning and Artificial Intelligence algorithms and programming. This helps you get started faster, and as we make the services better, your apps automatically get better.

Here are a few of the options when you need one step above a fully automated API call:

Writing Your Own Artificial Intelligence Service

At times you’ll find there are certain requirements that a service just won’t handle on its own, nor is it customizable enough to do what you need. When I do an architecture, I start with as automated as I can, and then get “pushed” to the next level based on requirements.

When you do hit that level, there are multiple options still available. Microsoft tools and platforms support all major Data Science languages, such as Python, R, Node, Java, C++, C#, F#, SQL and more. These platforms work with TensorFlow as well as newer toolkits and integrate with Open-Source and commercial software platforms. Not only that, the ecostructure is tied with a formal process you can use for project creation through DevOps.

Hybrid Solutions

There are on-premises solutions as well – and these can also be used in a Hybrid solution so that you can work with local data or for compliance reasons:

Learning and References

There’s a full whitepaper you can download here on these options. And of course, we offer learning paths, a full AI School, and Machine Learning and AI programs on EdX.

Strategies for realizing Cost Savings in Azure


Contents

Introduction
Choosing the appropriate Compute Option in Azure
Planned (Pro-Active) approach to Compute Optimization
Laying down the scaffolding/structure so VMs are provisioned with cost optimization guidelines
Governance
Naming standards
Using Role Based Access Control (RBAC)
Using Azure Policies
Review Resource Group (RG) Costs periodically to mine opportunities for cost reduction
Making use of Tactical platform capabilities
Azure Hybrid Use Benefit
Use Reserved Instances (RI)
Shutdown
Right Sizing
Azure Low Priority VMs
Scaling (Auto)
Performance optimization/performance POV
Choose the right storage
Choose the right type of VMs
Choosing the appropriate platform construct
REACTIVE Approach - things to do post deployment
Azure Advisor
Cloudyn type of tools
Check if deployments can be converted for Reserved Instance (RI)
Check if deployments can be converted for AHUB
Study the entities within the RG
Azure Compute VMs – Capability and Design Guidance
VM Capability Guidance
Azure VM Design Guidance
Usage Analysis Tools

Introduction

 

This document provides a framework/structured approach to consuming Azure Compute resources. At a high level this is divided into (a) a proactive approach and (b) a tactical/reactive approach to the topic at hand. It is worth noting that, beyond the proactive/reactive approaches described here, studying and arriving at the appropriate application architecture with the platform-native constructs in mind is equally if not more important for optimized usage of all resources (human, time, DevOps cycles, compute costs, etc.). Of course, this should be done much earlier in the life cycle, instead of merely lifting and shifting and then looking for opportunities to save costs. Another important dimension to Azure compute cost savings is taking an organizational view of the Azure services consumed and pooling resource usage across applications. Classical examples of this are:

(a)    using SQL Elastic pools if multiple applications use Azure SQL Services

(b)    using fewer App Service Environments (ASE) but packing them to the maximum density with 100s of Azure Web Applications per ASE

 

Choosing the appropriate Compute Option in Azure

With a bewildering choice of options, the challenge is to figure out the appropriate, if not the most optimal, compute choice within the Azure platform for the application/solution that you want to deploy. The compute options in Azure are wide and varied and cater to a wide variety of application, or rather processing, needs. For instance, what compute should one choose for a CPU-intensive, or say a memory-intensive, application? In these cases, the choices are easy. What should one choose for web applications? Again, there are easy choices. But what should one choose for a combination of requirements such as (a) a highly decomposed and componentized application, (b) sudden bursts, (c) high velocity for the development team when pushing code, (d) where the underlying requirement is to have the appropriate compute (not the old-style VM construct) for the task at hand, (e) support for cloud-native constructs, (f) smooth DevOps, and (g) fault tolerance and resilience? This is not an easy task, and both infrastructure architects and application architects should take the time to study and understand the platform capabilities and utilize the correct Azure platform constructs for the solution deployment. Taking the easy approach of deploying to VMs should be the last resort and avoided where possible, to save time and money in the long run.

The following link captures some of the well thought out decision criteria depending on the task or tasks the application or solution needs to accomplish.

Criteria for choosing an Azure Compute Option

Planned (Pro-Active) approach to Compute Optimization

Laying down the scaffolding/structure so VMs are provisioned with cost optimization guidelines

These are things mature organizations do so that end consumers of Azure resources stay within well-established guidelines (guard rails). The scaffolding is essentially a framework that combines governance rules (enforceable) and best practices. The graphic below captures the essential components used to build such a scaffolding; bear in mind the evolving nature of the platform, as newer constructs and capabilities get added.

 

 

Governance

When moving to Azure, you must address the topic of governance early to ensure the successful use of the cloud within the enterprise. Unfortunately, the time and bureaucracy of creating a comprehensive governance system means some business groups go directly to vendors without involving enterprise IT. This approach can leave the enterprise open to vulnerabilities if the resources are not properly managed. The characteristics of the public cloud - agility, flexibility, and consumption-based pricing - are important to business groups that need to quickly meet the demands of customers (both internal and external). But, enterprise IT needs to ensure that data and systems are effectively protected.

In real life, scaffolding is used to create the basis of the structure. The scaffold guides the general outline and provides anchor points for more permanent systems to be mounted. An enterprise scaffold is the same: a set of flexible controls and Azure capabilities that provide structure to the environment, and anchors for services built on the public cloud. It provides the builders (IT and business groups) a foundation to create and attach new services.

The Azure enterprise scaffolding is based on practices we have gathered from many engagements with clients of various sizes. Those clients range from small organizations developing solutions in the cloud to Fortune 500 enterprises and independent software vendors who are migrating and developing solutions in the cloud. The enterprise scaffold is "purpose-built" to be flexible to support both traditional IT workloads and agile workloads; such as, developers creating software-as-a-service (SaaS) applications based on Azure capabilities.

The enterprise scaffold is intended to be the foundation of each new subscription within Azure. It enables administrators to ensure workloads meet the minimum governance requirements of an organization without preventing business groups and developers from quickly meeting their own goals.

Governance is crucial to the success of Azure. This article targets the technical implementation of an enterprise scaffold but only touches on the broader process and relationships between the components. Policy governance flows from the top down and is determined by what the business wants to achieve. Naturally, the creation of a governance model for Azure includes representatives from IT, but more importantly it should have strong representation from business group leaders, and security and risk management. In the end, an enterprise scaffold is about mitigating business risk to facilitate an organization's mission and objectives.

The following image describes the components of the scaffold.

The foundation relies on a solid plan for departments, accounts, and subscriptions. The pillars consist of Resource Manager policies and strong naming standards. The rest of the scaffold comes from core Azure capabilities and features that enable a secure and manageable environment.

Naming standards

The first pillar of the scaffold is naming standards. Well-designed naming standards enable you to identify resources in the portal, on a bill, and within scripts. Most likely, you already have naming standards for your on-premises infrastructure. When adding Azure to your environment, you should extend those naming standards to your Azure resources. Naming standards facilitate more efficient management of the environment at all levels.

For naming conventions:

Review and adopt where possible the Patterns and Practices guidance. This guidance helps you decide on a meaningful naming standard.

Use camel casing for names of resources (such as myResourceGroup and vnetNetworkName).

Note: There are certain resources, such as storage accounts, where the only option is to use lower case (and no other special characters).

Consider using Azure Resource Manager policies (described in the next section) to enforce naming standards.

 

Using Role Based Access Control (RBAC)

RBAC stands for Role Based Access Control, whereby Azure provides various built-in roles for the different namespaces and the objects within them. Each role has a name, a set of allowed operations on those namespaces, and a set of operations that are not allowed. The general rule is that anyone using Azure in any capacity should be placed in clearly understood, well-defined roles that allow them to do only the specific activities they need to perform, nothing more and nothing less. As a best practice, every end user and administrator in Azure should be limited to the set of roles that is absolutely required. The following links are very handy in this regard, and a minimal role-assignment sketch follows them.

  1. What is RBAC?
  2. Using RBAC to manage access to resources
  3. RBAC – Built-in roles
  4. RBAC – Custom roles
  5. Assign custom roles for internal and external users
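The links above describe the concepts; as a rough illustration, the AzureRM cmdlets used elsewhere in this document can grant a built-in role at resource-group scope. The user, role and resource group names below are hypothetical.

# Review the built-in roles first, then grant the least-privileged one that fits
Get-AzureRmRoleDefinition | Select-Object Name, Description
New-AzureRmRoleAssignment -SignInName "jane.doe@contoso.com" -RoleDefinitionName "Reader" -ResourceGroupName "myAppResourceGroup"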

Using Azure Policies

Azure Policy is another capability of the Azure platform whereby an enterprise can create various policies and then deploy them at an appropriate scope, such as a subscription or a resource group. Based on the scope at which a policy is deployed, incoming ARM-based requests to Azure are evaluated to ascertain whether they conform to the policies; requests that do not conform are not honored. The following are some handy links for understanding and using Azure Policies, and a minimal policy-assignment sketch follows them.

  1. Overview of Azure Policies
  2. Azure Policy Definition structure
  3. Azure Policy Initiatives
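As a minimal sketch (the resource group name and allowed locations are assumptions), a built-in policy definition can be assigned at resource-group scope with the AzureRM cmdlets:

$rg = Get-AzureRmResourceGroup -Name "myAppResourceGroup"
# Pick the built-in "Allowed locations" definition and assign it to the resource group
$definition = Get-AzureRmPolicyDefinition | Where-Object { $_.Properties.displayName -eq "Allowed locations" }
New-AzureRmPolicyAssignment -Name "allowed-locations" -Scope $rg.ResourceId -PolicyDefinition $definition -PolicyParameter '{"listOfAllowedLocations":{"value":["westeurope","northeurope"]}}'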

 

Review Resource Group (RG) Costs periodically to mine opportunities for cost reduction

The best practice in terms of application deployment is to deploy all the components of a solution into a single Resource Group. The costs for each RG can then be separated out and analyzed for the following (a minimal consumption-query sketch follows the list):

(a)    Relevancy – is this component still required? If not, then delete it.

(b)    Usage and load patterns – if the usage is low or the load handled by the component is very low (or conversely high), then consolidate multiple instances of such components into a smaller number of instances (or set scale up/down rules)
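A minimal sketch of such a review, assuming the AzureRM.Consumption module is available and using a hypothetical resource group name, pulls the last 30 days of usage and ranks resources by cost:

$usage = Get-AzureRmConsumptionUsageDetail -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date)
# Keep only records for one resource group, then total the pre-tax cost per resource
$usage | Where-Object { $_.InstanceId -match "/resourceGroups/myAppResourceGroup/" } |
    Group-Object InstanceName |
    ForEach-Object { [pscustomobject]@{ Resource = $_.Name; Cost = ($_.Group | Measure-Object -Property PretaxCost -Sum).Sum } } |
    Sort-Object Cost -Descending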

Making use of Tactical platform capabilities

Azure Hybrid Use Benefit (AHUB)

You can save up to 40 percent on Windows Server virtual machines with the Azure Hybrid Benefit for Windows Server. Use your on-premises Windows Server licenses with Software Assurance to save significantly on Azure. With this benefit, for each license the cost of the OS is covered (on up to two virtual machines), while you pay only for base compute costs. A minimal sketch of enabling the benefit on an existing VM follows the reference below.

Overview Reference - https://azure.microsoft.com/en-us/pricing/hybrid-benefit/
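A minimal sketch of turning the benefit on for an existing Windows VM (resource group and VM names are assumptions) sets the LicenseType on the VM object:

$vm = Get-AzureRmVM -ResourceGroupName "myAppResourceGroup" -Name "myWindowsVm"
$vm.LicenseType = "Windows_Server"   # signals that an eligible on-premises license covers the OS
Update-AzureRmVM -ResourceGroupName "myAppResourceGroup" -VM $vm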

Use Reserved Instances (RI)

As the name indicates Azure Reserved Instances are those VMs that you run, but which you have reserved for long term usage. The prices for Reserved VM Instances are significantly lower than non-reserved instances and so anytime there is a need to have long running VMs (say you are sure it is going to be used for a year or so) then you should opt to use Azure RIs to gain cost savings.

Significantly reduce costs—up to 72 percent compared to pay-as-you-go prices—with one-year or three-year terms on Windows and Linux virtual machines (VMs). When you combine the cost savings gained from Azure RIs with the added value of the Azure Hybrid Benefit, you can save up to 82 percent*. Lower your total cost of ownership by combining RIs with pay-as-you-go prices to manage costs across predictable and variable workloads. What’s more, you can now improve budgeting and forecasting with a single upfront payment, making it easy to calculate your investments.

Prepay for Virtual Machines with Reserved VM Instances

The link above provides further detail on Reserved Virtual Machine Instances.

 

Shutdown

Of all the strategies for saving costs on compute, shutting down a VM when it is not needed is the most effective. There are various methods to accomplish this; a minimal deallocation sketch follows the list.

  1. There is now a provision in the portal (or via ARM) to shut down a VM at a certain time every day. See if that will work in your scenario and set it for the VMs where it can be applied.
  2. Shut down a VM when a certain idle threshold is reached – say the VM CPU is 85% idle, or whatever is applicable to your specific application and its downtime requirements. Idle detection is part of the platform, and alerts can be generated which can then trigger automation to shut down the VM in question.
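A minimal sketch of the second approach, assuming a hypothetical autoShutdown tag and that the script runs under an Azure Automation schedule or similar, deallocates the tagged VMs so compute charges stop:

Get-AzureRmVM | Where-Object { $_.Tags["autoShutdown"] -eq "true" } | ForEach-Object {
    # Stop-AzureRmVM deallocates the VM; a deallocated VM no longer accrues compute charges
    Stop-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force
}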

Right Sizing

When provisioning VMs for various application workloads, it is worth spending time and effort on finding the correct metrics that the application generates under load: CPU load, IO throughput, memory requirements, and so on. It might take a day or two to get this information by testing/loading the application prior to migrating it. This will pay rich dividends when you deploy to Azure, as it will give you the correct size when you come to choose the VM family and size, instead of blindly choosing a size with no proper data behind it. A right-sized VM also has the advantage of not having to be constantly resized. A minimal resize sketch follows.
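As a minimal sketch (names and the target size are assumptions), resizing an over-provisioned VM once the load data justifies it is a small change; note that resizing restarts the VM:

$vm = Get-AzureRmVM -ResourceGroupName "myAppResourceGroup" -Name "myAppVm"
Get-AzureRmVMSize -ResourceGroupName "myAppResourceGroup" -VMName "myAppVm"   # sizes this VM can be resized to
$vm.HardwareProfile.VmSize = "Standard_D2_v3"
Update-AzureRmVM -ResourceGroupName "myAppResourceGroup" -VM $vm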

Azure Low Priority VMs

Low-priority VMs are allocated from our surplus compute capacity and are available for up to an 80% discount, enabling certain types of workloads to run for a significantly reduced cost or allowing you to do much more for the same cost. Hence, in addition to considering AHUB and RI, also consider using Low Priority VMs whenever it is appropriate. The tradeoff for using low-priority VMs is that those VMs may not be available to be allocated or may be preempted at any time, depending on available capacity.

The following links provide additional information on Low Priority VMs.

Low Priority VMs – Overview
Using Low Priority VMs in Batch

Low-priority VMs are offered at a significantly reduced price compared with dedicated VMs. For pricing details, see Batch Pricing.

Scaling (Auto)

The Azure platform has elastic and dynamic scaling baked in, enabling Azure resources such as compute to be dialed up and down in an automated fashion. The recommendation is to make use of this platform capability in your solution deployments and scale up and down on demand, instead of using fixed-size compute resources.

As a prelude to using auto-scaling for your compute deployments, always deploy VMs into availability sets or VM Scale Sets. Once the VMs are deployed into these platform constructs, leverage the auto-scale capabilities they provide. If you are deploying applications into VMs in the old-fashioned IaaS manner, then at a minimum the application must be stateless. A minimal autoscale-rule sketch follows the links below.

The following links provide various Scaling options.

Azure Platform Auto-Scaling capabilities
Using Azure Automation for Scaling VMs
VM Scale Set Overview
Scaling Low Priority VMs
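As a rough sketch of a CPU-based scale-out rule on an existing VM scale set (all names, IDs and thresholds are illustrative, and the parameter names follow the AzureRM.Insights cmdlets of this period; newer modules may differ):

$vmssId = "/subscriptions/<subscription-id>/resourceGroups/myAppResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSet"
# Scale out by one instance when average CPU over 10 minutes exceeds 70%
$rule = New-AzureRmAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId $vmssId `
    -Operator GreaterThan -MetricStatistic Average -Threshold 70 `
    -TimeGrain 00:01:00 -TimeWindow 00:10:00 `
    -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount -ScaleActionValue 1 -ScaleActionCooldown 00:05:00
$scaleProfile = New-AzureRmAutoscaleProfile -Name "cpuScaleOut" -DefaultCapacity 2 -MinimumCapacity 2 -MaximumCapacity 10 -Rule $rule
Add-AzureRmAutoscaleSetting -Name "myScaleSetAutoscale" -ResourceGroup "myAppResourceGroup" -Location "westeurope" -TargetResourceId $vmssId -AutoscaleProfile $scaleProfile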

 

 

Performance optimization/performance POV

Choose the right storage

This was already covered in detail in the documentation on Storage Optimization. Briefly, however, Azure provides the following types of storage capabilities from a replication perspective:

(a)    LRS – Locally redundant storage

(b)    ZRS – Zone Redundant storage

(c)     GRS – Geographically Redundant storage

(d)    RA-GRS – Read-access geographically redundant storage

Choose the appropriate storage based on application replication needs. Obviously, GRS types of storage cost slightly more.

Additionally, there are the options of

(a)    HDD [spinning old style disks] – cost effective choice for low throughput non-mission critical apps.

(b)    SSD [solid-state drives] – Premium disks for high-throughput I/O

(c)     Managed Disks – it is recommended all production and critical applications use managed disks.

Finally, the general recommendation is to use Managed Storage disks for all new VMs as they have significant advantages over non-managed disks and as well guaranteed SLAs besides not having to manage them as Azure manages them for the customer.
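A minimal sketch of creating a premium managed data disk and attaching it to an existing VM (names are assumptions; the -SkuName parameter is how later AzureRM releases express the Premium_LRS account type):

$diskConfig = New-AzureRmDiskConfig -Location "westeurope" -CreateOption Empty -DiskSizeGB 128 -SkuName Premium_LRS
$disk = New-AzureRmDisk -ResourceGroupName "myAppResourceGroup" -DiskName "myAppData01" -Disk $diskConfig
# Attach the managed disk to the VM at LUN 0
$vm = Get-AzureRmVM -ResourceGroupName "myAppResourceGroup" -Name "myAppVm"
$vm = Add-AzureRmVMDataDisk -VM $vm -Name "myAppData01" -ManagedDiskId $disk.Id -Lun 0 -CreateOption Attach
Update-AzureRmVM -ResourceGroupName "myAppResourceGroup" -VM $vm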

Choose the right type of VMs

Once the application team has decided to use an IaaS-based solution landing on VMs, spending time deciding on the correct VM family and size will prove beneficial. There are several approaches to do this, some of which are outlined below.

  1. For pure lift-and-shift scenarios, the current on-premises deployment serves as a starting baseline. Once you have the baseline, it is incumbent on each application team to profile the system characteristics under load. Once the load characteristics are known, they can be mapped to the appropriate Azure VM family and the Azure Compute Units required to run the same application on Azure VMs (see the size-listing sketch below).
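A minimal sketch of the size-listing step (region, resource group and VM names are assumptions):

Get-AzureRmVMSize -Location "westeurope" | Sort-Object NumberOfCores, MemoryInMB   # every size the region offers
Get-AzureRmVMSize -ResourceGroupName "myAppResourceGroup" -VMName "myAppVm"        # sizes an existing VM can move to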

Choosing the appropriate platform construct

Many customers fall into this category: they do not take the time to understand the cloud-native constructs provided by Azure and go for the easy pattern of lift and shift. While this is an easy mental model for experienced IT staff, it is not in tune with the various PaaS, SaaS and identity models that Azure offers. This approach almost invariably ends with the wrong model being chosen, often advised by infrastructure-background personnel who do not have the background to understand the developer capabilities of the platform and who tend to push the discussion toward the legacy approach of deploying to VMs. From a cloud perspective, and from an evolving software-architecture point of view, these legacy approaches are rapidly becoming obsolete. This is a trap that should be avoided; application teams should take the help of seasoned application developers who have kept abreast of the cloud-native constructs baked right into the platform. The cost savings of such an approach are immense. Some of the benefits of microservices and cloud-native capabilities are listed below.

(a)    Overall savings compared with blindly spinning up VMs – this is an old model and should really be the last resort, given the cost of managing IaaS-based deployments as well as constantly monitoring, patching, backing up and otherwise taking care of the environment. The lack of auto-scaling also means potential cost savings are never realized.

(b)    Several hundreds to thousands of hours of savings for the development and deployment team, as the cloud-native constructs are built for rapid, agile, iterative cycles and smooth DevOps-style deployment. None of these are realized in the old-fashioned legacy approach of deploying into a VM/server.

(c)     Look for cloud native constructs the platform provides – a few examples

  1. Use of Cloud neutral and OS neutral capabilities like Service Fabric (SF)
  2. If SF is not feasible deploy the application into containers
  3. Serverless constructs like Azure Functions

(d)    Use PaaS capabilities such as the following: -

  1. Azure Web Applications
  2. Azure Logic Apps
  3. Azure Event Hubs
  4. Azure Service Bus
  5. Azure Event Grid

Most enterprise message-bus type applications can be implemented by a combination of Azure Logic Apps, Azure Functions (or WebJobs) with durable functions, and storage. Such decoupling will lead to significant cost savings over and above the existing legacy architectures, besides being more scalable and easier to manage, resulting in lower overall costs. Application reviews done so far point to the fact that customers are still following the older IT model of deploying into servers, and this is a serious issue from a software-architecture perspective. Such teams end up carrying huge technical debt and will have to undergo a painful refactoring later to modern software architectures that exploit cloud-native capabilities, in order to realize the resultant benefits of rapid DevOps, huge time savings, cost benefits and agile development cycles, besides having a decoupled and highly decomposed (componentized) architecture that allows for independent versioning and management of the different components that make up the solutions/applications they need to continually evolve and operate.

 

REACTIVE Approach - things to do post deployment

Once the customer has started deploying into Azure, there are certain things that should be done post facto, on a continuous basis, to wring out more cost efficiencies. These are spelt out below.

Azure Advisor

Turn on tools like Azure Advisor and implement all of its cost recommendations. The following link provides an introduction to Azure Advisor, which is a very simple tool to turn on, and its recommendations are equally simple to implement.

https://docs.microsoft.com/en-us/azure/advisor/advisor-overview

Cloudyn type of tools

Cloudyn is another tool that can be used to study usage and perform cost-benefit analysis; it is recommended to turn it on, study its findings and recommendations, and then implement them. Note that both Cloudyn and Azure Advisor use heuristics and AI to figure out usage patterns and make recommendations, so the longer these tools are turned on and have access to data, the better their recommendations become. The following link provides information on how to view cost data with Cloudyn.

https://docs.microsoft.com/en-us/azure/cost-management/quick-register-azure-sub

 

Check if deployments can be converted for Reserved Instance (RI)

After having deployed into Azure, customers often retroactively check that they are exploiting Azure RI wherever they can. It is incumbent on Azure administrators, or those responsible for the apps and their associated costs, to periodically scan the usage data and figure out which Azure VM instances RI can be applied to.
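A rough way to start that scan (a sketch, not a complete analysis) is to list the VMs currently running across the subscription; machines that appear in this list month after month are natural RI candidates:

Get-AzureRmVM -Status |
    Where-Object { $_.PowerState -eq "VM running" } |
    Select-Object Name, ResourceGroupName, Location, @{ Name = "Size"; Expression = { $_.HardwareProfile.VmSize } }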

Check if deployments can be converted for AHUB

The same applies to leveraging existing enterprise licenses for Windows Server which the organization has already paid for. For all Windows VMs, the AHUB benefit should be turned on.

 

Study the entities within the RG

This is an activity that should be done periodically, ideally at least every couple of months, by each application team as part of the sprint cycle. The suggestion is to review the usage and cost data for every resource group in the monthly consumption data. By picking the top ten spenders in each RG, focusing on them, and figuring out optimizations for them, the following can be realized (a short enumeration sketch follows the list):

  1. Cost savings, if any, can be effected.
  2. Valuable skills will be gained over time by doing this exercise repeatedly.
  3. Rinse and repeat at least every 2 months.
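A minimal enumeration sketch for one resource group (the name is hypothetical; very old AzureRM releases exposed this through Find-AzureRmResource instead):

Get-AzureRmResource -ResourceGroupName "myAppResourceGroup" |
    Select-Object Name, ResourceType, Location |
    Sort-Object ResourceType
# Cross-reference this list with the RG's monthly consumption data to find the top ten spenders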

 

Azure Compute VMs – Capability and Design Guidance

The sizing and tiering options provide customers with a consistent set of compute choices, which expands as time goes on. From a sizing perspective, each series is characterized by properties such as:

  • Number of CPUs
  • Memory allocated to each virtual machine
  • Temporary local storage
  • Allocated bandwidth for the virtual machines
  • Maximum data disks
  • GPU availability

VM Capability Guidance

 

Some virtual machine series include the concept of Basic and Standard tiers. A Basic tier virtual machine is only available on A0-A4 instances, and a Standard tier virtual machine is available on all size instances. Virtual machines available in the Basic tier are provided at a reduced cost and carry slightly less functionality than those offered at the Standard tier. This includes the following areas:

Capability Consideration Capability Decision Points
CPU Standard tier virtual machines are expected to have slightly better CPU performance than Basic tier virtual machines
Disk Data disk IOPS for Basic tier virtual machines is 300 IOPS, which is slightly lower than Standard tier virtual machines (which have 500 IOPS data disks).
Features Basic tier virtual machines do not support features such as load balancing or auto-scaling.

The following table is provided to illustrate a summary of key decision points when using Basic tier virtual machines:

Size: Basic_A0 – Basic_A4
Available CPU Cores: 1 – 8
Available Memory: 768 MB – 14 GB
Available Disk Sizes: operating system = 1023 GB; temporary = 20 – 240 GB
Maximum Data Disks: 1 – 16
Maximum IOPS: 300 IOPS per disk

 

In comparison, Standard tier virtual machines are available for all compute sizes.

Capability Consideration Capability Decision Points
CPU Standard tier virtual machines have better CPU performance than Basic tier virtual machines.
Disk Data disk IOPS for Standard tier virtual machines is 500. (This is higher than Basic tier virtual machines, which have 300 IOPS data disks.) If the DS series is selected, IOPS start at 3200.
Availability Standard tier virtual machines are available on all size instances.
A-Series features ·       Standard tier virtual machines include load balancing and auto-scaling.

·       For A8, A9, A10, and A11 instances, hardware is designed and optimized for compute and network intensive applications including high-performance computing (HPC) cluster applications, modeling, and simulations.

·       A8 and A9 instances have the ability to communicate over a low-latency, high-throughput network in Azure, which is based on remote direct memory access (RDMA) technology. This boosts performance for parallel Message Passing Interface (MPI) applications. (RDMA access is currently supported only for cloud services and Windows Server-based virtual machines.)

·       A10 and A11 instances are designed for HPC applications that do not require constant and low-latency communication between nodes (also known as parametric or embarrassingly parallel applications). The A10 and A11 instances have the same performance optimizations and specifications as the A8 and A9 instances. However, they do not include access to the RDMA network in Azure.

Av2-Series features ·       Represents a new version of the A-Series VMs, with the amount of RAM per vCPU raised from 1.75 GB or 7 GB to 2 GB or 8 GB per vCPU. Local disk random IOPS has been improved to be 2-10x faster than that of the existing A version 1 sizes.
D-Series features ·       Standard tier virtual machines include load balancing and auto-scaling.

·       D-series virtual machines are designed to run applications that demand higher compute power and temporary disk performance. D-series virtual machines provide faster processors, a higher memory-to-core ratio, and a solid-state drive (SSD) for the temporary disk.

Dv2-Series features ·       Standard tier virtual machines include load balancing and auto-scaling.

·       Dv2-series, a follow-on to the original D-series, features a more powerful CPU. The Dv2-series CPU is about 35% faster than the D-series CPU. It is based on the latest generation 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, and with the Intel Turbo Boost Technology 2.0, can go up to 3.2 GHz. The Dv2-series has the same memory and disk configurations as the D-series.

Dv3-Series features

·       Standard tier virtual machines include load balancing and auto-scaling.

·       With Dv3-series, a follow-on to the original D/Dv2-series, Microsoft is introducing a new generation of Hyper-Threading Technology virtual machines for general purpose workloads

Ev3-Series features

 

·       Standard tier virtual machines include load balancing and auto-scaling.

·       A new family for memory-optimized workloads – introducing sizes with 64 vCPUs on the Intel® Broadwell E5-2673 v4 2.3 GHz processor and with 432 GB of memory on the largest Ev3 sizes

DS-Series features ·       Standard tier virtual machines include load balancing and auto-scaling.

·       DS-series virtual machines can use premium storage, which provides high-performance and low-latency storage for I/O intensive workloads. It uses solid-state drives (SSDs) to host a virtual machine’s disks and offers a local SSD disk cache. Currently, premium storage is only available in certain regions.

·       The maximum input/output operations per second (IOPS) and throughput (bandwidth) possible with a DS series virtual machine is affected by the size of the disk.

F-Series features ·       SKU is based on the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, which can achieve clock speeds as high as 3.1 GHz with the Intel Turbo Boost Technology 2.0. Having the same CPU performance as the Dv2-series of VMs, at a lower per-hour list price, the F-series is the best value in price-performance in the Azure portfolio based on the Azure Compute Unit (ACU) per core. The F-Series VMs are an excellent choice for gaming servers, web servers and batch processing. Any workload which does not need as much memory or local SSD per CPU core will benefit from the value of the F-Series.
FS-Series features ·       FS-series virtual machines can use premium storage, which provides high-performance and low-latency storage for I/O intensive workloads. It uses solid-state drives (SSDs) to host a virtual machine’s disks and offers a local SSD disk cache. Currently, premium storage is only available in certain regions.

·       The maximum input/output operations per second (IOPS) and throughput (bandwidth) possible with an FS series virtual machine is affected by the size of the disk.

G-Series features ·       Standard tier virtual machines include load balancing and auto-scaling.

·       Leverages local SSD disks to provide the highest performance virtual machine series that is available in Azure.

GS-Series features ·       Standard tier virtual machines include load balancing and auto-scaling.

·       Leverages local SSD disks to provide the highest performance virtual machine series that is available in Azure.

·       GS-series virtual machines can use premium storage, which provides high-performance and low-latency storage for I/O intensive workloads. It uses solid-state drives (SSDs) to host a virtual machine’s disks and offers a local SSD disk cache. Currently, premium storage is only available in certain regions.

·       The maximum input/output operations per second (IOPS) and throughput (bandwidth) possible with a GS series virtual machine is affected by the size of the disk.

H-Series features

·       H-series VMs are based on Intel E5-2667 V3 3.2 GHz (with turbo up to 3.5 GHz) processor technology, utilizing DDR4 memory and SSD-based local storage.  The new H-series VMs furthermore features a dedicated RDMA backend network enabled by FDR InfiniBand network, capable of delivering ultra-low latency.  RDMA networking is dedicated for MPI (Message Passing Interface) traffic when running tightly coupled applications.

·       Provide great performance for HPC applications in Azure.  H-series VM sizes is an excellent fit for any compute-intensive workload.  They are designed to deliver cutting edge performance for complex engineering and scientific workloads like computational fluid dynamics, crash simulations, seismic exploration, and weather forecasting simulations.

N-Series features A new family of Azure Virtual Machines with GPU capabilities suited for compute and graphics-intensive workloads – aiding in scenarios like remote visualization, high performance computing and analytics. To be available in preview in Q1CY16 the N-series will be based on NVidia’s M60 and K80 GPUs and will feature the NVIDIA Tesla Accelerated Computing Platform as well as NVIDIA GRID 2.0 technology, providing the highest-end graphics support available in the cloud today

 

 

A summary of the capabilities of each virtual machine series is provided in the following table:

 

Each row lists: size; available CPU cores; available memory; available disk sizes; maximum data disks; maximum IOPS.

Basic_A0 – Basic_A4: 1 – 8 cores; 768 MB – 14 GB memory; operating system = 1023 GB, temporary = 20 – 240 GB; 1 – 16 data disks; 300 IOPS per disk

Standard_A0 – Standard_A11 (includes compute-intensive A8-11): 1 – 16 cores; 768 MB – 112 GB memory; operating system = 1023 GB, temporary = 20 – 382 GB; 1 – 16 data disks; 500 IOPS per disk

Standard_A1_v2 – Standard_A8_v2; compute-intensive Standard_A2m_v2 – Standard_A8m_v2: 1 – 8 cores; 2 GB – 64 GB memory; operating system = 1023 GB, temporary SSD disk = 10 – 80 GB; 1 – 16 data disks; 500 IOPS per disk

Standard_D1-D4 and Standard_D11-D14 (high memory): 1 – 16 cores; 3.5 GB – 112 GB memory; operating system = 1023 GB, temporary (SSD) = 50 – 800 GB; 2 – 32 data disks; 500 IOPS per disk

Standard_D1v2-D5v2 and Standard_D11v2-D14v2 (high memory, faster CPU): 1 – 16 cores; 3.5 GB – 112 GB memory; operating system = 1023 GB, temporary (SSD) = 50 – 800 GB; 2 – 32 data disks; 500 IOPS per disk

Standard_DS1-DS4 and Standard_DS11-DS14 (premium storage): 1 – 16 cores; 3.5 GB – 112 GB memory; operating system = 1023 GB, local SSD disk = 7 GB – 112 GB; 2 – 32 data disks; 43 – 576 GB cache size, 3,200 – 50,000 IOPS total

Standard_F1 – F16: 1 – 16 cores; 2 GB – 32 GB memory; local SSD disk = 16 GB – 256 GB; 2 – 32 data disks; 500 IOPS per disk

Standard_FS1 – FS16: 1 – 16 cores; 2 GB – 32 GB memory; local SSD disk = 4 GB – 64 GB; 2 – 32 data disks; 4,000 – 64,000 IOPS

Standard_G1 – G5 (high performance): 2 – 32 cores; 28 GB – 448 GB memory; operating system = 1023 GB, local SSD disk = 384 GB – 6,144 GB; 4 – 64 data disks; 500 IOPS per disk

Standard_GS1 – GS5 (high performance, premium storage): 2 – 32 cores; 28 GB – 448 GB memory; operating system = 1023 GB, local SSD disk = 56 GB – 896 GB; 4 – 64 data disks; 264 – 4,224 GB cache size, 5,000 – 80,000 IOPS total

Standard_H8 – H16: 8 – 16 cores; 56 GB – 224 GB memory; local SSD disk = 1,000 – 2,000 GB; 16 – 32 data disks; 500 IOPS per disk

 

 

 

The table below also describes differences between A-series and Av2-series

 

A-series size (vCPU, RAM in GiB, disk) -> Av2-series equivalent (vCPU, RAM in GiB, disk):

A1 (1 vCPU, 1.75 GiB, 20 GB HDD) -> A1_v2 (1 vCPU, 2 GiB, 10 GB SSD)
A2 (2 vCPU, 3.5 GiB, 70 GB HDD) -> A2_v2 (2 vCPU, 4 GiB, 20 GB SSD)
A3 (4 vCPU, 7 GiB, 285 GB HDD) -> A4_v2 (4 vCPU, 8 GiB, 40 GB SSD)
A4 (8 vCPU, 14 GiB, 605 GB HDD) -> A8_v2 (8 vCPU, 16 GiB, 80 GB SSD)
A5 (2 vCPU, 14 GiB, 135 GB HDD) -> A2m_v2 (2 vCPU, 16 GiB, 20 GB SSD)
A6 (4 vCPU, 26 GiB, 285 GB HDD) -> A4m_v2 (4 vCPU, 32 GiB, 40 GB SSD)
A7 (8 vCPU, 52 GiB, 605 GB HDD) -> A8m_v2 (8 vCPU, 64 GiB, 80 GB SSD)

 

These sizing and capabilities are for the current Azure Virtual Machines, and they might expand over time. For a complete list of size tables to help you configure your virtual machines, please see: Sizes for Virtual Machines.

 

Azure VM Design Guidance

 

Design Guidance

 

When you design solutions for using virtual machines, consider the following:

Capability Considerations / Capability Decision Points
Deployment order: If you intend to deploy an application that may require compute-intensive resources, it is recommended that customers provision a virtual machine to a cloud service with the largest virtual machine (such as Standard_G5) and scale it down to a more appropriate size. The reason is that virtual machines will be placed on the clusters that have the faster processors. It also makes scaling easier, and it is more efficient to combine resources.
Supportability: The following are not supported in a virtual machine on Microsoft Azure:

·       Multicast

·       Layer-2 routing

·       32-bit OS versions

·       Windows Server OS versions prior to Windows Server 2008 R2

Note: Windows Server 2003 / 2008 32-bit OS versions are supported for deployment on Azure virtual machines with substantial limitations, including no support for agents or extensions.

 

 

Usage Analysis Tools

The following tools are available as part of the platform; consider turning them on:

(A)    Azure Advisor

(B)    Azure Security Center

(C)    Cloudyn

They analyze the underlying data emitted by various Azure resources and arrive at recommendations from the perspective of:

(i)     Cost

(ii)    Security

(iii)   Performance

In most cases these recommendations are implementable in an automated fashion, if not via a single click. One can also review and implement the recommendations manually if one prefers to do that.

 

 

 

Work Item Search issues in multiple regions – 02/22 – Investigating


Update: Thursday, February 22nd 2018 18:26 UTC

We have identified mitigation steps and have tested it on a single scale unit which has successfully resolved the problems. We are in the process of rolling it out globally.

  • Next Update: Before Thursday, February 22nd 2018 19:45 UTC

Sincerely,
Sri Harsha


Initial Update: Thursday, February 22nd 2018 17:13 UTC

We're investigating an issue where some customers may experience work item searches returning 0 results in all regions.

  • Next Update: Before Thursday, February 22nd 2018 17:45 UTC

Sincerely,
Randy


Diving deep with conversational bots using Azure Bot Service


Abstract:

As we all know, our industry is going through a major transformation in terms of UI experience. Every single developer today strives to build software that is more user friendly and intelligent. With the rising demand of easier and simpler interfaces to perform tasks, we now have the ability to create bots – an app which you can interact with, using messages or voice, in a conversational way.

Join the journey as we learn more about what a bot is, what it can do, where is it useful, how to build one and how we can add intelligence to one. This session aims at providing a solid understanding of Azure Bot Service to develop intelligent conversational AI solutions.

Webinar Date & Time:  March 16th 2018, 10 AM GMT

Invite: Download the Calendar Invite

Speaker Bio:

Jawahar Ganesh S currently works with Microsoft India as a support engineer on the Developer Web Apps team. He has written blogs and delivered multiple sessions and hands-on workshops on recent cutting-edge technologies to benefit developers in the IT industry. His areas of interest include ASP.NET, Azure Bot Services, and web and cloud application development. He currently lives in Bengaluru and is eager to meet all of us in this session.

Abhilash Rejanair works as a Support Engineer for Microsoft as an IIS/ASP.NET engineer. He has had the opportunity of working with a lot of Developers to troubleshoot issues spanning multiple teams and build projects used in production scenarios. His current interests are Bots, Machine learning, Automation, Web development, Nodejs and python programming.

Learn How To Build A Mixed Cloud Model For SharePoint, And More On This Week’s Friday Five


C# 7.2 : In Parameters 

Jaliya Udagedara is from Sri Lanka and is currently working as a Technical Lead for a software company headquartered in Australia. He's been a Microsoft MVP since January 2014, initially under Visual C# category, then .NET and now under Visual Studio & Development Technologies. He's passionate about everything related to .NET. Jaliya is also a TechNet Wiki Ninja and a blog author at TNWiki Ninja’s official blog. Follow him on Twitter @JaliyaUdagedara.

Quickstart Your D365/NAV Connect API Usage

Tobias Fenster works as the CTO for Axians Infoma, a German ISV building a Dynamics 365 / NAV solution for the public sector. He started as a developer and always kept coding while working in different areas of IT. Recently he helped kick off the Docker initiative for Dynamics NAV and shares his work around Docker, D365/NAV, TFS and other topics on https://navblog.axians-infoma.com. Follow him on Twitter @TobiasFenster.


HTML Client For Microsoft Remote Desktop Services 2016

Freek Berson is an Infrastructure specialist at Wortell, a system integrator company based in the Netherlands. Here he focuses on End User Computing and related technologies, mostly on the Microsoft platform. He is also a managing consultant at rdsgurus.com. He maintains his personal blog at themicrosoftplatform.net where he writes articles related to Remote Desktop Services, Azure and other Microsoft technologies. An MVP since 2011, Freek is also an active moderator on TechNet Forum and contributor to Microsoft TechNet Wiki. He speaks at conferences including BriForum, E2EVC and ExpertsLive. Join his RDS Group on Linked-In here. Follow him on Twitter @fberson.  

Building A Mixed Cloud Model For Sharepoint – Part 1

Thuan Nguyen is a Subject Matter Expert in Digital Workplace and a 7-year Office Services and Servers MVP. With nearly 10 years of experience in Information Technology and Services, Thuan has been involved in a number of successful Microsoft Digital Workplace adoptions for mid-tier and large organizations, including government agencies, in which his responsibilities range from technical evangelism, pre-sales activities, proof-of-concept mentoring and technical support to architecting the entire solution. He is a regular guest speaker at several international conferences on SharePoint, Office 365 and Azure. Follow him on Twitter @nnthuan.


Change Which Organizational Units (OUs) Are Synced to Office 365

Gareth Gudger is a Microsoft MVP based in the USA, specializing in Exchange and Office 365. He started his career in 1999 working on Windows NT 4.0 and Exchange 5.5. For most of his career he has worked in a consulting role, primarily delivering Microsoft technologies to businesses. He holds several Microsoft certifications including an MCSE in Messaging, and MCSA in Office 365. Gareth blogs at www.supertekboy.com, a site focusing on Exchange, Office 365 and adjacent technologies. Follow him on Twitter @SuperTekBoy.  

Microsoft Teams Configuration Tab: We couldn’t save your tab settings. Please try again.


You can troubleshoot JavaScript in Teams using the web-based version of Teams via: https://teams.microsoft.com

If you're building a configuration tab and get the following message:

We couldn't save your tab settings.

You can bring up the F12 Developer Tools to see what's going on and even debug your JavaScript. For example:

Clicking on configuration.htm, you can see I had forgotten to use the SSL ngrok endpoints, which was the cause of this error. This is a prerequisite referenced here: https://docs.microsoft.com/en-us/microsoftteams/platform/concepts/tabs/tabs-configuration

The Newest Cryptocurrency Trend: Privacy Coins – What is it and how you can benefit from it?


Guest post by Janya Eighani - Principal Solicitor at Lehman Walsh Lawyers

Is it really a private coin given the fact that blockchain technology is a decentralised system?

What are Privacy Coins?

Cryptocurrency investors are not limited in their choices. There are hundreds of cryptocurrencies and at least 39 of them have a market capitalisation of over US$1billion.

However, not all cryptocurrencies provide the same level of anonymity for those who use them.

Regular cryptocurrencies, such as Bitcoin or Ethereum, provide a certain amount of privacy due to their nature as mechanism of exchange protected by cryptography.

However, despite the ability to use a pseudonym to shield one’s identity, access to a blockchain ledger allows transactions to be viewed publicly. If personal details can be matched to an IP address, use of cryptocurrencies can quickly become observable. Certain businesses, such as Elliptic, operate as analysts of Bitcoin capital flows, demonstrating the relative visibility of Bitcoin payments.

Privacy coins, as their name suggests, offer users the ability to keep their transactions truly incognito. There are a number of privacy coins on the market, such as Monero and Dash, that use various methods to allow their owners to remain anonymous. Advanced cryptographic techniques operate to keep the currency untraceable, while transaction locations and amounts can be protected by distributing the transaction between large numbers of data addresses.

How private are they?

Not all privacy coins provide the same level of concealment. Certain currencies, such as Zcash, continue to use blockchain to authenticate their transactions and can still be traced to an IP address. On the other hand, Monero operates by using dealings that are conducted via throwaway accounts, so that although Monero is exchanged via the blockchain, no traceable link is left to find the users. A further level of privacy is provided by the use of ring signatures to hide the amounts used.

Whether true secrecy on the blockchain can be achieved has been questioned. Advances in cryptography and mathematics are touted as providers of greater security and privacy. However, they may yet be accessible. Privacy technology used by Zcash and Monero relies on elaborate digital associations between multiple transaction verifiers. This places dependence on the other links in the verification process. Users of privacy coins may still find themselves unsure about whether their activity remains opaque.

What is the law concerning transactions made using cryptocurrencies?

There is no comprehensive legal regime for digital currencies. Yet despite cryptocurrencies being relatively new to the general public, some attempts have been made by regulatory authorities to track the identities of users. The current legal framework for regular monetary transactions is primarily aimed at anti-money laundering. The US Bank Secrecy Act (1970) requires that all financial institutions are registered and provide identification of their customers and dealings. In 2013 it was noted that these requirements extended to ‘virtual currencies’, including any blockchain transaction.

Driven primarily by concerns over taxation and the use of cryptocurrencies as a tool of criminal activity, potential regulation has had a destabilising effect on the crypto market. The US Government Accountability Office, an investigative body for the US Congress, has written a report in which one of its stated aims is to provide information on the names of account holders and the amounts of cryptocurrency. In Germany, a capital assets tax on cryptocurrency transactions has been proposed, however, it relies on voluntary reporting of income by users themselves.

It is clear that the potential value of privacy coins may be tied to the requirements stipulated in future government regulations of digital transactions. Should the policing of private cryptocurrency users be tightened, privacy coins may well end up losing value.

Azure PowerShell Login “Remember Me”


We use the “Remember Me” feature quite a lot on our personal computers for quick login. This seamless experience can be achieved in Azure PowerShell as well. When you work with Azure PowerShell, it asks for a login. Once you close the PowerShell window, it automatically clears the context. You may want to keep it on your local machine, based on your preference:

Enable-AzureRmContextAutosave

It will save the information in your local AppData\Roaming\Windows Azure PowerShell folder.

After that, log in using Add-AzureRmAccount. Now if you close the PowerShell window, open it again, and type any Azure command, it should connect to Azure without prompting you to log in again. You might have to refresh the login after some days, once the token expires, but definitely not every time you open PowerShell. This is quite handy when you are working with Azure and using PowerShell to try out many things from a single PC.

If you want to clean it then use

Disable-AzureRmContextAutosave

Namoskar!!!
