Channel: MSDN Blogs

Handling High-Volume Events with Azure Event Hubs


When you need to handle a high volume of events and telemetry per second, the Azure Event Hubs service is an ideal candidate for the workload. Azure Event Hubs is a highly scalable data-streaming platform and event-ingestion service, capable of receiving and processing millions of events per second.

It is the first step in an event pipeline, often called an event ingestor: a component that sits between your event producers and event consumers. To handle millions of events per second, you can use multiple event hubs under an Event Hubs namespace, and each event hub can in turn have multiple partitions. At a very high level, it looks as shown below.

Let’s next explore how we can use Azure Event Hubs to handle high volume. At a high level, there are 3 steps:

  • Configure the Azure Event Hubs service
  • Write a service to push data into different partitions of the event hubs
  • Write a service to read data concurrently from different partitions of the event hubs

Step 1: Configure Azure Event Hubs

To start, we will create an Event Hubs namespace named “Eventhub-namespace-azure”. Create the namespace with the default throughput units, i.e. 2. Under this namespace, create 2 event hubs named “hub1” and “hub2”. By default, each event hub has 2 partitions; change the number of partitions to 4 in both “hub1” and “hub2”. This is possible only while creating the event hub. In the Azure portal, it looks as shown below.

Once you have created an event hub, you can’t modify its number of partitions. For how to create an event hub, refer to the link below:

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-create

Now we are ready to push data into the event hubs. Let’s move on to step 2.

Step 2: Write a service to push data into different partitions of the event hubs

The Java code for sending an event into a specific event hub partition is at the Git location below:

https://github.com/ReenuSaluja/Azure-eventhub-multi-partition-send

To clone the code locally, run the command below at a command prompt:

git clone https://github.com/ReenuSaluja/Azure-eventhub-multi-partition-send

In the code, go to the client.java file, which is located under the com.ms.eventhub package. Replace the values of ----ServiceBusNamespaceName----, ----EventHubName----, ----SharedAccessSignatureKeyName----, and ----SharedAccessSignatureKey---- in the code below

namespaceName = "----ServiceBusNamespaceName-----";

eventHubName = "----EventHubName-----";

sasKeyName = "-----SharedAccessSignatureKeyName-----";

sasKey = "---SharedAccessSignatureKey----";

based on your event hub configuration in Azure. Given the step 1 configuration, the value of namespaceName would be “Eventhub-namespace-azure”. If you want to send events into event hub “hub1”, then the value of eventHubName would be hub1; otherwise, hub2. To send an event to the event hub, the sendSync function is called.

ehClient.sendSync(sendEvent,"partion1");

All events with the same partition key land on the same partition, so all the events with partition key "partion1" will be in the same partition. On the next line there is another statement: ehClient.sendSync(sendEvent,"partion2");

Here the partition key is specified as "partion2", so this event lands in a different partition. The point to note is that although the two events go into different partitions, they are part of the same event hub.
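The partition-key behavior can be illustrated with a small, self-contained sketch. Note this is not the Event Hubs client: the service uses its own internal hash, and partitionFor below is a hypothetical stand-in that only demonstrates the invariant that the same key always maps to the same partition.

```java
import java.util.Arrays;

public class PartitionKeyDemo {

    // Hypothetical stand-in for the service-side hash: maps a partition key
    // to one of N partitions. Azure Event Hubs uses its own internal hash;
    // the point is only that the mapping is deterministic.
    static int partitionFor(String partitionKey, int partitionCount) {
        return Math.floorMod(partitionKey.hashCode(), partitionCount);
    }

    public static void main(String[] args) {
        int partitions = 4; // matches the 4 partitions configured in step 1
        for (String key : Arrays.asList("partion1", "partion2")) {
            // Same key -> same partition, every time it is evaluated.
            System.out.println(key + " -> partition " + partitionFor(key, partitions));
        }
        System.out.println(partitionFor("partion1", partitions)
                == partitionFor("partion1", partitions)); // prints true
    }
}
```

Because the mapping is a pure function of the key, all events sharing a key stay ordered within their partition.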

After running the sample code, go to the Azure portal, select your Event Hubs namespace -> hub1, and click on Metrics.

 

Based on the sample code, 2 messages were sent, one to each partition.

 

Step 3: Write a service to read data concurrently from different partitions of the event hubs

To get sample code that reads data from an event hub, you can either clone it from the location below

https://github.com/ReenuSaluja/Azure-event-hub-reader-multithreading

or you can clone

https://github.com/Azure/azure-event-hubs/tree/master/samples/Java/src/main/java/com/microsoft/azure/eventhubs/samples/Basic

and customize it for multithreading. Either way, you need to follow the steps to create a storage account for the event hub, as described in the following article:

https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/event-hubs/event-hubs-java-get-started-receive-eph.md

If you open EventProcessorSample from https://github.com/ReenuSaluja/Azure-event-hub-reader-multithreading, you will find a variable of type ExecutorService. This variable is initialized with Executors.newWorkStealingPool(2); the number 2 is the number of concurrent threads, and you can increase it as per your requirement. newWorkStealingPool is part of Java 1.8; if you are using an older version, you can simply use the newFixedThreadPool factory method for multithreading. Now there will be 2 concurrent threads running against the same event hub.
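The threading model the sample relies on can be sketched without any Azure dependency. The runReaders helper below is hypothetical (not part of the Event Hubs SDK); it simply shows Executors.newWorkStealingPool(2) fanning two simulated partition readers out across threads, the same way the sample fans out its event processors.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrentReaderDemo {

    // Hypothetical helper: run "reader" tasks on a work-stealing pool and
    // collect their results in submission order.
    static List<String> runReaders(List<Callable<String>> readers, int parallelism) {
        // As in the sample: 2 concurrent threads. On Java 7 or earlier,
        // Executors.newFixedThreadPool(parallelism) is the substitute.
        ExecutorService pool = Executors.newWorkStealingPool(parallelism);
        List<String> results = new ArrayList<>();
        try {
            for (Future<String> f : pool.invokeAll(readers)) {
                results.add(f.get()); // blocks until that task finishes
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return results;
    }

    public static void main(String[] args) {
        // Each task stands in for a reader pumping one partition.
        List<Callable<String>> readers = Arrays.asList(
                () -> "processed partition 0",
                () -> "processed partition 1");
        System.out.println(runReaders(readers, 2));
        // prints [processed partition 0, processed partition 1]
    }
}
```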

A few points to note:

  • In this code we are using the default consumer group. You can enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. In your code you just need to replace the "$Default" value of consumerGroupName with your consumer group name.
  • You can have up to 5 concurrent readers on a partition per consumer group.
  • In the EventProcessor.java file, the statement

context.checkpoint(data);

keeps track of the reader's current position in the event stream. It saves the current checkpoint into the storage account along with the consumer group and event hub details.

You can get more details on this from https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features.
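Conceptually, a checkpoint is just the last processed offset keyed by consumer group and partition. The in-memory store below is an illustrative stand-in for what context.checkpoint(data) persists to the Azure Storage account; the class and method names here are hypothetical, not the SDK's.

```java
import java.util.HashMap;
import java.util.Map;

public class CheckpointDemo {

    // Illustrative in-memory checkpoint store: the real EventProcessorHost
    // persists the equivalent per (consumer group, partition) record into
    // the Azure Storage account configured in step 3.
    private final Map<String, Long> offsets = new HashMap<>();

    void checkpoint(String consumerGroup, String partitionId, long offset) {
        offsets.put(consumerGroup + "/" + partitionId, offset);
    }

    // Returns -1 when no checkpoint exists yet, i.e. the reader starts from
    // the beginning (or from the configured initial offset).
    long lastCheckpoint(String consumerGroup, String partitionId) {
        return offsets.getOrDefault(consumerGroup + "/" + partitionId, -1L);
    }

    public static void main(String[] args) {
        CheckpointDemo store = new CheckpointDemo();
        store.checkpoint("$Default", "0", 42L);
        System.out.println(store.lastCheckpoint("$Default", "0")); // prints 42
        System.out.println(store.lastCheckpoint("$Default", "1")); // prints -1
    }
}
```

On restart, a reader resumes from the saved offset instead of reprocessing the whole partition.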

  • For batch processing of events, you can use a LinkedList of EventData, which is then passed as the parameter to sender.sendSync.
  • If you want to have multiple instances of the reader application:

o   Configure multiple partitions in the event hub.

o   In the reader code from step 3, change the value of “Host1” (a parameter of the EventProcessorHost constructor) so that it is unique for each instance of the reader.
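The batching idea from the first bullet above can be sketched independently of the SDK: accumulate events and send them in groups rather than making one call per event. The batch helper below is hypothetical and uses plain strings in place of EventData.

```java
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;

public class BatchDemo {

    // Hypothetical batching helper: split payloads into groups of batchSize.
    // In the real client each inner list would be a LinkedList<EventData>
    // handed to sender.sendSync(...) in a single call.
    static List<List<String>> batch(List<String> payloads, int batchSize) {
        List<List<String>> batches = new LinkedList<>();
        List<String> current = new LinkedList<>();
        for (String p : payloads) {
            current.add(p);
            if (current.size() == batchSize) { // batch full: seal it
                batches.add(current);
                current = new LinkedList<>();
            }
        }
        if (!current.isEmpty()) {
            batches.add(current); // trailing partial batch
        }
        return batches;
    }

    public static void main(String[] args) {
        System.out.println(batch(Arrays.asList("e1", "e2", "e3"), 2));
        // prints [[e1, e2], [e3]]
    }
}
```

Fewer, larger sends reduce per-call overhead; the trade-off is slightly higher latency for the events waiting in a partial batch.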

 

Recap

To handle high-volume events with Azure Event Hubs, we first configured the event hubs. Then we used the Java API to insert data into different partitions of an event hub. Finally, we created a multithreaded Java application to read data from the event hub partitions.


Line Messaging API C# SDK moves to new repository


Thank you for using the LINE Messaging API for C# SDK so far. To consolidate the community's efforts, development of the LINE Messaging API for C# SDK will continue at https://github.com/pierre3/LineMessagingApi going forward. Many assets, including the Visual Studio templates, have already been moved to that repository. The implementation differs somewhat; details will be covered in separate blog posts.

Kenichiro Nakamura

To everyone who has used my LINE Messaging API for C# SDK so far: thank you. However, I have decided to close down my repository and move everything into https://github.com/pierre3/LineMessagingApi so that we can consolidate community effort in one place. Most of the assets, such as the Visual Studio templates, have already been moved. I will write blog articles on how to use the new library in more detail in future posts. Thanks!

Ken

LINE Messaging API C# SDK, Visual Studio templates and more!


Together with @pierre3, I have re-released the LINE Messaging API library for C#.

GitHub: https://github.com/pierre3/LineMessagingApi

Each folder also includes a description in Japanese. This repository contains the following:

- SDK for the LINE Messaging API in C#
- Azure Functions sample project
- Web App sample project
- Visual Studio extension project

The extension project is already published on the Marketplace. An Azure Functions v2 version is also available, so Mac users are covered too. Please give it a try!

 

We have finally released the LINE Messaging API for C# SDK, Visual Studio templates, and samples.

GitHub: https://github.com/pierre3/LineMessagingApi

This repo contains the following.

- C# SDK for LINE Messaging API
- Azure Functions sample
- Web App sample
- Visual Studio extension project

The VS extension is on the Marketplace, too. You can play with it on Mac, as an Azure Functions v2 sample is also available!

Enjoy!

Ken

Exploring Big Data: Course 3 – Introduction to NoSQL Data Solutions


Big Data with Sam Lester

(This is course #3 of my review of the Microsoft Professional Program in Big Data)

Course #3 of 10 – Introduction to NoSQL Data Solutions

Overview: The “Introduction to NoSQL Data Solutions” course comprises five modules, covering NoSQL services in Azure, Azure Storage Tables, DocumentDB in Cosmos DB, MongoDB, and an overview module that briefly introduces Cassandra, Lucene and Solr, HBase, and Redis. The lab instructions are very good, making it easy to get through the labs and explore the resulting solutions. Modules #2-4 each go in-depth on a single technology, while module #5 gives a very short introduction to several different technologies.

Time Spent / Level of Effort: I spent roughly 12-14 hours total for this course, including watching all the videos at double speed and completing each of the labs. This course was extremely easy to pass with a high score since both the quizzes and the exam questions were straight from the content. I had passed the required 70% mark before the 5th module but completed it to learn more about these technologies that were new to me.

Course Highlight: Since I have a bit of experience exploring Azure Storage Tables and Cosmos DB, the highlight of this course for me was module 4, where we installed and executed queries against a MongoDB instance using the Mongo shell. Module #5 was also great to work through, as it provided a high-level overview of several technologies. That level of detail was sufficient for me to learn the basic concepts and understand how the tools are used without spending much time on each one, since it is unlikely that I’ll be using any of these technologies soon in my current role.

Apache Solr


Suggestions: While navigating through the course, be sure to read the code examples provided in between the videos; the quizzes and final exam use this content extensively. Also, since the labs walk through the steps that provide the answers for the quizzes, it is easy to set up the environment and then instantly delete it from your Azure account after answering the questions. Instead of taking this approach, explore the environments (MongoDB, DocumentDB, etc.) to learn more about them.

Also, when creating your MongoDB instance, use the "Database as a service for MongoDB" option to create the instance through Cosmos DB, as opposed to the Linux-hosted instance you get when selecting the "MongoDB" option.

MongoDB

If you have taken this course in the past or are going through it now, please leave a comment and share your experience.

Thanks,
Sam Lester (MSFT)

Exploring Big Data: Course 5 – Delivering a Data Warehouse in the Cloud


Big Data with Sam Lester

 

(This is course #5 of my review of the Microsoft Professional Program in Big Data)

Course #5 of 10 – Delivering a Data Warehouse in the Cloud

Overview: The Big Data course titled “Delivering a Data Warehouse in the Cloud” walks you through the key concepts of a SQL Data Warehouse (DW) in Azure, including the steps to provision a DW, followed by lectures on designing tables and loading data, and finishing with big data integration with Hadoop using PolyBase. During the course, the four lab exercises require you to install numerous software applications used in a data warehouse environment, including SQL Server Management Studio (SSMS), SQL Server Data Tools (SSDT), Visual Studio Community, Visual Studio Code, the Azure Feature Pack for SSIS, and Azure Storage Explorer. The download and installation of these tools is part of the lab exercises, as opposed to the course providing a pre-built VM with the required software; as a result, you can’t complete labs #2-4 without going through this setup. I would have preferred an image with the required software, since I’m very familiar with installing and configuring each of these applications. For those who don’t have experience with some of these tools, however, the course is a great way to walk through the installation and basic functionality of data warehouse tools in addition to the Azure DW content.

Time Spent / Level of Effort: I spent about 10 hours total for this course. I watched the videos from part 1 on double speed and then finished the quiz from that section. At that point, I decided to do all four of the labs consecutively. This took me around 2.5-3 hours, but I felt like it was a great use of time to do them back-to-back since I could focus on everything in Azure, including the numerous tools introduced. After completing the labs, I went back to the videos to watch parts 2-4, followed by the quizzes, and then the final exam.

By completing the labs all at one time, I was able to minimize Azure costs by shutting down the VM when finished. Here is the resource usage for the SQL Data Warehouse VM while working through the labs.

Azure Data Warehouse with Sam Lester

Course Highlight: It felt like completing the labs took an exceptionally long time due to the required installation steps, but the payoff after finishing the course is that I have a great Azure DW demo environment to continue to use for demos and presentations. I also enjoyed the videos on PolyBase, since I haven’t had a chance to explore it for a customer-related project to date. To me, the chance to watch videos and build solutions around popular topics (such as PolyBase) while going through the program is a huge benefit that helps me remain relevant in so many interesting areas of the data industry. The other aspect of the labs that I enjoyed was executing the same step through two different tools: for example, uploading data through bcp and Azure Storage Explorer, as well as running T-SQL through both Visual Studio and SSMS.

Suggestions: The final exam for this course was harder than any of the previous 15 edX courses I have taken, as it didn’t feel like the videos and labs prepared you directly for the questions. I found most of the answers by reading documentation and trying out the scenarios in the lab environment. Don’t forget to fill out the final survey / question after completing the course, as this contributes to your score. Also, notice that I took course #5 directly after completing course #1, since there is no requirement on the ordering of the classes as long as you stay within the required schedule.

There are also a few small items I encountered in the labs where the documentation is incorrect because the product functionality has been updated. One example is populating a table using Azure Data Factory in Lab 3: the default Data Factory version is now V2, but the Copy Data functionality the lab instructions rely on is available in version 1 (V1). Creating the Data Factory using Version 1 allows you to continue with the lab as documented.

Azure Data Factory

There is another small issue when using T-SQL with PolyBase to load data. The provided T-SQL code begins with the line "CREATE MASTER KEY;"; since the key already exists, the script fails. You can work around this by removing that single line and continuing with the lab.

Overall, it was a very educational course that covered a lot of material and introduced several software applications used in a data warehouse environment. If you have taken this course in the past or are going through it now, please leave a comment and share your experience.

Thanks,
Sam Lester (MSFT)

MIEE Spotlight- Scott Barron



In today's MIEE Spotlight, we look at the inspiring work of Scott Barron, Deputy Curriculum Leader at Alsager High School, and the impact Office 365 has had in his school.

Scott always strives to use technology in the classroom to enhance learning, and with his support, Office 365 is now widely available to staff and students across Alsager High. Scott was able to drive this forward through staff training and support across the school, running workshops and seminars. He has used Office 365 tools such as OneNote Class Notebook, OneDrive and Groups to foster collaboration in the classroom and beyond, and has empowered staff to collaborate more fully throughout the whole school by utilising Staff Notebooks.

His students have become confident users of OneNote, using it to gather notes and complete homework assignments, class projects and more! His staff have been able to keep their lesson plans, observations and parent communication organised effectively using the power of OneNote Staff Notebook.

You can follow Scott on Twitter @maxwell01782 to keep up to date with his fantastic work to drive technological innovation forward within his school.


Interact with the Sway below to hear more about Scott's development and classroom practices in his own words.


 


Follow in the footsteps of our fantastic MIEEs and learn more about how Microsoft can transform your classroom with the Microsoft Educator Community.


 

WebUI access is slow or failing in West Europe – 11/13 – Mitigated


Final Update: Monday, November 13th 2017 14:23 UTC

We’ve confirmed that all systems were back to normal as of 11/13/2017 12:35 UTC. Our logs show the incident started on 11/13/2017 12:10 UTC and that, during the 25 minutes it took to resolve the issue, customers experienced slow and failed commands within the UI in the West Europe region. Sorry for any inconvenience this may have caused.

  • Root Cause: We have collected diagnostic data and continue to actively investigate the issue.
  • Chance of Re-occurrence: High
  • Lessons Learned: We are working both on minimizing resource-intensive activities in our post-deployment steps and on targeting monitors specifically to detect post-deployment issues in the future.
  • Incident Timeline: 25 minutes – 11/13/2017 12:10 UTC through 11/13/2017 12:35 UTC

Sincerely,
Randy


Initial Update: Monday, November 13th 2017 12:39 UTC

  • We're actively investigating error and performance degradation while using Visual Studio Team Services web site in West Europe.
  • Our telemetry shows that some users might see HTTP 503 errors as well as slow rendering of various Visual Studio Team Services web pages.

Next Update: Before Monday, November 13th 2017 14:00 UTC

Sincerely,
Olivier Dovelos

November 28, Kyiv: Microsoft Developer School


We invite you to the first Microsoft Developer School in Kyiv, which will be held in the Open Hack format.

OpenHack means:

  • no more than 10-12 teams of 2-5 people;
  • 3 days of coding, scripting, and other fun;
  • experts from Microsoft and new technologies;
  • industry experts and real-world use of these new technologies.

Who are we waiting for?

Participation in the Microsoft Developer School is free. We expect teams of 2-5 people from your company, either to work on an existing project or to get acquainted with the technology stack of a project relevant to you.

Key tracks

Together with you, we will build a project in one of the school's tracks:

  • serverless computing, containers, and microservices;
  • IoT;
  • big data and machine learning;
  • augmented and virtual reality.

See you in Kyiv on November 28-30 for the three-day Open Hack!

Let's learn new technologies together!


Visual Studio Shortcuts and Add-on Tools


This post from Premier Developer consultant Crystal Tenn walks you through customizing Visual Studio to work better for you and your organization.


I like tools that make my development faster and more organized; the small amount of time it takes to install and learn these tools pays off in the long run! I have listed the shortcuts I use in Visual Studio and how to change your settings if you want to adopt some of them or make up your own easy-to-remember ones. You can share settings across a team so that everyone is more productive and in sync. I also like to change my new classes so that they are public by default; instructions are below. In addition, I use add-ons to check my spelling, so it is easy for others to find my work (it is hard for others to find my class if I spell it wrong), and I have listed a couple of options in this article. I did not go into it here as it is lengthy, but I also recommend ReSharper, which has many tools to help you write code faster and more effectively!

*As a note, all screenshots are taken with Visual Studio 2017.


How to get to edit Visual Studio Shortcuts:

  1. Click on Tools > Options

    clip_image002

  2. Under Environment, go to Keyboard. The highlighted “Show commands containing:” box corresponds to the “VS Mapping” column in the table you will see next. With a command selected, press your shortcut keys to assign a new shortcut to it. You can also type shortcut keys and check the “Shortcut currently used by” box below (greyed out in the screenshot) to find out whether they are currently used by anything, for example if you need to find the name of a shortcut you use but are not sure what it is called.

    clip_image004


| Action | VS Mapping | Recommended Shortcut | How to remember it | Default |
| --- | --- | --- | --- | --- |
| Project / Files / References | | | | |
| Add a new class | Project.AddClass | Ctrl+N, Ctrl+C | N for New and C for Class | N |
| Add new Project | File.AddNewProject | Ctrl+N, Ctrl+P | N for New and P for Project | N |
| Add existing Project | File.AddExistingProject | Ctrl+N, Ctrl+E | N for New and E for Existing | N |
| Set current project as startup | Project.SetasStartUpProject | Ctrl+S, Ctrl+P | S for Set as Startup and P for Project | N |
| Add Reference to selected project | Project.AddReference | Ctrl+A, Ctrl+R | A for Add and R for Reference | N |
| Code Related | | | | |
| Comment out code | Edit.CommentSelection | Ctrl+K, Ctrl+C | | Y |
| Comment in code | Edit.UncommentSelection | Ctrl+K, Ctrl+U | | Y |
| Collapse all methods | Edit.CollapsetoDefinitions | Ctrl+M, Ctrl+O | | Y |
| Collapse all code | Edit.ToggleAllOutlining | Ctrl+M, Ctrl+L | | Y |
| Uncollapse all code | Edit.StopOutlining | Ctrl+M, Ctrl+P | | Y |
| Rename all | Refactor.Rename | Ctrl+R, Ctrl+R | | Y |
| Fix all code alignment | Edit.FormatDocument | Ctrl+K, Ctrl+D | | Y |
| Commenting template | | Type /// on the line above what you want to comment, then hit Enter | | Y |
| Navigational | | | | |
| Go to Declaration | Edit.GoToDeclaration | F12 | | Y |
| Go to Implementation | Edit.GoToImplementation | Ctrl+F12 | | Y |
| Navigate To | ReSharper VS config default | Ctrl+T | | Y |
| Go to Solution Explorer | View.SolutionExplorer | Ctrl+S, Ctrl+E | S for Solution and E for Explorer | N |
| Go to Team Explorer | View.TfsTeamExplorer | Ctrl+T, Ctrl+E | T for Team and E for Explorer | N |
| Go to Test Explorer | TestExplorer.ShowTestExplorer | Ctrl+U, Ctrl+T | U for Unit and T for Tests | N |
| Previous page | View.NavigateBackward | Ctrl+- | | Y |
| Forward page | View.NavigateForward | Ctrl+Shift+- | | Y |



How to export Visual Studio Shortcuts:

  1. Click Tools > Import and Export Settings ….

    clip_image006

  2. On the popup, choose Export selected environment settings and hit Next >
  3. To choose only the keyboard settings, UNCHECK all settings, then go under Options, then Environment, and CHECK Keyboard.

    clip_image007

How to import Visual Studio Shortcuts:

  1. Go to the same menu from Tools > Import and Export Settings …
  2. Choose Import selected environment settings then hit Next >
  3. Select whether you just want certain settings or all of them; for Keyboard, use the same mapping as in the screenshot in the previous set of instructions.
  4. Choose the .vssettings file you want to import and where you want it stored.
  5. Hit Finish

How to set new classes to be public by default:

  1. Go to one of these locations, depending on which version of VS you own:
  • VS2015: C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\ItemTemplates\CSharp\Code\1033\Class
  • VS2017 (RC): C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\ItemTemplates\CSharp\Code\1033\Class
  2. Edit the template and add the public keyword before class. Add any other changes you would like to a default class or its usings.

  • Code Commenting Tool:

    GhostDoc helps you fill in as much of your comments as possible ahead of time so that you can customize them a little more. It saves you time by putting a default summary and all your parameters into plain English.
    *Note: the Pro/Enterprise editions come with free spell checking!

    GhostDoc Community (free) for VS2017: https://marketplace.visualstudio.com/items?itemName=sergeb.GhostDoc

    GhostDoc Pro/Enterprise/Community for VS2015: https://submain.com/download/ghostdoc/pro/

    Cost of Pro/Enterprise editions: https://submain.com/order.aspx

    Features for different editions: https://submain.com/products/ghostdoc.aspx#features

    4-minute kick-starter tutorial: https://submain.com/ghostdoc/GettingStarted/

    Spelling Check Tool: ReSpeller (free!)

    If you don’t get GhostDoc’s paid versions and you want a free spelling tool, you can get one from the folks at JetBrains, who made ReSharper, by downloading this:

    https://resharper-plugins.jetbrains.com/packages/EtherealCode.ReSpeller/

Managing your Microsoft Dynamics AX 2012 business with role center Cues


I have been involved for a while with a company that has used role center Cues both to manage the state of the business and to manage the different situations that had to be supervised, such as entities being stuck in a work process.

In general, I am very fond of Cues as a fast and efficient business-insight tool, requiring little to no development while giving access to a very large variety of business data. That said, Cues also have some limitations, which in my opinion stem from the fact that Cues received one iteration of code and a version 2 never came around. At the end of this blog I describe the code enhancements we have made to cue administration.

First, a quick story about the name. Cues were made for AX 2009, and I remember that the name was discussed quite a bit at Microsoft at the time. The word cue means a hint, intimation, or guiding suggestion, and I remember that "cue card" was used to explain what it was. If you use a picture search engine, however, you may well get a different kind of result.

Personally, I would have appreciated a bit more effort on what to name this concept back then. For non-English users the name is different: in (my) Danish it is called "køer", which is not a great word either. Directly translated it means "queues", but køer also has a double meaning in Danish:

You can read more about Cues on TechNet, both here and here.

Basic cue creation

You can create a cue on most forms in AX (also non-list-page forms). As long as you can access the advanced filter (Ctrl+F3), you can save the query you have made as a cue. Here, for example, I have made a query on open sales order lines that have a unit price of zero.

On doing this you are presented with a dialog that gives you the option to name the new cue, designate who can view it, and finally choose whether you wish to have an alert on the cue when the number of records in it goes above or below a certain threshold.

Here I have named my new cue "Sales order lines with unit price 0". Setting "Show alert when count is" is a personal favorite of mine whenever I make a cue with an undesired outcome such as this: with this setting I will get a warning message telling me to react to this cue whenever more than zero sales order lines with a unit price of zero are found.

On pressing OK the cue is saved and, usually, automatically added to my role center; note the warning that I have more than zero records in that cue.

I skipped the "Show sum of" field. When I check this, I get a drop-down with a subset of the fields on the table in question. Here I get 7 options:

Without knowing it exactly, it does seem that the reason these fields are offered and not others is that each of them (on, in this case, the sales order line table) extends the Amount EDT. Using this option is most certainly at the user's own peril, because we are reading values directly from the table, and whatever comes out has probably never been intended for user consumption. In the above example I would probably use the LineAmount field, although I would be concerned about multicurrency. The result is the following:

Role center personalization

The role center contains a range of good options. For the sake of this blog I have personalized my role center to have a few more cue groups. Having more than one cue group is good for using the available space within the role center more efficiently. Also, when you start having many cues within a cue group, it takes longer and longer to load the data within that group, so having more cue groups makes data become available faster:

Role center personalization is the topic of quite a few sites, so briefly: press Personalize, select Microsoft Dynamics AX, choose Cues in the Web Parts, and then choose where to add the cue part, in my example the right column:

Secondly, it is very important to add a Cue Group Name to get different sets of cues in each cue group:

The cue group names come from the existing cue groups created in the AOT under Parts > Cue Groups. I personally pick one that only has a few existing cues in it, to empty it more quickly. I once had the experience that adding a cue group made the entire role center freeze. I resolved it by resetting the user's personalization and using a different cue group name. That is a rather unfortunate thing to do if a great many cues have been added to the cue groups, so be careful, and perhaps take screenshots of added cues if you are working on an existing role center.

Advanced Cues

Now, to truly get something meaningful out of Dynamics AX, you will need to understand how to build more advanced queries.

Make sure you understand the basics of querying: https://technet.microsoft.com/en-us/library/aa569937.aspx

And how to add additional tables to an existing query: https://technet.microsoft.com/en-us/library/aa551356.aspx

I have written a blog on the SysQueryRangeUtil methods to use, which you can find here: http://pingala.eu/en/blogs/dynamic-query-ranges

Building the query can be a pain and will often require the user to enter the AOT to find the relationships, and not least what the joining tables are named. For example, if I wanted to make a query on sales order lines that are late and reserved physically, I would extend the previously used query with the inventory transaction originator and inventory transaction tables to get to the Issue status field.

In this example I would probably also have to work with some SQL statements within the query, like you can read about here:

https://timsaxblog.wordpress.com/2015/07/23/how-to-use-advanced-filter-queries-in-ax-2012/

Managing Cues

Under Organization administration > Setup > Role center > Enterprise portal you will find the Edit cues form, which shows the cues that have been created within the organization.

It does not provide a great many options: I can change the name, the two cue options, and the visibility. For some reason a restriction has been added on the ability to delete cues if they are present on a user's role center:

It is possible to get rid of these without having the users manually remove the cues, but it requires access to the AOT to find the table SysCueGroupMembership, and perhaps also SysCuePersonalization. The other option is to have a developer change the delete action types.

    Improving upon the Cues framework

    For a customer we created an enhancement to the Cue framework. The customer's focus was to manage the (very many) cues they had in place. That includes two things: being able to change the query that underlies a Cue instead of having to recreate it, and the ability to manage the Cues a user has on their role center.

    The ability to change a query was added to the Edit cues form:

     

    Second option is to see the memberships of the individual Cue by pressing the button Manage cue membership:

    From here the options are to delete an existing cue membership, meaning to remove a cue from a user.

    The second option is to add the cue to a selected user:

    Cue membership can also be viewed from the user:

    Let me know if you wish to try out this change.

     

    Azure usage reports under an EA (Enterprise Agreement)


    Hello, this is the Microsoft Azure support team.

    This article covers the following topics regarding Azure usage reports under an EA agreement.
    We hope you find it helpful.

     

    1. How to download the usage report
    2. Roles that can download the report
    3. Description of each column
    4. Notes

     

    1. How to download the usage report


    This section explains how to download the most detailed usage report from the EA portal.

     

    1. Go to the following URL:
      https://ea.azure.com
    2. Sign in with an account that has the appropriate role.
      (Roles are described in the next section.)
    3. Click [Reports] on the left side of the screen.
    4. Click [Download Usage] at the top of the screen.
    5. Click [Download Monthly Report].
    6. Click [Download] in the [Usage Detail] column for the relevant month.

     

    2. Roles that can download the report


    This section explains which roles can download the usage report described above from the EA portal.
    The following roles can download the usage report:

     

    • Enterprise administrator
    • Department administrator
    • Account owner

     

    1. Enterprise administrator

    The enterprise administrator is the highest-level role in the EA portal.
    Accounts with this role can download the usage report.

     

    2. Department administrator

    Accounts with the department administrator role can also download the usage report if the EA portal is configured as follows:

     

    • Markup status: Published
    • DA view charges: Enabled

     

    For details on markup status and the DA view, see the following help topics in the EA portal:

     

    Enterprise portal partner price markup
    https://ea.azure.com/helpdocs/partnerPriceMarkup

    Reporting for non-enterprise administrators
    https://ea.azure.com/helpdocs/accountOwnerReporting

     

    3. Account owner

    Accounts with the account owner role can also download the usage report if the EA portal is configured as follows:

     

    • Markup status: Published
    • AO view charges: Enabled

     

    For details on markup status and the AO view, see the following help topics in the EA portal:

     

    Enterprise portal partner price markup
    https://ea.azure.com/helpdocs/partnerPriceMarkup

    Reporting for non-enterprise administrators
    https://ea.azure.com/helpdocs/accountOwnerReporting

     

    3. Description of each column


    This section describes each column of the usage report.
    Note that the exact values should be confirmed in the EA portal.
    * [Resource Rate] and [Extended Cost] are reference values.

     

    Column name: Description

    Account Owner Id: The ID of the account owner of the subscription.
      Example: taro@contoso.com

    Account Name: The account name of the subscription's account owner.
      Example: Taro Suzuki

    Service Administrator Id: The ID of the subscription's service administrator.
      Example: taro@contoso.com

    Subscription Id: The subscription ID of the subscription.
      Example: 12345678901

    Subscription Guid: The subscription GUID of the subscription.
      Example: 1a2b3c4d-1a2b-3c4d-1a2b-1a2b3c4d5e6f

    Subscription Name: The name of the subscription.
      Example: Microsoft Azure Enterprise

    Date: The date on which the usage occurred.
      Example: 10/01/2017

    Month: The month in which the usage occurred.
      Example: 10

    Day: The day on which the usage occurred.
      Example: 1

    Year: The year in which the usage occurred.
      Example: 2017

    Product: The product name of the resource for which usage occurred.
      Example: A1 VM (Windows) - JA East

    Meter ID: A unique ID per value of the Product column; rows with the same Product share the same ID.
      Example: 6bbddf59-3fce-4600-90bd-d8f1e565ebe9

    Meter Category: The top-level category of the usage meter.
      Example: Virtual Machines

    Meter Sub-Category: The sub-category of the usage meter.
      Example: A1 VM (Windows)

    Meter Region: The region of the usage meter.
      Example: JA East

    Meter Name: The name of the usage meter.
      Example: Compute Hours

    Consumed Quantity: The actual usage. For example, if the Unit of Measure column is "Hours" and this column is "24", 24 hours were used; if the Unit of Measure is "GB" and this column is "3", 3 GB were used.
      Example: 24

    Resource Rate: The reference price per unit of the resource; equal to [Extended Cost] / [Consumed Quantity].
      Example: 10.8199999531736

    Extended Cost: The billed amount (reference value) for the resource.
      Example: 259.68

    Resource Location: The location where the resource usage was metered.
      Example: JA East

    Consumed Service: The name of the consumed service.
      Example: Microsoft.Compute

    Instance ID: The resource name or fully qualified resource ID of the resource.
      Example: winsrv01(winsrv01)

    ServiceInfo1: The name of the project the service belongs to in the subscription.

    ServiceInfo2: A legacy field that captures optional service-specific metadata.

    Additional Info: Additional information specific to the resource.
      Example: ComputeSmall

    Tags: Tags. For details, see:
      https://docs.microsoft.com/ja-jp/azure/azure-resource-manager/resource-group-using-tags
      Example: {"constCenter":"finance", "env":"prod"}

    Store Service Identifier: Unused column.

    Department Name: The department of the subscription (as configured in the EA portal). For details, see the EA portal help:
      https://ea.azure.com/helpdocs/createADepartment

    Cost Center: The cost center of the subscription (as configured in the EA portal). For details, see the EA portal help:
      https://ea.azure.com/helpdocs/updatePurchaseOrderNumber

    Unit of Measure: The unit of the usage meter.
      Example: Hours

    Resource Group: The resource group the resource belongs to.
      Example: Rg-vm-001
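    As a quick sanity check on the relationship between Consumed Quantity, Resource Rate and Extended Cost described above (Resource Rate = Extended Cost / Consumed Quantity), here is a small TypeScript sketch; the row shape and helper name are illustrative and not part of the official report format:

    ```typescript
    // Hypothetical helper: derive the per-unit reference rate from the
    // Extended Cost and Consumed Quantity columns of one usage report row.
    // Field names follow the table above; this is an illustration only.
    interface UsageRow {
      consumedQuantity: number; // e.g. 24 (Hours)
      extendedCost: number;     // e.g. 259.68
    }

    function resourceRate(row: UsageRow): number {
      if (row.consumedQuantity === 0) {
        return 0; // avoid division by zero on zero-usage rows
      }
      return row.extendedCost / row.consumedQuantity;
    }

    // Using the example values from the table: 259.68 / 24 ≈ 10.82
    const rate = resourceRate({ consumedQuantity: 24, extendedCost: 259.68 });
    console.log(rate.toFixed(2));
    ```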

     

    4. Notes


    Please note the following point.

    It can take five or more business days for a given day's usage to be fully reflected in the report.
    Example: usage from October 1 may not be reflected until October 6 or later.

    We appreciate your understanding on this point.

     

     

    We hope the information above is helpful.
    We will continue to strive to provide useful information about our products and services.
    Thank you.

    Eltham High rethinks teaching and learning with Surface Pro and cloud – 21st Century skills


    Technology has taken learning from slate to paper to screen; from pencil to pen to stylus. Consequently, each generation of students should emerge with the skills and smarts to participate fully in the workforce of the day.

    Victoria’s Eltham High School is keenly aware of the need to equip today’s students with 21st century skills, and understands that digital technologies and the cloud provide the platform that allows students to connect, collaborate and create both within and out of the classroom.

    It equips them for today and prepares them for tomorrow.

    The school’s transition to Microsoft Surface devices began in 2015 when school leaders recognised the potential of the technology to transform both teaching and learning and began a one-to-one computer programme that has been well received by students, parents and guardians.

    The technology’s all-in-one approach – combining tablet, touchscreen and digital stylus with the power of Windows 10 and Office 365 – equips students with important digital literacy skills and a better understanding of new modes of working. At the same time, access to learning platforms such as OneNote and class notebooks encourages children to learn, to participate and to collaborate.

    As a result of Eltham’s initiatives students now have:

    • seamless access to a rich array of digital resources and tools;
    • more understanding of how to work effectively in a digital environment;
    • enhanced collaboration skills and opportunities;
    • unlimited note-taking capacity; and
    • anytime anywhere access to learning content and their notes.

    According to the school: “Having permanent, 24-hour access to digital learning technologies is an essential component for success in today’s digital learning environment. With greater access to real-time learning resources and assistance from peers and staff, students experience higher levels of motivation and engagement in their learning.

    “The barrier between school and home is also blurred as learning happens all the time, anywhere.”

    Children who may be away from school because of an illness or injury can still learn and collaborate with their peers from home or hospital. Students who may be shy are often more willing to participate fully in an online environment.

    Turbocharge transformation

    To turbo-charge the transformation Eltham High provided a first tranche of 20 Surface devices to staff teaching students in years 7-9 allowing them to further familiarise themselves with the technology, enhance their own digital literacy and also explore how to spark new ways of learning and deepen student engagement.

    Paired with effective professional development to explain to teachers how to make the most of the technology, staff were also encouraged to collaborate on lesson designs that made the most of the technology and also encouraged students to use multimodal inputs including the stylus as well as the keyboard.

    That has had particular impact in some faculties; language teachers for example note how much easier it is for students to put in accents when writing in foreign languages when using the stylus rather than the keyboard.

    The flexibility and richness of the platform has had many other significant benefits.

    Adam Scanlon, Eltham’s Year 7 co-ordinator and integrated studies teacher, says: “Providing a platform like OneNote with digital paper and a pen allows students to think freely. The ability to use images, audio and typed notes all in one piece of “paper” provides a multimodal aspect to the digital learning environment.”

    Lessons meanwhile can be enriched with embedded video, podcasts, and surveys with content automatically synchronised.

    Scanlon says the same applies to the content that students create. “I like to provide scaffolds in OneNote to assist my students with navigating our different sections. When doing text analysis many of my students are now screen clipping or creating their own Venn diagrams to make comparisons.

    “Students are able to choose how they approach and choose the mode they respond, whether it be through images, typed, inked hand notes, annotations or even audio.”

    The rich array of response options is also encouraging higher participation rates according to French teacher Sue Keating. “OneNote class books have been a powerful formative assessment tool for me - to be able to use the collaboration space for students to make group responses.

    “Students who would not normally contribute have been making contributions and it’s been instant feedback for me to see who needs further support or even extending.”

    Learning loops

    The technology has also expanded the ways in which teachers can communicate with children and provide learning support and recommendations.

    Eltham e-learning co-ordinator Luke Herring says that bringing the pen back – albeit in a digital format – has been welcomed by teachers allowing them to instantly annotate student work and provide valuable feedback.

    “Being able to assess and provide instantaneous feedback to students through oral presentations has enabled my students to instantly see the feedback I have provided in their OneNote. By the time they get back to their seat after presenting to their class they can see my feedback,” says Herring.

    Students also appreciate the instant feedback. As one Eltham High student notes: “I use OneNote for nearly all my classes. In science we write prac reports and in English we do our essays in our private section. Our teacher can write over it and record their feedback, which is good because I can listen back to it.”

    Scanlon says that the rich digital solutions now available to both students and teachers also strip away the artificial constraints of exercise books and worksheets encouraging more creative and complete response.

    “Lined paper in a book puts parameters on what students will produce. The freedom and flexibility of inking in OneNote means students are not constrained in their individual workbook,” he says.

    Wireless connectivity in the classroom and cloud based applications also means that teachers and students are not confined to their desks but can locate themselves where it makes most sense through the day.

    Importantly, the solution also provides teachers with a window into students’ learning approaches. Scanlon says: “Through my students using pen in OneNote it makes thinking more visible and I can more easily make interpretations of my students’ understanding.”

    The school now faces the challenge of ensuring this approach to learning with technology is applied consistently across all classes at the school to transform teaching and learning practices across the entire community.

    As Luke Herring notes, Surface and OneNote are enhancing teaching best practice rather than demanding a confronting and wasteful rip-and-replace approach: “We do not need to reinvent the wheel. It’s about curating the best resources, best questions and best modes of learning to ensure students build knowledge as well as the important 21st century skills.”

    The learning advantage of this rich digital approach is clear to French teacher Sue Keating. “Finally I have a tool that allows me to move away from my lessons being teacher driven and encourage students to be creative and integrate their own digital material.

    “Now learning is driven by my students, they are already becoming more flexible and I have noticed they are more creative in how they approach responding to tasks. Already I have seen students become more autonomous and self-regulating in their learning, making deeper connections to learning intentions.”

    They are, in short, ready for the 21st Century.

    Learn how to integrate Surface, OneNote and Windows 10 into your classroom by downloading our instructional OneNote here. It contains:

    • Surface videos teaching you 'when to pen'
    • Ideas and samples (student work from Eltham Students across a range of subjects)
    • 4 strategies for teaching
    • 4 strategies for learning
    • ‘Have A Go’ templates to help teachers learn how to use the functionality of Inking in OneNote & Windows 10
    • Ways to ink in Windows 10.

    Our mission at Microsoft is to equip and empower educators to shape and assure the success of every student. Any teacher can join our effort with free Office 365 Education, find affordable Windows devices and connect with others on the Educator Community for free training and classroom resources. Follow us on Facebook and Twitter for our latest updates.

     

    Creating work item form extensions in Visual Studio Team Services


    Visual Studio Team Services (VSTS) and its on-premises version, TFS, have an extensibility framework that lets third-party developers write and publish their own extensions. A VSTS extension is just a set of contributions, where each contribution targets one of the contribution points provided by VSTS, such as hubs, pivots, menu items, work item forms, and more.

    This blog describes how to write efficient work item form extensions that contribute to a work item form page, group, or control. General documentation for writing extensions is published in Microsoft docs, which also covers how to write work item form extensions. In this post, I provide more details on how to write well-performing work item form extensions.

    How does the work item form work?

    To understand the best time to run initialization logic for your contribution, let me first give you a little detail on how the work item form works. When a user opens a work item (either a standalone work item creation/edit or from a query result grid), it creates the form based on the work item’s type. If you open a “Bug” work item, it will create a form instance for the “Bug” type. This form instance will also load all the extension iframes after its creation. After that, if you open a “Feature” type, it will create a new form instance for this type and load new instances of the same contribution iframes in the form.

    Each work item type’s form has its own instances of form contributions. These forms are cached by their work item types. If you open multiple “Bug” work items, they will use the same form instance (and thus same contribution instances). For example – if you have a query that returns 2 Bugs and 2 Features, and you open Bug #1, the system will create a “Bug” form. After that, if you open Bug #2 from the grid, it won’t create a new instance of the same form, instead it will unload Bug #1 from the same form instance and load Bug #2 in it. If you open Feature #3 after this, it will create a new form instance for the Feature type (if it hadn’t been created yet on the page) and load new instances of contributions in this new form.

    Understanding how the form is created is useful because many developers think that every time they open a new work item form, it creates new instances of contributions. They therefore place most of their initialization logic in the constructor of their contribution app. Instead of starting your contribution logic in the constructor, you should initialize it in the work item form "onLoaded" event, as described in the next section.

    Note that this logic is true only in triage mode (when you open a work item form side by side with a query results grid). If you open a standalone work item (either in a dialog or in a full view), it creates a new instance of the form every time, thus creating new instances of extension iframes. In any case, initializing the contribution's logic in the constructor is always a bad idea; instead, initialize it in the "onLoaded" event.

    Work item form contribution points

    The work item form has 5 contribution points (excluding a menu contribution which is not discussed in this post) –

    1. Page contribution

      A page contribution contributes to the list of tabs (or pages) in the work item form –

      In the screenshot above, “Related work items” tab is a contributed tab. When users click this tab, it will load the contribution iframe in the form below the tab. Note that the contribution is not loaded into the form until the user clicks the tab.

    2. Group contribution

      A group contribution contributes to the grid of groups in the form –

      The system loads a group contribution as soon as it renders the tab that contains the contribution.

    3. Control contribution

      A control contribution is placed inside a group. Similar to group contributions, a control contribution is also not loaded until the tab that contains the contribution is rendered.

    4. Menu contribution

      A menu contribution contributes to the work item form’s menu bar. These contributions fall under the ellipsis menu item.

    5. Observer contribution

      If you want a contribution to not render on the form, but still be able to interact with the form, then you use the observer contribution. This contribution is initialized as a hidden iframe in the form. It’s not visible to the user but it can still interact with the form using the services provided (as mentioned below).

    Work item form extension services

    Two services – IWorkItemNotificationListener and WorkItemFormService – support the ability for contributions to interact with the work item form. Using these services, a contribution can interact with a work item form in the following two ways:

    Listen to form events

    Contributions can listen to form events via IWorkItemNotificationListener. A work item form contribution can register event handlers to certain work item events which will be called whenever the work item form fires those events. The events are as follows:

    1. onLoaded

      Fired when a work item is loaded in the form, or when a contribution is initialized on an already loaded work item form. This is a good place to initialize your contribution’s logic, because firing of this event means that a work item is ready to be used. When a work item form is opened, it first creates the form (along with all its contributions) and then starts loading the work item data from the server. Once the work item data is loaded, the data is “bound” to the form and at this point it will fire the onLoaded event. Since the contributions are created during form creation, it is not guaranteed that a work item is ready in the form during their creation. So, trying to perform actions on the work item form as soon as the contribution is loaded won’t work. All the work item form related activities should start from the point the onLoaded event is fired. Note that if you switch between work items of the same work item type in a query result grid, it will fire onLoaded event for each work item which is bound to the form. As mentioned above, when a new work item is opened in the form, it doesn’t instantiate a new instance of contribution, instead it uses the same form instance and just unbinds the previous work item and binds the new work item.

    2. onUnloaded

      Fired when a work item is unloaded in the form. This is particularly useful when users switch between different work items in a query result grid. When users switch between work items of the same type, it will first fire an onUnloaded event on the form for the old work item and then fire an onLoaded event on the form for the new work item. This event is a good place to dispose the internals of a contribution.

    3. onSaved

      Fired when a work item is saved by the user on the form.

    4. onRefreshed

      Fired when a work item is refreshed from the form.

    5. onReset

      Fired when a work item is reset in the form.

    6. onFieldChanged

      Fired whenever a field is changed on the form. This event fires whenever the field value changes – either manually, by some work item rule, or by calling a function on the WorkItemFormService.

    More information on these events is available at Extend the work item form.
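    Putting the events above together, a contribution registers a single listener object whose methods mirror these event names. The sketch below models that shape in plain TypeScript so the sequencing is visible; in a real extension the object is registered with VSS.register against the contribution's id, and the work item ids and field names here are hypothetical examples:

    ```typescript
    // Minimal sketch of an IWorkItemNotificationListener-style handler set.
    // A log records the order in which the form fires events.
    interface FieldChangedArgs {
      id: number;
      changedFields: { [refName: string]: any };
    }

    function createListener(log: string[]) {
      return {
        onLoaded: (args: { id: number }) => { log.push(`loaded:${args.id}`); },
        onUnloaded: (args: { id: number }) => { log.push(`unloaded:${args.id}`); },
        onFieldChanged: (args: FieldChangedArgs) => {
          log.push(`changed:${Object.keys(args.changedFields).join(",")}`);
        },
      };
    }

    // Simulate triage mode: work item 1 binds to the form, a field changes,
    // then the user opens work item 2 of the same type – the SAME listener
    // instance gets an unloaded event followed by a new loaded event.
    const log: string[] = [];
    const listener = createListener(log);
    listener.onLoaded({ id: 1 });
    listener.onFieldChanged({ id: 1, changedFields: { "System.Title": "Fix bug" } });
    listener.onUnloaded({ id: 1 });
    listener.onLoaded({ id: 2 });
    console.log(log.join(" | "));
    ```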

    Perform actions on work item form

    The Work Item Form Service allows contributions to perform certain actions on the form. This service is available to all contribution types, but it only works when a work item form is active on the page. The functions exposed by this service are documented in the link above, so I am not going to explain each of them in detail. I will point out some interesting facts about a few of the functions.

    1. Save()

      Calling the save() method on the form service saves the current state of the work item in the form (if it's in a valid state). This function returns an empty promise which is resolved if the save succeeds and rejected if the save fails. If the work item in the form has not been changed at all, or if the work item is in an invalid state, this function is a no-op and the promise neither resolves nor rejects.

    2. setFieldValue()

      Calling setFieldValue() will update the value for a field on the form and mark the work item as dirty. It will also fire onFieldChanged event on the IWorkItemNotificationListener object registered by the contribution. Keep in mind that any field change by the contribution would also fire a field changed event on the contribution. This information is useful because you don’t want to end up in an infinite loop where on a field change event, the contribution sets a field value which would again fire the event and so on.

    All 5 kinds of work item form contributions are similar except for how they are rendered on the form. All of them can interact with the form using these two services.

    A simple work item form group extension

    I am going to demonstrate using as an example a real custom group extension that is on the marketplace (link). This form group extension is built on Node using npm, TypeScript 2.3 (using TypeScript async/await) and React. The source code for this extension is here. Although the extension could be written in JavaScript, I highly recommend using TypeScript to avoid runtime errors, as most errors are caught at build time in TypeScript. Microsoft releases TypeScript declarations of its internal classes, services and interfaces every 3 weeks, which you can use if your extension needs to call VSTS REST APIs or work item services. The SDK can be found here.
    I'll assume that readers know how to create the initial structure of a VSTS extension and jump into some best practices on writing form contributions. If not, please look into the samples provided by VSTS here.

    The checklist group extension starts with the declaration of its manifest in the vss-extension.json file:

    When the system loads a contribution, it loads an iframe targeting the starting html file. In this case, it’s the index.html file.

    As soon as the html loads, it'll call this javascript code. A couple of things to note here –

    1. Authors can call VSS.init to configure the initialization of the extension. VSTS lets authors use some of its inbuilt controls and libraries in an extension, if “usePlatformScripts” is set to true. If authors also want to use css styles published by Microsoft, they can also choose to set “usePlatformStyles” to true. If the contribution wants to use its own styles and controls, then it can set them to false.
    2. When the VSTS host page tries to load a contribution iframe, it sets a load timeout, and if the html page is not fully loaded by that timeout, the page will show an error in place of the extension. To avoid this, make sure that your html file loads as quickly as possible. Requiring all your javascript and css files in the <head> of the html file will make loading slow, because the page is not fully loaded until all those resources are. I suggest you lazy load your starting javascript module so that it doesn't block the html render. That's why I load my "scripts/App" javascript module using VSS.require, which is just a wrapper over RequireJS.
    3. As soon as your module is ready, it needs to call VSS.notifyLoadSucceeded to notify the host page that contribution has successfully loaded.
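    The three points above can be sketched as a startup script. This is a non-runnable bootstrap fragment that depends on the browser-hosted VSS SDK; the "scripts/App" module path and the App.init() entry point are this sample's own names, so substitute your bundle layout:

    ```typescript
    declare const VSS: any; // provided by the VSTS host page's SDK script

    // Configure initialization: we notify the host ourselves once ready,
    // and opt in to the platform's scripts and styles.
    VSS.init({
        explicitNotifyLoaded: true,
        usePlatformScripts: true,
        usePlatformStyles: true
    });

    // Lazy-load the main module (VSS.require wraps RequireJS) so the html
    // page renders quickly and does not hit the host's load timeout.
    VSS.require(["scripts/App"], (App: any) => {
        App.init(); // hypothetical entry point of the contribution module
        VSS.notifyLoadSucceeded(); // tell the host the contribution loaded
    });
    ```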

    Let’s see how the App module is defined.

    When a contribution loads, there is no guarantee that a work item is bound to the form at that moment. Instead of initializing the internals of the contribution in the constructor of the main module, initialize them in the onLoaded() event callback and dispose of them in the onUnloaded() event callback. This makes the contribution robust in scenarios where the user is switching between work items in a query result grid.

    This is the crux of a simple form extension. A page contribution can be written like a group contribution. A control contribution is also the same except it can take some inputs from the user so they can configure the contribution per their needs.

    Writing a custom control extension

    A custom control can take some user inputs when a user tries to add it to a work item form. These user inputs are then passed to the control contribution when it's initialized, so that the contribution can initialize itself based on them. A group or page contribution doesn't take any user input because they are meant to be standalone apps. The reason we let users pass input values to a custom control is that a custom control is primarily meant to act as a replacement for first-party field controls on the form. For example, a custom control that replaces the HTML control provided by VSTS with a markdown control. Such a control can be used with any kind of HTML work item field, but only users can decide which field it will be used for.

    I created a simple Pattern custom control that restricts a field value to a certain pattern provided by a user (like an Email pattern or a Phone number pattern). The code for this extension is available at - https://github.com/mohitbagra/vsts-extensions-controllibrary.

    This custom control binds itself to a string-based field (provided by the user) and restricts the field value to a certain pattern which is also provided by the user. If the value provided in that control doesn’t match the specified pattern, it’ll show an error. This control contribution can define its set of inputs in the extension’s manifest –

    In this example, the pattern control contribution needs 3 inputs from a user. Each input is described by a descriptor where authors can specify the type, name and other properties of the input. These descriptors are used to validate the input value provided by the user. For example, if an input is of type "number" and the user tries to enter a string value for it, it won't let the user do so.

    The inputs described in this example are –

    1. A string-based field which should be bound to this control. Notice that the type of this input is "WorkItemField", which means that users won't be able to enter any other value for this input. The type of field can also be constrained by using "workItemFieldTypes" so that the user can only provide fields of certain specified types.
    2. A string pattern which the control should allow as the field value.
    3. A custom error message if the user provides a value to the control which doesn’t match the pattern.

    A control contribution is loaded like a group or page contribution. The difference is that the control contribution will get the input value provided by a user -

    This will return an object where keys are the input ids – as defined in the extension manifest and values would be the values provided by the user. In this example, it would look like –

    { “FieldName”: “Test.PatternField”, “Pattern”: “<pattern>”, “ErrorMessage”: “<error message>” }
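    Given inputs of this shape, the control's core validation reduces to a regular-expression test against the configured pattern. A minimal sketch, with input keys matching the object above and the email pattern chosen purely as an example:

    ```typescript
    // Sketch: validate a field value against the user-configured inputs of
    // the pattern control. Keys mirror the inputs object shown above.
    interface PatternControlInputs {
      FieldName: string;
      Pattern: string;
      ErrorMessage: string;
    }

    // Returns null when the value matches, otherwise the configured error.
    function validate(inputs: PatternControlInputs, value: string): string | null {
      return new RegExp(inputs.Pattern).test(value) ? null : inputs.ErrorMessage;
    }

    const inputs: PatternControlInputs = {
      FieldName: "Test.PatternField",
      Pattern: "^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$", // an email-like pattern
      ErrorMessage: "Value must be an email address",
    };

    console.log(validate(inputs, "taro@contoso.com")); // valid: null
    console.log(validate(inputs, "not-an-email"));     // the error message
    ```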

    To initialize the control, register a work item notification listener which listens to work item events –

    Depending on the need, the control contribution can listen to other events too, like onSaved, onRefreshed etc. But in this example, we only need to listen to 3 events –

    1. onLoaded – So that the contribution can refresh the field value in its control when user loads a different work item in the form
    2. onUnloaded – So that the contribution can clear the control’s value
    3. onFieldChanged – So that the control refreshes its value whenever the value of the field it's bound to changes, either manually by the user or via some work item type rule. The onFieldChanged event is fired for every field value change, but the control only cares about its own field; that's why we check that the field this control is bound to is in the arguments of the field changed event.

    A field-bound custom control needs 2 things to work – listen to the work item's field changes and refresh its control to show the current field value, and listen to user changes in its own control and change the work item's field value.

    Both onLoaded and onFieldChanged callbacks call _invalidate() function which sets the value of the control by reading the current field value of the work item.

    To read the field value of the work item, we use the WorkItemFormService's getFieldValue function. Once you read the value, you can set it on the custom control. In this case, I call a _setValue() function which sets the value in an input box and validates the input based on the pattern provided by the user.

    To set the work item’s field value based on user input into the custom control, we can use WorkItemFormService’s setFieldValue function –

    In this example, the onValueChanged function is bound to the “input” event of the text box shown in the control.

    Note that both the _invalidate and onValueChanged functions read/write a variable "_flushing". This is because the control can set the field value as well as listen to field value changes. When this control sets the field value, it will still get an onFieldChanged event, and we don't want an infinite loop where the control first sets the field value, then gets the field changed event, and then tries to refresh the control's value based on the same field value that it changed originally. This can be avoided by maintaining a private boolean variable.
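    The re-entrancy guard can be sketched with a boolean flag and a minimal fake form service. FakeFormService is a stand-in for WorkItemFormService that synchronously echoes every write back as a field-changed notification (the real service is asynchronous), which is exactly the loop the flag breaks:

    ```typescript
    // Stand-in for WorkItemFormService: every setFieldValue is echoed back
    // as a field-changed notification, simulating the form's behavior.
    class FakeFormService {
      private value = "";
      constructor(private onChanged: (v: string) => void) {}
      setFieldValue(_field: string, v: string) { this.value = v; this.onChanged(v); }
      getFieldValue(_field: string) { return this.value; }
    }

    class PatternControl {
      private _flushing = false;
      public refreshCount = 0;
      private service = new FakeFormService(v => this.onFieldChanged(v));

      // User typed into the control: push the value to the work item field.
      onValueChanged(value: string) {
        this._flushing = true;
        this.service.setFieldValue("Test.PatternField", value);
        this._flushing = false;
      }

      // Field changed on the form: refresh the control, unless we caused it.
      onFieldChanged(_value: string) {
        if (this._flushing) { return; } // our own write bouncing back; ignore
        this.refreshCount++;
      }
    }

    const control = new PatternControl();
    control.onValueChanged("abc");       // echo is suppressed by the flag
    control.onFieldChanged("from-rule"); // an external change still refreshes
    console.log(control.refreshCount);
    ```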

    Auto resize group and control contribution iframe in the form

    Unlike work item form page contributions and hub contributions, control and group contributions on the work item form only get limited space to render in. If you look at the screenshot at the top of this post, you can see how small the group and control contributions are. By default, the work item form gives only 70px of height to control contributions and 300px to group contributions. Since the form is responsive, the width of groups and controls depends upon how much space the whole form has. Although the initial heights of controls and groups are fixed, the form listens to certain resize events from the contributions and resizes the parent host of a contribution to fit it.

    To change the height of a group or control contribution, the contribution can call VSS.resize function and give it appropriate size in pixels –

    VSS.resize(width, height);

    When VSS.resize is called from a contribution, the work item form resizes the contribution’s parent iframe to the height passed to this function. Note that the work item form respects only the height parameter and ignores the width parameter. Also, if null is passed for the height, the form will try to automatically resize the iframe to fit its current contents.

    For example, if you have a checklist group extension which allows user to add checklist items to it, the contribution would want to expand its height as more and more items are added. In this case, whenever a new item is added, the contribution can call –

    This function calls resize with the contribution’s body height. As more and more items are added, the body height increases, and calling VSS.resize with that height expands the contribution’s iframe in the form as well.

    You can also hook this function as a callback to the window resize event, so that whenever the browser window or the form is resized, the contribution automatically resizes itself.
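Put together, a sketch of such an auto-fit helper might look like this. The vss and doc parameters stand in for the real VSS and document globals so the helper can be exercised outside the form:

```javascript
// Sketch: keep the contribution's iframe height in sync with its content.
// In a real contribution, pass the actual VSS and document globals.
function makeResizer(vss, doc) {
    return function resizeToContent() {
        // The form honours only the height argument of VSS.resize,
        // so null is passed for the width.
        vss.resize(null, doc.body.scrollHeight);
    };
}

// In a contribution:
//   var resizeToContent = makeResizer(VSS, document);
//   window.addEventListener("resize", resizeToContent);
```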

    Note that the contribution should always set a max-height on its body element, so that it doesn’t expand too much and make the form look bad.

    Also note that if the contribution makes use of popups, like a popup context menu, they will still be shown inside the contribution iframe and can get cut off at the edge of the frame. To fix this, you can again make use of VSS.resize whenever you show or hide the popup.

    Performance tips

    As I mentioned above, the system sets a timeout for every contribution it tries to load. If the contribution and all of its blocking resources (html, css, javascript) have not loaded before the timeout hits, the system shows an error message in place of the contribution. To prevent your extensions from hitting this error, here are some tips on how to make a contribution load faster:

    1. Do not add all your scripts in the html’s <head>. Instead, load javascript asynchronously using RequireJS.
    2. Note that the system (VSTS/TFS) can provide some third-party libraries, like jQuery, React, ReactDOM, RequireJS and Q, on demand to the contribution. Contributions don’t need to add these libraries in their html. If you are using AMD-style javascript loading in your app, you can just add import (or require) statements for these libraries and VSTS will load them for you. If you write your app in typescript, you can use the typescript declarations for these libraries during development, with no need to actually make the libraries a part of your app.
    3. Make use of bundling tools like webpack. Bundling and minifying can really improve javascript loading time if your app has many javascript files. I personally make heavy use of webpack and some of its plugins, like UglifyJS, CommonsChunkPlugin and css-loader, and it speeds up the app load time greatly.
    4. If your app doesn’t make use of any VSS platform javascript modules or VSS platform css styles, you don’t need to load them from your app. In this case you can just set “usePlatformScripts” and “usePlatformStyles” to false in the code snippet above. If you set “usePlatformStyles” to true, it’ll load a couple of css files and fonts from VSS which you can use in your contribution app; but if you are using your own styles, setting this to false means VSS styles won’t be loaded, which can save you some load time.
    5. If your app uses a lot of VSS controls or libraries, you can add import or require statements for those scripts and they will be loaded from the VSTS page on demand. But these VSS modules are not bundled when loaded this way: even loading a single control like Combo from VSS actually loads a few other script modules individually, which can really slow the app down. To improve this, add a VSS.require statement in your app’s html file and pass it the names of the VSS modules your app uses. If you pass multiple modules to VSS.require(), it won’t load them individually; instead, it loads them in one single bundle.

    Example -
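A sketch of what such a VSS.require call might look like. The module names are illustrative, and the wrapper function exists only so the snippet can be exercised outside the form; in a real contribution you would call the VSS global directly:

```javascript
// Sketch: request every VSS module the app uses in one VSS.require call
// so the platform serves them as a single bundle rather than loading
// each module individually. Module names here are illustrative.
function preloadModules(vss, onReady) {
    vss.require(
        ["VSS/Controls", "VSS/Controls/Combos", "VSS/Service"],
        function (Controls, Combos, Service) {
            // All requested modules arrive together in one bundle;
            // start the app from here.
            onReady(Controls, Combos, Service);
        }
    );
}
```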

    I hope this blog post has given you some new insights into VSTS extensions. I have published 5 extensions to VSTS, and the source code for all of them is on my GitHub. Pretty much all of them make use of React, Typescript 2.3+, Typescript async/await, Office Fabric UI and webpack. You are welcome to use those repos as examples. I have also published an npm package with some common React components that can be used in a VSTS extension.

    Some helpful links –

    1. VSTS sample extensions
    2. Write your first extension for VSTS
    3. Extend the work item form
    4. VSS library typescript typings

    Mohit Bagra

    LinkedIn, Twitter

    Video Tutorial on Minecraft – Part 5


    Would you like to use the #MinecraftEducationEdition in the classroom? In our video tutorials, we explain the basics step by step! Part 5: Mining and crafting

    Schools can now benefit from a particularly attractive offer: with every newly purchased Windows 10 device, they receive the Minecraft Education Edition free of charge for one year. You can find more about the Minecraft Education Edition and the details of the offer here.
    You can find the first four parts of our Minecraft video tutorials here:

    Part 1: The settings

    Part 2: The game modes

    Part 3: Creating a new world

    Part 4: Building a house and taking photos

    Never underestimate the difference children can make



    “We should never underestimate children – they care, they want to be heard and they can make a difference.”- Travis Goulter, Head of Junior School, Ormiston College.

    Early in MIEE Travis Goulter’s teaching career, he learned how important it is to include his students in real-world problem-solving.

    “I taught at a school located in an inner-city suburb,” Goulter shares. “A number of developers wanted to start building multi-story residential and corporate buildings that had the potential to change the very dynamic of the suburb. The issue had divided the community and raised many questions and concerns. In the debate about the development proposal, a voice was missing – the voice of the students.”

    So Goulter created a unit where he provided students with digital cameras. The students took photos and footage, documenting their favourite parts of their suburb.

    “We then collated these and recorded the students explaining why these landmarks, buildings, and places were important to them now and in the future,” he says. “The final product was a short film that was shown to the City Council, developers and community advocates.”

    For Goulter, this experience confirmed that connecting learning experiences to students’ lives leads to engaged kids. Goulter’s classroom challenge became finding opportunities for meaningful problem-solving, and he saw first-hand that “we should never underestimate children – they care, they want to be heard and they can make a difference.”

    He continues to use technology as a way to incorporate meaningful learning each day. One of his favorite learning experiences from the year has been the “Escape Room” challenge he created in OneNote for a group of year-three students who required further enrichment in Mathematics.

    Even with the benefits technology offers to the world of education, Goulter believes that teachers can have all the, “bells and whistles, but this means nothing if they haven’t established a positive relationship with students.”

    “Our society is changing at a rapid pace, and this is evident in education with the evolution of mixed reality, flipped learning, and agile learning environments,” adds Goulter. “With these changes and growing access to research, PLCs and resources, we cannot lose focus on what still makes the biggest difference for our students: positive relationships and a love of learning. Students can access cutting-edge technology and learning environments, but if they do not have a love of learning, they will not maximize these opportunities.”

    Connect with Travis on his Microsoft Educator Community Profile, and be sure to check out his blog for even more inspiring stories.

    About Travis Goulter

    • Educational background: Bachelor of Education
    • Favorites Microsoft product, tool, technology: OneNote Class Notebook
    • What is the best advice you have ever received? Those who dare to teach must never cease to learn.
    • Website I check every day: Twitter
    • Favourite childhood memory: Playing basketball on my outdoor court till the sun went down.
    • Favourite book: Michael Jordan: The Life by Roland Lazenby

    Our mission at Microsoft is to equip and empower educators to shape and assure the success of every student. Any teacher can join our effort with free Office 365 Education, find affordable Windows devices and connect with others on the Educator Community for free training and classroom resources. Follow us on Facebook and Twitter for our latest updates.


    D365ffO: Restricting RDP access by IP range


    The feature for restricting RDP access by IP range is now enabled for Microsoft-managed environments.

    In new deployment environments, partners and customers must specify, for each environment, the IP address range(s) from which they want to allow RDP access.

    Details are described in the “Remote Desktop (RDP) lockdown” section of the following document.

    < Dynamics 365 for Finance and Operations, Enterprise edition cloud deployment overview >

    https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/deployment/cloud-deployment-overview

     

    If you cannot connect to your D365ffO VM via RDP, please check the following points before opening a service request.

     

    • Open the environment in LCS, run Login > Log on to environment to access the D365ffO environment in a browser, and check whether the VM itself is down or only the RDP connection is failing.

     

    • Find your project in LCS and confirm that the environment in question is one of the Microsoft-managed environments (the environment list shown on the project’s main page).
      • If the environment is a cloud-hosted environment in your own subscription, download the RDP file from the Azure portal and check whether you can connect. If you cannot, review the network security group (NSG) configuration or open an Azure support request.

     

    • In your LCS environment, click Maintain > Enable Access and check the IP ranges that are specified.
      • Wildcard characters such as * and % are not supported
      • Ranges must be entered in CIDR notation, e.g. 1.2.3.4/24
      • Ports cannot be specified

     

    • From the machine you use for RDP, go to http://whatsmyip.com to check your public IP, and confirm that it falls within the range specified under Maintain > Enable Access.

    You can also use a tool such as https://www.ipaddressguide.com/cidr to enter a range and verify its start and end IP addresses.
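As an illustration of the CIDR notation required above (this helper is not part of LCS; it is purely illustrative), the start and end addresses of an IPv4 range, and whether a given IP falls inside it, can be computed like this:

```javascript
// Illustrative helper: expand an IPv4 CIDR block such as "1.2.3.4/24"
// into its first/last addresses and test whether an IP is in the range.
function ipToInt(ip) {
    // "1.2.3.4" -> 16909060
    return ip.split(".").reduce(function (n, octet) { return n * 256 + Number(octet); }, 0);
}
function intToIp(n) {
    return [n >>> 24, (n >>> 16) & 255, (n >>> 8) & 255, n & 255].join(".");
}
function cidrRange(cidr) {
    var parts = cidr.split("/");
    var bits = Number(parts[1]);
    // Network mask: the top "bits" bits set, as an unsigned 32-bit value.
    var mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
    var base = (ipToInt(parts[0]) & mask) >>> 0;
    return { start: intToIp(base), end: intToIp((base | (~mask >>> 0)) >>> 0) };
}
function cidrContains(cidr, ip) {
    var r = cidrRange(cidr);
    var n = ipToInt(ip);
    return n >= ipToInt(r.start) && n <= ipToInt(r.end);
}
```

For example, 1.2.3.4/24 spans 1.2.3.0 through 1.2.3.255, so a client at 1.2.4.1 would fall outside the allowed range.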

     

    • If the public IP of the machine you use for RDP is within the range specified under Maintain > Enable Access:

    Download the PSTools utility, which can test the VM’s TCP port, from https://docs.microsoft.com/en-us/sysinternals/downloads/psping. To use this utility, follow the steps below.

      • Download the tool from https://docs.microsoft.com/en-us/sysinternals/downloads/psping and extract the ZIP file to a folder (or run the exe directly from http://live.sysinternals.com/psping.exe).
      • Download the RDP file from the LCS page, right-click the RDP file, and choose Edit
      • Copy the entire value (machine and port) from the Computer field of the Remote Desktop Connection window (Ctrl+C copies it to the clipboard)
      • Open a command prompt and change directory to the folder where you extracted PSTools
      • At the command prompt, type psping <computer name:port> (Ctrl+V pastes the name copied to the clipboard), then run the command
      • If the psping run times out, check the following items:
        • Whether the IP range added to the network security group (NSG) includes the machine you are running RDP from
        • Whether a firewall is blocking access to the address or port used by the RDP environment
        • Whether the VM is not running and needs to be started from the LCS portal (or was started only just now and is still booting)
      • If the psping run succeeds, verify that the user ID and password match what is currently displayed in LCS.

     

    • If, after checking the items above, the address range information shown in LCS (Maintain > Enable Access) is correct but the problem persists, please open a service request with the support team.

    When you open the service request, please also include the following information:

    (1) A screenshot of the IP range configured for the environment

    (2) The IP address captured from http://whatsmyip.com

     

     

    [Related information]

    < Troubleshooting Remote Desktop connections to an Azure virtual machine >

    https://docs.microsoft.com/ja-jp/azure/virtual-machines/windows/troubleshoot-rdp-connection

    MIEE Spotlight – Jacqueline Campbell



    Today's MIEE Spotlight is focused on Scotland's Teacher of the Year, Jacqueline Campbell, Computer Science Teacher from St. Mungo's High School. Jacqueline has been integral in St. Mungo's being awarded Microsoft Showcase School status and supporting the school in becoming a Microsoft STEM School!

    Jacqueline inspires teachers all across Scotland with her innovative uses of Office 365 within GLOW, ensuring all pupils have the skills and knowledge they need for the 21st century. She is passionate about empowering staff and pupil Digital Leaders across her school and cluster to use technology effectively for teaching and learning. As well as this, Jacqueline supports the students in her class with first-class teaching, using OneNote Class Notebook to distribute class work, give formative feedback, and make learning easier with everything in one 'Digital Notebook'.

    Jacqueline also supports local cluster Primary Schools in their use of technology and development of Computer Science. For example, during Digital Learning week in May, Jacqueline took her Pupil Digital Leaders to Primary 6 classes to teach the pupils about effective coding using the BBC Micro:Bit.

    You can follow Jacqueline's class on Twitter @StMungosComp to keep up to date with the incredible work she does in her classroom, school and beyond.


    Interact with the Sway below to hear more about Jacqueline's development and classroom practices in her own words.


     


    Follow in the footsteps of our fantastic MIEEs and learn more about how Microsoft can transform your classroom with the Microsoft Educator Community.


     

    Building Real World Solutions on Azure Government with Machine Learning


    In this episode of the Azure Government video series, Steve Michelotti sits down to talk with Vishwas Lele, CTO, Applied Information Sciences, about building real world solutions on Azure Government with Machine Learning. Vishwas describes a solution built for a classified customer in which machine learning is used to provide recommendations on relevant news articles for analysts. The machine learning functionality of the solution was provided by Microsoft Machine Learning Server (formerly known as Microsoft R Server) which is directly callable from C# code! In addition to the machine learning aspects, the solution leverages several PaaS (Platform as a Service) services on Azure Government including API Management, Cosmos DB, Azure Web Apps, and Azure Storage. If you’re interested in running Machine Learning workloads on Azure Government, this short video is a must watch! 

    The direct link to the video is here. 

    Team Member License in Dynamics 365 for Finance and Operations, Business Edition


    We have had questions, and there has been some confusion, about what can be done when a user is assigned a Team Member license with Dynamics 365 for Finance and Operations, Business Edition. I hope this blog post can help clarify. I have pasted a table below with the most recent and up-to-date description of what a Team Member can do; please review it.

    **Please note that Purchase Quotes are not yet available in Dynamics 365 for Finance and Operations, Business Edition, but they will be included in a future release. Currently there are no purchasing-type documents that a user with a Team Member license can create. When Purchase Quotes are added, there will be a purchasing-type document for Team Member users to create.

    In summary,  Dynamics 365 for Finance and Operations, Business Edition Team Member users can do the following:
    - Read anything that’s enabled in Financials or any other Dynamics 365 Application
    - Update existing data and entries in Financials. Existing data are records, such as customer, vendor, or item records, that have already been created. Entries means entries where updating specific information is explicitly allowed from an accounting perspective (e.g., the due date on customer ledger entries)
    - Approve or reject tasks in workflows assigned to a user
    - Create, edit, delete a sales quote (purchase quotes are in development)
    - Create, edit, delete personal information
    - Enter a Time Sheet for Jobs
    - Use PowerApps for Dynamics 365

    TechDays Sweden 2017


