Channel: MSDN Blogs

AdalException : authentication_ui_failed with ADAL and Xamarin Forms Android


Recently I was helping a client with an Azure Active Directory integrated project (ADAL, not MSAL, for various reasons). All was going well as I plugged away at the various integration pieces. We are targeting Xamarin Forms on this project. Luckily there was a great blog post by Mayur at https://blog.xamarin.com/put-adal-xamarin-forms/ .

Typically I will start with UWP as a 'sanity check' to make sure things are working right. I provisioned my Xamarin UWP project and had it running smoothly, no problems. My iOS project also worked smoothly, authenticating against my test O365 environment like a champ.

Next up was my Android project.  I followed the steps Mayur suggested and gave the program a run. Oooo. Nasty messages! Yuk.

If you look closely you can see the error message "Authentication_ui_failed: The browser based authentication dialog failed to complete" show up.
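
For context, the exception surfaces from the usual ADAL token call that Mayur's walkthrough wires up on Android. A minimal sketch of that call, assuming placeholder authority, client ID, and redirect URI values rather than the real ones:

using System;
using System.Threading.Tasks;
using Android.App;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class AuthHelper
{
    // Placeholder values for illustration only.
    const string Authority = "https://login.microsoftonline.com/yourtenant.onmicrosoft.com";
    const string Resource = "https://graph.windows.net";
    const string ClientId = "00000000-0000-0000-0000-000000000000";
    static readonly Uri RedirectUri = new Uri("http://your-redirect-uri");

    public static async Task<AuthenticationResult> SignInAsync(Activity activity)
    {
        var authContext = new AuthenticationContext(Authority);

        // PlatformParameters carries the current Activity so ADAL can launch its
        // browser-based sign-in dialog. This is the call that throws the
        // authentication_ui_failed AdalException when that dialog cannot be created.
        return await authContext.AcquireTokenAsync(
            Resource, ClientId, RedirectUri, new PlatformParameters(activity));
    }
}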

I tried various things, including adjusting the manifest capability requests. No luck.

It turns out the fix was to adjust the TLS setting from Default to a specific TLS implementation. You can change this in your Android project properties under Android Options > Advanced > SSL/TLS Implementation, switching it from "Default" to Managed TLS 1.1 or Native TLS 1.2+.

Who knew? I sure didn't. And after a mere, well, quite a number of hours digging this out, I'm sharing it with you. Hopefully the SEO gods will bless you and you find this post when hunting for the error.

Interesting to note, and I'll try to find out why, this only occurs before the value of SSL/TLS Implementation is changed for the first time. Once you change from "Default" to one of the other values, even if you go back to Default, the error will not occur. Time to dig out some of the deep Xamarin brains for an answer.

Tested on physical hardware and a HAXM Android 7.1 emulator image several times. To test it you need to build the Android Xamarin Forms project up from scratch. Mayur's instructions are great for this at [https://blog.xamarin.com/put-adal-xamarin-forms/].

In the spirit of sharing I've posted a new ADALForForms sample until Mayur gets his repository updated. My sample can be found at [https://github.com/jhealy/xammutts/tree/master/ADALForForms]. The xammutts repo I posted is .NET Standard with Android, iOS, and UWP support. Mayur had WP, iOS, and Android. Note that since we are targeting .NET Standard with the UWP project, the UWP app requires Fall Creators Update (build 16299) or later; FCU introduced .NET Standard support to UWP. Windows Phone is left as an exercise to the coder.

Partial exception dump for SEO purposes below.


AcquireTokenHandlerBase.cs: === Token Acquisition started:
12-31 12:19:55.341 I/        ( 4651): 	Authority: https://login.microsoftonline.com/M365x947151.onmicrosoft.com/
12-31 12:19:55.341 I/        ( 4651): 	Resource: https://graph.windows.net
12-31 12:19:55.341 I/        ( 4651): 	ClientId: 8f580ff3-2ab3-469b-bd43-158c581757da
12-31 12:19:55.341 I/        ( 4651): 	CacheType: null
12-31 12:19:55.341 I/        ( 4651): 	Authentication Target: User
12-31 12:19:55.341 I/        ( 4651):
12-31 12:19:55.463 V/        ( 4651): 2017-12-31T17:19:55.4308620Z: 1663fe86-17e1-4842-b23e-ca25a43dde8d - AcquireTokenHandlerBase.cs: Loading from cache.
12-31 12:19:55.481 V/        ( 4651): 2017-12-31T17:19:55.4809390Z: 1663fe86-17e1-4842-b23e-ca25a43dde8d - TokenCache.cs: Looking up cache for a token...
12-31 12:19:55.537 I/        ( 4651): 2017-12-31T17:19:55.5378480Z: 1663fe86-17e1-4842-b23e-ca25a43dde8d - TokenCache.cs: No matching token was found in the cache
12-31 12:19:55.729 D/Mono    ( 4651): DllImport attempting to load: '/system/lib64/liblog.so'.
12-31 12:19:55.730 D/Mono    ( 4651): DllImport loaded library '/system/lib64/liblog.so'.
12-31 12:19:55.730 D/Mono    ( 4651): DllImport searching in: '/system/lib64/liblog.so' ('/system/lib64/liblog.so').
12-31 12:19:55.730 D/Mono    ( 4651): Searching for '__android_log_print'.
12-31 12:19:55.730 D/Mono    ( 4651): Probing '__android_log_print'.
12-31 12:19:55.730 D/Mono    ( 4651): Found as '__android_log_print'.
12-31 12:19:55.736 W/monodroid( 4651): JNIEnv.FindClass(Type) caught unexpected exception: Java.Lang.ClassNotFoundException: md5673452a39546f0a71deda1ad064ecc65.AuthenticationAgentActivity ---> Java.Lang.ClassNotFoundException: Didn't find class "md5673452a39546f0a71deda1ad064ecc65.AuthenticationAgentActivity" on path: DexPathList[[zip file "/data/app/com.devfish.xamarin.ADALForForms-1/base.apk"],nativeLibraryDirectories=[/data/app/com.devfish.xamarin.ADALForForms-1/lib/x86_64, /system/fake-libs64, /data/app/com.devfish.xamarin.ADALForForms-1/base.apk!/lib/x86_64, /system/lib64, /vendor/lib64]]
12-31 12:19:55.736 W/monodroid( 4651):    --- End of inner exception stack trace ---
12-31 12:19:55.736 W/monodroid( 4651):   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000c] in :0
12-31 12:19:55.736 W/monodroid( 4651):   at Java.Interop.JniEnvironment+StaticMethods.CallStaticObjectMethod (Java.Interop.JniObjectReference type, Java.Interop.JniMethodInfo method, Java.Interop.JniArgumentValue* args) [0x00069] in :0
12-31 12:19:55.736 W/monodroid( 4651):   at Android.Runtime.JNIEnv.CallStaticObjectMethod (System.IntPtr jclass, System.IntPtr jmethod, Android.Runtime.JValue* parms) [0x0000e] in :0
12-31 12:19:55.736 W/monodroid( 4651):   at Android.Runtime.JNIEnv.CallStaticObjectMethod (System.IntPtr jclass, System.IntPtr jmethod, Android.Runtime.JValue[] parms) [0x00017] in :0
12-31 12:19:55.736 W/monodroid( 4651):   at Android.Runtime.JNIEnv.FindClass (System.String classname) [0x0003d] in :0
12-31 12:19:55.736 W/monodroid( 4651):   at Android.Runtime.JNIEnv.FindClass (System.Type type) [0x00015] in :0
12-31 12:19:55.736 W/monodroid( 4651):   --- End of managed Java.Lang.ClassNotFoundException stack trace ---
12-31 12:19:55.736 W/monodroid( 4651): java.lang.ClassNotFoundException: md5673452a39546f0a71deda1ad064ecc65.AuthenticationAgentActivity
12-31 12:19:55.736 W/monodroid( 4651): 	at java.lang.Class.classForName(Native Method)
12-31 12:19:55.736 W/monodroid( 4651): 	at java.lang.Class.forName(Class.java:400)
12-31 12:19:55.736 W/monodroid( 4651): 	at md5270abb39e60627f0f200893b490a1ade.ButtonRenderer_ButtonClickListener.n_onClick(Native Method)
12-31 12:19:55.737 W/monodroid( 4651): 	at md5270abb39e60627f0f200893b490a1ade.ButtonRenderer_ButtonClickListener.onClick(ButtonRenderer_ButtonClickListener.java:30)
12-31 12:19:55.737 W/monodroid( 4651): 	at android.view.View.performClick(View.java:5637)
12-31 12:19:55.737 W/monodroid( 4651): 	at android.view.View$PerformClick.run(View.java:22429)
12-31 12:19:55.737 W/monodroid( 4651): 	at android.os.Handler.handleCallback(Handler.java:751)
12-31 12:19:55.737 W/monodroid( 4651): 	at android.os.Handler.dispatchMessage(Handler.java:95)
12-31 12:19:55.737 W/monodroid( 4651): 	at android.os.Looper.loop(Looper.java:154)
12-31 12:19:55.737 W/monodroid( 4651): 	at android.app.ActivityThread.main(ActivityThread.java:6119)
12-31 12:19:55.737 W/monodroid( 4651): 	at java.lang.reflect.Method.invoke(Native Method)
12-31 12:19:55.737 W/monodroid( 4651): 	at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:886)
12-31 12:19:55.737 W/monodroid( 4651): 	at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:776)
12-31 12:19:55.737 W/monodroid( 4651): Caused by: java.lang.ClassNotFoundException: Didn't find class "md5673452a39546f0a71deda1ad064ecc65.AuthenticationAgentActivity" on path: DexPathList[[zip file "/data/app/com.devfish.xamarin.ADALForForms-1/base.apk"],nativeLibraryDirectories=[/data/app/com.devfish.xamarin.ADALForForms-1/lib/x86_64, /system/fake-libs64, /data/app/com.devfish.xamarin.ADALForForms-1/base.apk!/lib/x86_64, /system/lib64, /vendor/lib64]]
12-31 12:19:55.737 W/monodroid( 4651): 	at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:56)
12-31 12:19:55.737 W/monodroid( 4651): 	at java.lang.ClassLoader.loadClass(ClassLoader.java:380)
12-31 12:19:55.737 W/monodroid( 4651): 	at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
12-31 12:19:55.737 W/monodroid( 4651): 	... 13 more
12-31 12:19:55.737 W/monodroid( 4651):
12-31 12:19:55.859 E/        ( 4651): 2017-12-31T17:19:55.8588900Z: 1663fe86-17e1-4842-b23e-ca25a43dde8d - AcquireTokenHandlerBase.cs: Microsoft.IdentityModel.Clients.ActiveDirectory.AdalException: authentication_ui_failed: The browser based authentication dialog failed to complete ---> Java.Lang.ClassNotFoundException: md5673452a39546f0a71deda1ad064ecc65.AuthenticationAgentActivity ---> Java.Lang.ClassNotFoundException: Didn't find class "md5673452a39546f0a71deda1ad064ecc65.AuthenticationAgentActivity" on path: DexPathList[[zip file "/data/app/com.devfish.xamarin.ADALForForms-1/base.apk"],nativeLibraryDirectories=[/data/app/com.devfish.xamarin.ADALForForms-1/lib/x86_64, /system/fake-libs64, /data/app/com.devfish.xamarin.ADALForForms-1/base.apk!/lib/x86_64, /system/lib64, /vendor/lib64]]
12-31 12:19:55.859 E/        ( 4651):    --- End of inner exception stack trace ---

Let's bring in the New Year together – Make 2018 Epic, next stop Las Vegas!


Happy New Year!

We know how important it is to connect with the right individuals within Microsoft for your unique business, and this year we are going to make it easier than ever to do just that. We are bringing Microsoft Ready, Microsoft’s largest internal readiness event, to Las Vegas, Nevada during the same week as Microsoft Inspire to bring partners even closer to the center of Microsoft’s innovation. With past attendance of roughly 18,000, Microsoft Inspire will be your first and best opportunity to connect with our top Microsoft employees all in one place.

Join us July 15-19, 2018 in Las Vegas, Nevada for Microsoft Inspire – the largest Microsoft partner event of the year. Microsoft Inspire is where partners, Microsoft personnel and industry experts from around the globe come together for a week of networking and learning.

Microsoft Inspire is about the power of connections. At Microsoft Inspire, you can:

  • Create meaningful relationships with partners, industry leaders, and Microsoft experts from around the globe
  • Get inspired at Vision Keynotes with Microsoft executives, innovators, and IT experts
  • Learn about new ways to accelerate the digital transformation of our shared customers through workshops, panels, and breakout sessions
  • Hear how to align your business with Microsoft’s plans for the next fiscal year and find out why partnering with Microsoft is the best decision for your business

If this is your first time attending, we’ve got you covered. The First-Time Attendee (FTA) program helps FTAs stay on track by providing mentors and extra assistance before, during, and after the event. Visit the Microsoft Inspire website and join the Microsoft Inspire discussion on the Microsoft Partner Community to start connecting, engaging, and collaborating now with partners looking to grow their network.

Take advantage of the early registration price of USD $1995 and register now to secure your spot at Microsoft Inspire. Together, we will transform our shared future as we form powerful connections and realize the potential of the innovative ideas formed from our collaborations. Your path to new opportunity leads to Microsoft Inspire.

The Australian Microsoft Team looks forward to seeing you there!

 

Happy New Year Wishes 2018!



 

It’s once again time to take a quick moment to step back and share my thanks and gratitude with all of you, the fantastic people I have the incredible privilege and honor to connect with, speak to, interact with, and work with around the world in my role here at Microsoft and with our customers and partners everywhere. I am so thankful each and every day to all of you for all you do and the impact you make day in and day out.

Once again this year, my current role keeps me more behind the scenes instead of being the public face/voice I have been in previous roles; however, as I’ve said before, have no doubt that my dedication and drive to deliver the highest value and impact for our partners and customers around the world here at Microsoft have never been higher or stronger.

I truly appreciate all of the, “Thank you’s” and feedback so many of you have shared with me regarding my “Largest FREE Microsoft eBook Giveaway!” post and how you have been using those for yourselves and others. I am thrilled to hear that it has helped so many of you.

I always hope that in some way I am able to give back to this wonderful community that has been such a pleasure to work with and be a part of throughout the years. Once again, I offer my sincerest wishes to you all for a happy New Year in 2018 for you, your friends, family, and loved ones everywhere.

Happy New Year!

 


  Eric Ligman

  Director – Business & Sales Operations
  Microsoft Corporation

  Follow me on: TWITTER, LinkedIn, Facebook

How do I know that Resource Monitor isn’t just retaining a handle to the terminated process?



A short time ago, I explained that Resource Monitor shows information for terminated processes for a little while, so you can see the results before they go away. It's not that Resource Monitor is able to go back in time and see processes that are already dead.



Ben Voight makes the accurate observation that it's possible that Resource Monitor is retaining a handle to the terminated process, and it's that handle that allows it to keep asking questions about the process. A newly-launched Resource Monitor never observed the process before it terminated, and therefore never had a chance to obtain a handle to it.



Interesting theory. Let's test it.



Run Task Manager, Resource Monitor, and Notepad. Close Notepad. It vanishes immediately from Task Manager, which demonstrates that Resource Monitor is not retaining a handle to it. That's because process objects remain present in the kernel until all handles are closed; if the process has terminated, the process object is in a zombie state: You can use the handle to ask questions about the process (like get its exit code and CPU statistics). But as long as the process object exists, it shows up in the Details page of Task Manager.



Notepad disappeared immediately, which means that its process object is gone for good, which means no active handles.



Okay, but maybe Resource Monitor closes the handle once it detects that the process has terminated.



To test that theory, run Task Manager, Resource Monitor, and Notepad. Connect a debugger to Resource Monitor and freeze it. Now close Notepad. It vanishes immediately from Task Manager, which shows that Resource Monitor didn't have a handle to it.



The question came from a customer who saw that when their program terminated, it disappeared immediately from Task Manager and all the process enumeration APIs, but their program still appeared in Resource Monitor like a ghost from beyond the grave, and they wanted to know what sort of otherworldly powers Resource Monitor has that lets it see processes that no longer exist.



Answer: No otherworldly powers. Just a good memory.
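
If you want to watch the zombie-object behavior for yourself, here is a small C# demo, a sketch that assumes Notepad is available on the test machine:

using System;
using System.Diagnostics;

class ZombieHandleDemo
{
    static void Main()
    {
        // Start Notepad and hold on to the Process object, which wraps a process handle.
        Process p = Process.Start("notepad.exe");

        p.Kill();
        p.WaitForExit();

        // The process has terminated, but because we still hold a handle the kernel keeps
        // the process object around in its zombie state, so we can still ask it questions
        // such as its exit code and exit time.
        Console.WriteLine("Exit code: " + p.ExitCode);
        Console.WriteLine("Exited at: " + p.ExitTime);

        // Closing the last handle lets the process object go away for good.
        p.Dispose();
    }
}

While the handle is still open, the terminated Notepad keeps showing up in the Details page of Task Manager, exactly as described above.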

ebook deal of the week: Microsoft Office 365 Administration Inside Out, 2nd Edition


Save 50%! Buy here.

This offer expires on Sunday, January 7 at 7:00 AM GMT.

Conquer Microsoft Office 365 Administration—from the inside out!

Dive into Microsoft Office 365 Administration—and really put your Office 365 expertise to work. This supremely organized reference packs hundreds of timesaving solutions, tips, and workarounds–all you need to plan, implement, and operate Microsoft Office 365 in any environment. In this completely revamped Second Edition, a new author team thoroughly reviews the administration tools and capabilities available in the latest versions of Microsoft Office 365, and also adds extensive new coverage of Azure cloud services and SharePoint. Discover how experts tackle today’s essential tasks–and challenge yourself to new levels of mastery.

Learn more

 

Terms & conditions

Each week, on Sunday at 12:01 AM PST / 7:01 AM GMT, a new eBook is offered for a one-week period. Check back each week for a new deal.

eBook Deal of the Week may not be combined with any other offer and is not redeemable for cash.

MIEE Spotlight- Thomas Lange




Today's MIEE Spotlight focuses on Thomas Lange, Assistant Head in charge of E-Learning at Queen Anne's School in Caversham responsible for technology implementation across the whole school.

Thomas has worked on increasing the use of technology in all areas and making significant productivity improvements in Communication, Administration, Teaching and Learning. Queen Anne's is an Office 365 school moving to the cloud, meaning that work is accessible to their students at all times, anywhere in the world. They have adopted different tools such as Planners, Projects, Forms and Yammer. However, the major drive has been the implementation of OneNote across the school for staff and students, to enhance the organisation of resources and to create immersive learning experiences for all pupils. They have also introduced Kodu and Minecraft Education into the curriculum. Thomas enjoys learning from and collaborating with other educators from the MIEExpert community around the world.

You can follow @_Thomas_Lange on Twitter to keep up to date with the amazing work he is doing with Office 365 at Queen Anne's.


Interact with the Mix video below to see, in his own words, how Thomas is utilising Office 365 at Queen Anne's!


Follow in the footsteps of our fantastic MIEEs and learn more about how Microsoft can transform your classroom with the Microsoft Educator Community.

Moving to a new home…


This is the last blog post that I am going to write as Virtual PC Guy. But do not fear, I am starting a new blog over at american-boffin.com, and all the Virtual PC Guy posts are going to remain intact.

You may be wondering why I am making this change.

Well, there are several reasons.

  • It’s been a long time. I have written 1399 blog posts over 14 years – averaging one new post every other working day. When I started this blog, I had more hair and fewer children.
  • The world has changed. When I started writing as Virtual PC Guy, virtualization was a new and unknown technology. Cloud computing was not even invented yet. It is amazing to think about how far we have come!
  • The scope and impact of my work has drastically increased. When I started blogging, there were a very select group of people who cared about virtualization. Now, between cloud computing, the rise of virtualization and containerization as standard development tools – and the progress we have been making on delivering virtualization based security for all environments – more and more people are affected by my work.
  • I am a manager now. When I started on this blog I was a frontline program manager – and most of my time was spent thinking about and designing technology. I have been a program manager lead for almost a decade now – and while I do still spend a lot of time working with technology – I spend more time working with people.
  • Maintaining multiple blogs is hard. I have tried, from time to time, to start up separate blogs for different parts of my life. But maintaining a blog is a lot of work. Maintaining multiple blogs is just too much work for me.
  • Virtual PC Guy has a very distinctive style. Over the years I have toyed with the idea of switching up the style of Virtual PC Guy – but I have never been able to bring myself to do it.

For all these reasons – I have decided that the best thing to do would be to archive Virtual PC Guy (I have posts that are 10 years old and are still helping people out!) and to start a new blog.

On my new blog – I will still talk about technology – but I will also spend time talking about working with customers, working with people in a corporate environment, and about whatever hobbies I happen to be spending my time on.

I hope you come and join me on my new blog!

Cheers,
Ben

Exploring Big Data: Course 10 – Microsoft Professional Capstone: Big Data


Big Data with Sam Lester

(This is course #10 of my review of the Microsoft Professional Program in Big Data)

Course #10 of 10 – Microsoft Professional Capstone: Big Data

Overview: Course #10 brings the Microsoft Professional Program in Big Data to a close. The capstone project is made up of three sections, each worth 33% of the overall grade even though the complexity and number of questions vary by section. The project is based on one year's worth of weekly sales data for roughly 300 stores, making the dataset a collection of over 15,000 files. As a result, we need to rely on the techniques learned in previous courses to set up blob storage, use Azure Storage Explorer to copy/store files, author U-SQL jobs to process files, set up the data warehouse using the supplied T-SQL script, create linked services to establish the dataset connections and pipelines, and finally load the data into the data warehouse to query for the final answers.

Time Spent / Level of Effort: When I read the first lab exercise instructions, I was a bit nervous about tackling this project. Over the previous nine courses, we’ve covered a LOT of material, much of which is challenging to keep straight. I had completed the 9th course about two weeks before the capstone opened, so I went back to the previous archived capstone and spent about 12 hours going through the old capstone to practice. As it turned out, this was time very well spent. In total I spent roughly 15 hours for this capstone project.

Course Highlight: The highlight of the capstone course was going from reading the original instructions with a hefty dose of confusion to ultimately completing the labs and projects to wrap up the class. It required going back through several of the previous courses, rereading the lab instructions, and watching some videos for a second and third time.

Suggestions: My biggest suggestion for completing the capstone course is to view the previous capstone course to better understand the tasks. For example, I officially completed the capstone course that opened on January 1, 2018, but I spent several hours working on the previous capstone (October 2017). The time I spent on the October course was extremely helpful when the new course opened since I had already completed most of the exercises.

An additional suggestion would be to focus on obtaining the required passing score of 70% prior to the second part of lab 3, where all 15,000+ files need to be loaded and queried. I found this part of the course to be the most challenging, but since I had already scored above 70%, I wasn’t as concerned about getting these questions correct. However, if I’d been below the required 70% at this point, there would have been a lot more pressure to get these questions correct.

Finally, use the tips supplied in the lab exercise notes that inform you of the courses where the material was originally introduced. As mentioned above, much of the material overlaps a bit, so knowing which course to revisit saved a lot of time. Once I went back to the specific courses for review, I found the lab instructions for those courses to be extremely helpful.

Summary: The 10 courses that make up the Microsoft Professional Program in Big Data are an outstanding way to improve your knowledge about the concepts of processing Big Data. Prior to completing this MPP, several of these topics were vaguely familiar to me, but not well enough to teach/explain to others. After going through this program, I have a much better understanding and will continue to work with these technologies.

I hope this blog series has helped you on your journey to improve your Big Data skills. It certainly has helped mine!

Thanks,
Sam Lester (MSFT)


Important changes for developers in 2017


To ring in the new year, here is a list of the most significant changes for developers from the past year. On behalf of the whole MSDN blog team, we wish you much success and look forward to seeing you in the new calendar year 2018.

Retirement of the classic Azure Management portal

The classic Azure portal will be retired on January 8, 2018. Customers who use the classic portal do not need to take any further action; all of your services will be available in the new Azure portal. You can read more about the Azure portal here.

Visual Studio App Center

Visual Studio App Center is now available to all users and lets you automate the application lifecycle across different platforms. Within a few minutes you can build an app from a repository and test it on real devices. You can also distribute the app to beta testers and monitor crashes and other analytics in real time. You can read more here.

.NET Core 2.0 

One of the major milestones of 2017 was the release of .NET Core 2.0. New features include, for example, simplified work with the authentication middleware in ASP.NET Core, and many more. More information can be found here.

Visual Studio 2017

Another important milestone in 2017 was the release of Visual Studio 2017. Besides the new features and improvements in the editor itself, you will surely appreciate the simpler and much cleaner installation/uninstallation compared to previous versions. The list of new features can be found here.

Azure CLI 2.0

Since December, Azure CLI 2.0 has been the primary CLI tool for managing resources based on Azure Resource Manager. CLI 1.0 remains the tool for managing "classic" resources, i.e. the commands available via azure config mode asm. If you use CLI 1.0 to manage resources, you should migrate them to CLI 2.0; a guide on how to do that is here. To run CLI 2.0 you can use, for example, Azure Cloud Shell directly in the Azure Portal.

SQL Operations Studio Preview

Another new product, announced before the end of 2017, is SQL Operations Studio. Like Visual Studio Code, it is an open-source project. The application aims to unify the tools for accessing and managing different databases. It offers, for example, IntelliSense for working with SQL, an integrated terminal similar to VS Code, and many other features. You can get the preview version here.

What's new in IaaS and containers

A year is a long time in the cloud. In this retrospective series I try to look back at the biggest news that 2017 brought. In today's installment I focus on the cornerstone of everything: infrastructure, that is, compute and storage in Azure.

 

For more information about past and upcoming announcements, you can follow the Azure Roadmap.

 

- Matěj Borský, TheNetw.org

Custom Schedule for Azure Web Job Timer Triggers


A CRON expression is a nice way to define the schedule for a Timer Trigger in Azure WebJobs/Function Apps, though sometimes it can be a little tricky.

Understanding and defining a CRON expression can be very tricky for some schedules, and the actual interpretation may not match what you intended to configure. The positive side of defining a CRON expression for timer triggers is that whenever a change is needed it's quick and can be done via the Kudu site.
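
For reference, here is roughly what the CRON-based version of a timer trigger looks like in a WebJob (a minimal sketch; the six-field NCRONTAB expression below simply fires every day at 11:00 AM):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Timers;

public class Functions
{
    // NCRONTAB format: {second} {minute} {hour} {day} {month} {day-of-week}
    public static void CronTimerJobFunction([TimerTrigger("0 0 11 * * *")] TimerInfo timerInfo)
    {
        Console.WriteLine("CronTimerJobFunction ran at : " + DateTime.UtcNow);
    }
}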

The Azure WebJobs SDK has the classes "DailySchedule" and "WeeklySchedule" (NuGet package Microsoft.Azure.WebJobs.Extensions) which are an alternative to CRON expressions and make the schedule definition easier to read and write.

Using these classes, we can configure very tricky and complex schedules as below:

•    Every day, but at different times with no regular interval; let's say 11:00 AM, 01:00 PM, 04:30 PM & 09:15 PM every day (quite tricky)

•    On Tuesday at 11:00 AM, on Thursday at 12:00 PM and on Saturday at 06:00 AM (complex)

Frankly, I didn't even try creating CRON expressions for the above scenarios :); as a developer, it was much easier for me to use these classes and configure the above schedules in code and the configuration file.

Let's talk about the "DailySchedule" and "WeeklySchedule" classes now.

DailySchedule

This is a built-in timer class under the namespace "Microsoft.Azure.WebJobs.Extensions.Timers" and inherits from "TimerSchedule", the base class for timer trigger schedules.

Using the constructor of the "DailySchedule" class we can define the daily schedule. The constructor takes either a collection of time strings or a collection of System.TimeSpan instances.
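
For instance, the "every day at 11:00 AM, 01:00 PM, 04:30 PM & 09:15 PM" schedule mentioned earlier could be expressed with the TimeSpan-based constructor; a small sketch:

using System;
using Microsoft.Azure.WebJobs.Extensions.Timers;

// Runs every day at 11:00 AM, 1:00 PM, 4:30 PM and 9:15 PM.
public class IrregularDailySchedule : DailySchedule
{
    public IrregularDailySchedule()
        : base(new TimeSpan(11, 0, 0), new TimeSpan(13, 0, 0),
               new TimeSpan(16, 30, 0), new TimeSpan(21, 15, 0))
    {
    }
}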

WeeklySchedule

Just like the "DailySchedule" class, this class is also under the namespace "Microsoft.Azure.WebJobs.Extensions.Timers" and inherits from the "TimerSchedule" class. Using the Add method defined in this class we can define the weekly schedule. The Add method takes "DayOfWeek" and "TimeSpan" as its two arguments to define the schedule.

In order to make use of these classes we need to create our own classes that inherit from them and use their constructors/methods to define the schedule. Once the custom class is defined, we pass typeof(OurCustomClass) to the TimerTrigger attribute.

Let's take a look at examples of using these classes to make this clearer.

Example 1. DailySchedule.

//Class for custom schedule daily.

public class CustomScheduleDaily : DailySchedule

{
     // calling the base class constructor to supply the schedule for every day. This can be a collection of strings or timespan itself.
     public CustomScheduleDaily() : base("18:06:00", "18:07:00")
     {
        
     }

}

// This function will get triggered/executed on a custom schedule on daily basis.

public static void CustomTimerJobFunctionDaily([TimerTrigger(typeof(CustomScheduleDaily))] TimerInfo timerInfo)

{
     Console.WriteLine("CustomTimerJobFunctionDaily ran at : " + DateTime.UtcNow);

}

In the code above, "CustomScheduleDaily" is a custom class inheriting the predefined "DailySchedule" class. In the constructor of the "CustomScheduleDaily" class we call the base class constructor (: base("18:06:00", "18:07:00")) to define the schedule. In this example we have passed a collection of strings. We can also create TimeSpan instances and pass them to the base class constructor.

Once the custom class "CustomScheduleDaily" is defined, we can use typeof(CustomScheduleDaily) and add it to the TimerTrigger of the function (CustomTimerJobFunctionDaily) which we want to run on a schedule in the WebJob.

In the current example the web job will run every day at "18:06:00" & "18:07:00".

Below is how it looks on the Azure Web Job Dashboard.


Example 2. WeeklySchedule

//Class for custom schedule weekly
         public class CustomScheduleWeekly : WeeklySchedule
         {
             public CustomScheduleWeekly()
             {
                 TimeSpan ts = new TimeSpan(6,15,15);
                 //Calling the Add(day, time) method of WeeklySchedule class to add the weekday and time on that weekday to run the webjob.
                 Add(DayOfWeek.Monday, ts);
                 ts= new TimeSpan(9, 15, 15);               
                 Add(DayOfWeek.Tuesday, ts);
                 ts = new TimeSpan(12, 15, 15);
                 Add(DayOfWeek.Wednesday, ts);
                 ts = new TimeSpan(15, 15, 15);
                 Add(DayOfWeek.Thursday, ts);
                 ts = new TimeSpan(18, 15, 15);
                 Add(DayOfWeek.Friday, ts);
                 ts = new TimeSpan(21, 15, 15);
                 Add(DayOfWeek.Saturday, ts);               
             }
         }

              // This function will get triggered/executed on a custom schedule on weekly basis.
        public static void CustomTimerJobFunctionWeekly([TimerTrigger(typeof(CustomScheduleWeekly))] TimerInfo timerInfo)
         {
             Console.WriteLine("CustomTimerJobFunctionWeekly ran at : " + DateTime.UtcNow);
         }

CustomScheduleWeekly has an implementation similar to CustomScheduleDaily. It inherits from the WeeklySchedule class, so in the constructor we can add the day of the week and the time on that particular day when we want to run the WebJob. This way we can easily define different timings on different days.

Once the schedule is defined we need to assign the typeof(CustomScheduleWeekly) to the TimerTrigger attribute of the function we want to run in the web job.

In this example the weekly schedule runs from Monday to Saturday, but the timing is different every day. We can also add multiple TimeSpans for the same DayOfWeek, so that the job runs multiple times on that particular day as per the times defined (see the small sketch below).
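
For example, a sketch of a weekly schedule that runs twice on Mondays:

using System;
using Microsoft.Azure.WebJobs.Extensions.Timers;

public class TwiceOnMondaySchedule : WeeklySchedule
{
    public TwiceOnMondaySchedule()
    {
        // Two runs on Monday: 9:00 AM and 5:30 PM.
        Add(DayOfWeek.Monday, new TimeSpan(9, 0, 0));
        Add(DayOfWeek.Monday, new TimeSpan(17, 30, 0));
    }
}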

One limitation of this approach is that we are defining the schedule in code, which means that if we change the schedule we need to recompile the Azure WebJob and redeploy it to Azure every time.

Below is how it looks on the Azure Web Job Dashboard.


One way to improve this solution is to read the days and times from the app.config file in the WebJob code. A complete sample has been uploaded to the GitHub repo (TimerTriggerWebJobCustomSchedule) at https://github.com/amitxagarwal/Azure-Webjobs-Samples . An example of the custom weekly class getting its schedule from the app.config file is shown below.

public CustomScheduleWeekly()
             {
                 TimeSpan ts = new TimeSpan();
                 string[] values = null;
                 //Iterating through complete appsettings section to get the schedule for all the days and adding the schedule to trigger.
                 foreach (String key in ConfigurationManager.AppSettings.Keys)
                 {
                     if (ConfigurationManager.AppSettings[key] !=null)
                     {
                         string val = ConfigurationManager.AppSettings[key];
                         values = val.Split('|');
                     }


                    switch (key)
                     {
                         //Calling the Add(day, time) method of WeeklySchedule class to add the weekday and time.
                         case "Mon":
                             foreach(string val in values)
                             {
                                 ts = DateTime.Parse(val).TimeOfDay;
                                 Add(DayOfWeek.Monday, ts);
                             }                           
                             break;
                         case "Tue":
                             foreach (string val in values)
                             {
                                 ts = DateTime.Parse(val).TimeOfDay;
                                 Add(DayOfWeek.Tuesday, ts);
                             }
                             break;
                         case "Wed":
                             foreach (string val in values)
                             {
                                 ts = DateTime.Parse(val).TimeOfDay;
                                 Add(DayOfWeek.Wednesday, ts);
                             }
                             break;
                         case "Thu":
                             foreach (string val in values)
                             {
                                 ts = DateTime.Parse(val).TimeOfDay;
                                 Add(DayOfWeek.Thursday, ts);
                             }
                             break;
                         case "Fri":
                             foreach (string val in values)
                             {
                                 ts = DateTime.Parse(val).TimeOfDay;
                                 Add(DayOfWeek.Friday, ts);
                             }
                             break;
                         case "Sat":
                             foreach (string val in values)
                             {
                                 ts = DateTime.Parse(val).TimeOfDay;
                                 Add(DayOfWeek.Saturday, ts);
                             }
                             break;
                         case "Sun":
                             foreach (string val in values)
                             {
                                 ts = DateTime.Parse(val).TimeOfDay;
                                 Add(DayOfWeek.Sunday, ts);
                             }
                             break;
                     }


                }               
               
             }

Here is the appsettings section from app.config.

<appSettings>

    <add key="Mon" value="08:11:20|09:24:20|09:28:20"/>   

    <add key="Tue" value="09:19:40"/>

    <add key="Wed" value="09:15:40"/>

    <add key="Thu" value="09:15:40"/>

    <add key="Fri" value="09:15:40"/>

    <add key="Sat" value="09:15:40"/>

    <add key="Sun" value="09:15:40"/>

</appSettings>

This is just another easy way of defining the schedule programmatically. CRON expressions are still the crispest way of defining a schedule, tricky as they may be.

Win Big with Next Up Azure Exam Camp Training


Build your expertise and your career

Whether you’re new to Azure or already a cloud professional, training is one of the best investments you can make in your career. Enrich your technical skills with deep, hands-on Azure online training, and have your expertise recognised by earning an Azure certification.
We know that not everybody learns in the same way. That’s why we have created Next Up Exam Camps, with a number of exams to choose from, giving you a clear pathway to upgrade your skills and remain competitive.

THE MORE EXAMS YOU DO, THE MORE CHANCES YOU HAVE TO WIN!

Eligible Microsoft Azure MCP exams include:

My Backups are failing, Let’s open a support ticket


Actually wait!

A big percentage of backup failures (more than 42%) can be fixed in minutes and, more importantly, without even opening a support ticket.

Step 1: Figure out what the failure is.

From the backup blade, click on the failing backup and check the Log Details.

Step 2: Check the following table.

  • Error: Storage access failed. {0}
    Fix: Delete the backup schedule and reconfigure it.
  • Error: The website + database size exceeds the {0} GB limit for backups. Your content size is {1} GB.
    Fix: Use a backup.filter file to exclude some files from the backup, or remove the database portion of the backup and use externally offered backups instead: https://aka.ms/partial-backup
  • Error: Error occurred while connecting to the database {0} on server {1}: Authentication to host '{1}' for user '<username>' using method 'mysql_native_password' failed with message: Unknown database '<db name>'
    Fix: Update the database connection string.
  • Error: Cannot resolve {0}. {1} (CannotResolveStorageAccount)
    Fix: Delete the backup schedule and reconfigure it.
  • Error: Login failed for user '{0}'.
    Fix: Update the database connection string.
  • Error: Create Database copy of {0} ({1}) threw an exception. Could not create Database copy.
    Fix: Use an admin user in the connection string.
  • Error: The server principal "<name>" is not able to access the database "master" under the current security context. Cannot open database "master" requested by the login. The login failed. Login failed for user '<name>'.
    Fix: Use an admin user in the connection string.
  • Error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
    Fix: Check that the connection string is valid, and whitelist the site's outbound IPs in the database server settings.
  • Error: Cannot open server "<name>" requested by the login. The login failed.
    Fix: Check that the connection string is valid.
  • Error: Missing mandatory parameters for valid Shared Access Signature
    Fix: Delete the backup schedule and reconfigure it.


Step 3: The failure is not in the table.

Well... please open a support ticket.

Unlocking Creativity with Paint 3D


Find your inspiration with Remix3D.com Community

If you have a Windows 10 device, you may find that your Paint app has been given a dramatic makeover. Manipulate shape and colour in a whole new dimension with the ease and simplicity of the Paint we know and love. You can even change a 2D image into a 3D object. Start by creating objects, which you can rotate and see from every side, then add a splash of colour with the new and improved brushes, such as transparent watercolour or oil brushes so real that you can see the paint smudging.

Check out the introduction video to help get you started: 


Paint 3D allows you to easily pull from 3D art in Remix3D.com’s growing catalog for you to customize, edit, and share. Remix3D.com is your online home for 3D content and community. Connect with other creators, showcase your own creations and get inspired.

Pull 3D art from Remix3D.com, like this patch of patch


Realistic Textures

Add fantastic new textures or materials such as sparkling gold, stained hardwood, or soft grass to your creations and watch them pop! Make any object come to life.

Add realistic textures to any object to make them come to life


Unique MEME Creator

Personalize shapes with 3D stickers or even wrap your own photos around objects. Stickers automatically wrap around 3D objects and contour to fit. Have fun applying pictures of friends to 3D figurines and emojis.

Add stickers that automatically contour to fit 3D objects


Bring your ideas to life with 3D Doodle

Doodle like never before. In the new Paint, your doodles will jump into 3D. You can even take your drawings for a spin and view them from every side. Let your imagination break free.

Use 3D Doodle to draw puffy clouds


Jump In

Paint can do a lot more than it used to. These are just five of our favorite features, but with a little digging you’ll discover tons more. Jump in and explore.

To get started, sign up on Remix3D.com and download the Paint 3D Preview.

High Availability Team Foundation Server (TFS) in Azure


A few months ago, I published a template for Deploying Team Foundation Server (TFS) in Azure. Since that approach has particular relevance for Azure Government users, the topic was also picked up by the AzureGov blog. The previous template shows you how to get off the ground pretty quickly with TFS in Azure, but some organizations are very sensitive to downtime and would want a high-availability (HA) deployment of TFS. I recently showed how to deploy SQL Server (2016/2017) Always On in Azure. A natural extension of this effort is, of course, to demonstrate an HA deployment of TFS in Azure using the SQL Server Always On template and multiple load-balanced application-layer TFS instances.

I have assembled a couple of templates that do just that; they can be found on GitHub. First, a TFS HA template for deploying the TFS application with multiple VM instances and a load balancer. Second, an HA DevOps template that combines all the components (network, domain controllers, SQL Always On, and TFS). The end result looks like this:

 

There is obviously quite a bit of stuff in this deployment and you should be aware that it will take on the order of 2-3 hours for a complete deployment.

You should be able to deploy using the usual "Deploy to Azure" buttons, but since TFS establishes a web front end, it is recommended that you provide an SSL certificate for it. Moreover, domain credentials are needed throughout the deployment to join VMs, etc., and it is good practice to store these credentials in Azure Key Vault. To help with that process, I have included a convenience script which you can use to set up the Key Vault and store all the details in it. The script will then create a template parameter JSON file that you can use for deployment. To use this script, you could call it with:

.\PrepareDevnetTfsDeployment.ps1 -DomainName contoso.us `
-AdminUsername EnterpriseAdmin `
-KeyVaultResourceGroupName mykvgrp `
-KeyVaultName uniquekvname `
-CertificatePath '<PATH TO PFX FILE>' -Location usgovvirginia

You will then be prompted for two passwords; one is the domain admin password and the other is the certificate password. After running the script, a Key Vault will have been created and the parameters stored in a JSON file that will look something like:

{
    "adminPassword": {
        "reference": {
            "keyvault": {
                "id": "/subscriptions/XXXXXXXXXXXXX/resourceGroups/keyvault15/providers/Microsoft.KeyVault/vaults/mihansenkv15"
            },
            "secretName": "DomainAdminPassword"
        }
    },
    "secrets": {
        "value": [
            {
                "vaultCertificates": [
                    {
                        "certificateUrl": "https://XXXXXXX.vault.usgovcloudapi.net:443/secrets/SslCert/XXXXXXXXXXXXXXX",
                        "certificateStore": "My"
                    }
                ],
                "sourceVault": {
                    "id": "/subscriptions/XXXXXXXXXX/resourceGroups/keyvault15/providers/Microsoft.KeyVault/vaults/XXXXXXXXXX"
                }
            }
        ]
    },
    "adminUsername": {
        "value": "EnterpriseAdmin"
    },
    "sslThumbPrint": {
        "value": "07XXXXXXXXXXXXXXXXXXXXFE9DB79"
    },
    "domainName": {
        "value": "contoso.us"
    }
}

You can either use this file directly with the New-AzureRmResourceGroupDeployment command or simply copy and paste the parameters into the portal if you deploy that way. You can of course use other tools of your choice to upload the secrets. The script is just meant as help and documentation for assembling the required information.

The main purpose of this template deployment is to make it easier to get started with TFS in Azure. You will probably need to make some adjustments for your specific deployment, but it can be a place to start. The Powershell DSC scripts used in the deployment also provide a way of documenting the needed installation steps. Do feel free to use pieces of the code in your projects and please contribute back (with pull requests on GitHub) if you find bugs/problems or see a chance for improvement.

Let me know if you have questions/comments.

Store submission error on Edge extension with a Desktop Bridge component


The error message (listed below) will occur on Store submission if you use Visual Studio 2017 (pre-Update 5, version 15.4 or older) to generate the .appxupload file as outlined in the documentation (https://docs.microsoft.com/en-us/microsoft-edge/extensions/guides/native-messaging).

"Package acceptance validation error: Apps converted with the Desktop Bridge and that require the .NET Native framework must be pre-compiled by the .NET Native tool chain.”

The workaround listed below will resolve this issue.

  1. Generate the package with Visual Studio. This will produce both an .appxupload file and a folder with the _Test suffix.
  2. Ignore the .appxupload file produced by Visual Studio 2017 version 15.4.
  3. Create a new zip file and add the four files (listed below) from the _Test folder created in step 1:
    • the 3 .appxsym files
    • the .appxbundle file
  4. Rename the zip by changing the extension from .zip to .appxupload.
  5. Upload this separately created .appxupload file to the Store.

This issue is reported to have been fixed in Update 5 for Visual Studio 2017 (https://www.visualstudio.com/en-us/news/releasenotes/vs2017-relnotes).


Deploying Your Dockerized Angular Application To Azure Using VSTS


Introduction

In my previous post I showed you how to deploy your Angular application to Azure using Visual Studio Team Services (VSTS). While VSTS made it extremely easy to build a CI/CD pipeline, one aspect always proves challenging: consistency between the development environment and the production environment. For example, when you develop your Angular application locally it is served by the webpack dev server, whereas when you host it on Azure it is served using IIS. In this post I will show you how you can use Docker containers to get the same environment on both the development and production machines while automating the whole process using VSTS.

I am going to assume that you are developing on a Windows 10 machine, so you will need to install Docker for Windows to be able to run Docker containers locally. At the time of writing this blog post the latest version of the Angular CLI was 1.6.2 and the latest version of Angular was 5.1.2. I will also assume that you have already created an Angular application using the Angular CLI.

Building the Docker Container

We will start by tying Docker into the development environment. If you are using VS Code (it's no secret by now that it's my favorite code editor) I highly recommend installing an extension called Docker that is maintained by Microsoft. It adds the necessary commands to VS Code to add a Dockerfile to your existing Angular project as follows:

Once the Dockerfile is added to your Angular application it's time to add the necessary commands to assemble a Docker image, which will be used to create Docker containers that run on both the development machine and the production server. We will assume that nginx is used as the web server. The Dockerfile builds an image based on the nginx image and copies the dist folder that is generated by the Angular build process into the specified directory inside the image.

The beauty of the docker image here is that we don’t have to burden ourselves with setting up an nginx server on either the development or production machines. It just works!!! Finally, we are going to expose the web server at port 80 (this is internal to the image and not accessible to the outside world, but more on this later).

If you want to test the above setup locally on your dev machine before you integrate it into your CI/CD pipeline you can build the angular application using the following command:

ng build --prod

and then build the docker image using the following command:

docker build -t my-angular-app .

which would result in the my-angular-app image being created. Notice how the nginx image was automatically downloaded since it was utilized as the base image for the my-angular-app image that we created.

Once the image is created you can run instances of the image (aka containers) from the newly built image using the following command:

docker run -p 3000:80 -it my-angular-app

Notice that we had to specify an external port number that maps to the internal port number (80 in this case) in order to give access to the outside world, as port 80 is only accessible inside the container. You can navigate to http://localhost:3000, which should serve the Angular application from within the Docker container.

Building a CI/CD Pipeline on VSTS

At this point we are ready to automate this process as part of our CI/CD pipeline. Our CI/CD pipeline will include four different tasks as shown in the figure below. Notice that we are using a Hosted Linux agent (still in preview at the time of writing this post) which will have native support for docker.

The first task will install the required npm packages.

The second task will build the application using the Angular-CLI which will place the bundled code under a folder called dist. Here the assumption is that you have modified your package.json scripts section to include a build-prod script.

 

The third and fourth tasks will build the Docker image and push it to Azure Container Registry (ACR), respectively. Note that I could have used Docker Hub to store the Docker image, but I opted for ACR as I love the idea of reducing network latency and eliminating ingress/egress charges by keeping my Docker registry in the same data center as my deployments. In addition, I wanted to store my image inside a private repository, which ACR allows me to do as well.

To create an ACR you can follow the steps below.

  1. Sign into your Azure Account at https://portal.azure.com.
  2. In the Azure Portal, choose New, Containers, then choose Azure Container Registry.
  3. Enter a Registry name, Resource Group, and select a Location.
  4. For Admin user, choose Enable and then choose Create.
  5. Wait for the Azure Container Registry deployment to finish.

Task 3 and 4 below are utilizing an ACR that I created called waelscontainerregistry.

At this point the CI/CD pipeline will create a new docker image every time the code is checked in. Notice that I am using the build number as part of the image name to differentiate the different images that are resulting from different builds. The image below shows my ACR repository

Now that the image is stored inside ACR we will need to set up continuous deployment of our Docker-enabled app to an Azure web app. Start by creating an Azure web app to host a container. This can be achieved in Azure by creating a "Web App for Containers" as shown below. Notice that I am pointing my web app to the ACR repository that I created in the previous step. At this point you may be thinking that I am hard-coding my app to use an image with a specific tag number. Don't worry about it, as I will override that later on when I push the Docker container from within VSTS.

The final step involves creating a release definition on VSTS that will deploy to the Azure App Service we created above. Follow these steps to create a release definition:

    1. In the Build & Release hub, open the build summary for your build.
    2. In the build summary page, choose the Release icon to start a new release definition.
    3. Select the Azure App Service Deployment task and choose Apply.
    4. Configure the properties as follows:

The image below shows your completed release definition.

Checking in your code should now trigger your build pipeline which generates a docker container that gets stored inside your ACR. The release management pipeline will pick up the docker image and publish it to your Azure App Service. Both your development as well as your production environment are running the same docker container which is utilizing nginx web server to serve your Angular application. This is a huge benefit as you don't have to deal with unexpected behaviors due to discrepancies in the hosting environments.

Azure CosmosDB in Banking Sector


Happy New Year 2018 !!!

The other day I was reading about the change feed feature of Azure Cosmos DB and thought about the possibilities it opens up. That prompted me to test a scenario we deal with in our day-to-day life: getting a notification for any activity on a bank account.

Use case

Typically, in the banking sector, a bank keeps account holders' information stored in its database and provides various ways to spend (through a debit card), withdraw (through an ATM), transfer money (online or via a mobile app) and deposit (any channel). The balance gets adjusted based on the transaction type. At the end of a successful transaction, as a bank customer we get an SMS/email about the transaction that took place on our account. This keeps the customer up to date on any transaction happening on their account, and if a fraudulent transaction takes place they can act.

With that thought, I am going to write about how to build this solution using Azure and give some information about the services involved.

Technologies Involved

Azure CosmosDB:- This is the heart of this solution. Azure Cosmos DB is a multi-model, geo-replicated database. It provides various APIs like SQL, MongoDB, Cassandra and Graph, and makes sure core features like security, multi-region replication, consistency, partitioning, scalability, indexing and log monitoring apply to all of them. Azure Cosmos DB doesn't force you to learn new tools but gives developers the ability to continue using existing tools; for example, a developer working with the Graph API can still use the Gremlin console to interact with it.

Change Feed: - Once this feature is used, it creates a lease collection. This collection keeps track of the changes happening on the monitored collection, storing changes in the order they happen. All inserts and updates get captured; deletes, however, require a workaround such as adding an additional property to mark a document as deleted. The change feed can be accessed by an Azure Function (which we'll cover in this blog), the Azure Cosmos DB SDK, or the Azure Cosmos DB Change Feed Processor library. Below is an example of a document stored in the lease collection.


For more information about Change Feed feature please check here.

Azure Functions: - Based on a serverless architecture, Azure Functions make it easy to run small pieces of code without worrying about setting up infrastructure, scheduling jobs, security, multitasking, etc. One can quickly build a function and run it at any scale required. This architecture can be used in various domains like IoT, retail, finance, etc. For more features and capabilities of Azure Functions please refer here. Before a production deployment, please check here to make sure the programming language that you are planning to use (.NET, JS, Java, etc.) is in GA and not experimental.

PowerBI: - Power BI is the presentation layer. It gives users a visualization capability to make sense of different datasets and the computations done on them; a dashboard tells the story of the data it presents. Microsoft Power BI has evolved a lot since I started working with it: it covers many different verticals and audiences, and the ability to develop once and render on any device makes it even more powerful. For more information please refer here.

Architecture

clip_image004

Prerequisites

  • Active Azure Subscription
  • Visual Studio 2017

Setting up Azure CosmosDB account

clip_image006

  • Once the page is open, enter an ID
  • Select API as SQL
  • Select Subscription
  • Create/Select Resource Group
  • Select Location
  • Select Pin to Dashboard
  • Click Create

             clip_image008

  • Once the page is open, click Overview
  • Click Add Collection

clip_image009

  • Let's create a database and a collection to store the account master information. Provide the Database id
  • Provide Collection Id
  • Select Storage Capacity as Fixed (10 GB)
  • Click OK

             clip_image011

  • Once the collection is created successfully, click New Collection

clip_image013

 

  • Enter Database id

clip_image015

  • Enter Collection Id
  • Select Storage capacity
  • Select Throughput
  • Click OK
  • Once the database and collections are created, the screen will look like below (a scripted alternative using the .NET SDK is sketched after the screenshots below)

clip_image017

 

  • Click Close (X)

clip_image019
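If you prefer to script these steps instead of clicking through the portal, the same database and collections can also be created with the Cosmos DB .NET SDK (the same Microsoft.Azure.DocumentDB.Core package used later in this post). Treat the snippet below as a rough sketch rather than part of the original sample: the database id "BankDB" is a placeholder, while SummaryInfo and DetailInfo are the collection names used in this walkthrough.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class CosmosSetup
{
    // Copy these from the Keys blade of your Cosmos DB account.
    private const string EndpointUrl = "https://<your-account>.documents.azure.com:443/";
    private const string AuthorizationKey = "<your-primary-key>";

    public static async Task CreateDatabaseAndCollectionsAsync()
    {
        using (var client = new DocumentClient(new Uri(EndpointUrl), AuthorizationKey))
        {
            // Create the database if it does not already exist ("BankDB" is a placeholder id).
            await client.CreateDatabaseIfNotExistsAsync(new Database { Id = "BankDB" });

            // Account master collection.
            await client.CreateDocumentCollectionIfNotExistsAsync(
                UriFactory.CreateDatabaseUri("BankDB"),
                new DocumentCollection { Id = "SummaryInfo" },
                new RequestOptions { OfferThroughput = 400 });

            // Transaction detail collection, monitored later by the change feed.
            await client.CreateDocumentCollectionIfNotExistsAsync(
                UriFactory.CreateDatabaseUri("BankDB"),
                new DocumentCollection { Id = "DetailInfo" },
                new RequestOptions { OfferThroughput = 400 });
        }
    }
}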

 

Adding Account Master Details

  • Go to https://github.com/rawatsudhir1/AzureCosmosDBChangeFeedUseCase and either clone the repo or download it as zip
  • In the repo under SupportingFiles, open the CustomerAccountInfo.txt file. This file has some records, or JSON documents (containing customer account master information), which we will upload through the portal. The other approach is to create a web app or mobile app (with proper security enabled) to send this data; a small SDK-based upload sketch appears at the end of this section.

clip_image021

*Note: I am not following any particular data modeling technique here. Generally, when you store data in Azure Cosmos DB, decide whether the application will be read-heavy or update-heavy and model accordingly.

  • Copy the first record from the file (lines 1 to 8, as per the image above)
  • Go to the Azure Cosmos DB account created in the earlier step.
  • Click Data Explorer

clip_image023

  • Click SummaryInfo
  • Click Documents
  • Click New Document
  • Paste the record copied from GitHub in the earlier step

clip_image025

 

  • Click Save
  • Repeat the steps to add more records (copy from the GitHub repo)
  • After adding all six records, here is how the screen looks

clip_image027

 

  • Close (X) Data Explorer
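As an alternative to pasting each record into Data Explorer, the account master documents can also be uploaded with a few lines of SDK code. This sketch assumes each record from CustomerAccountInfo.txt has been saved to its own .json file; the endpoint, key, and "BankDB" database id are placeholders, as in the earlier sketch.

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;
using Newtonsoft.Json.Linq;

class AccountMasterLoader
{
    // Copy these from the Keys blade of your Cosmos DB account.
    private const string EndpointUrl = "https://<your-account>.documents.azure.com:443/";
    private const string AuthorizationKey = "<your-primary-key>";

    // folder contains one JSON file per account master record,
    // e.g. account1.json ... account6.json split out of CustomerAccountInfo.txt.
    public static async Task LoadAsync(string folder)
    {
        var collectionUri = UriFactory.CreateDocumentCollectionUri("BankDB", "SummaryInfo");

        using (var client = new DocumentClient(new Uri(EndpointUrl), AuthorizationKey))
        {
            foreach (var file in Directory.GetFiles(folder, "*.json"))
            {
                var document = JObject.Parse(File.ReadAllText(file));
                await client.CreateDocumentAsync(collectionUri, document);
                Console.WriteLine($"Uploaded {Path.GetFileName(file)}");
            }
        }
    }
}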

Before recording deposit or expense details in the DetailInfo collection, let's first build out the Azure Function logic.

Building Azure function logic

  • Click Create a Resource
  • Click Compute
  • Click Function App

         clip_image029

 

  • Once the blade is open, enter an App name
  • Select Subscription
  • Create new or Use existing Resource Group
  • Select Windows as OS
  • Select Consumption Plan as Hosting Plan
  • Select Location (Select the same region where Azure CosmosDB is created)
  • Leave the default values for Storage
  • Click Pin to dashboard
  • Click Create

clip_image031  

 

  • Once the Function App is created, click on New (+)

            clip_image033

 

  • Click Custom Function

clip_image035

 

  • Under Choose a template, enter cosmos in the search box

clip_image037

 

  • Select Cosmos DB trigger
  • On New Function, select C# as the language
  • Provide a name for the function
  • On Azure Cosmos DB account connection, click new and select the Cosmos DB account created earlier
  • Provide the Collection name
  • Provide the Database name

clip_image039

 

  • Leave Collection name for leases as it is
  • Click Create
  • Once the function is created, click on the function app and then Application settings

clip_image041

 

  • Under Application settings, add two variables, endpointUrl and authorizationKey. These variables hold the values needed to connect to Cosmos DB to retrieve and update the bank account master collection. Copy the values for both variables from the Cosmos DB account under the Keys section.
  • Add endpointUrl and authorizationKey with their values and save. This is how it will look (a sketch showing how the function reads these settings follows the screenshots below).

         clip_image043

 

clip_image045
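Inside the function, these two application settings can be read like any other app setting and used to build a DocumentClient for querying and updating the account master collection. A minimal C# script sketch, assuming the setting names added above:

#r "Microsoft.Azure.Documents.Client"

using System;
using Microsoft.Azure.Documents.Client;

// Application settings on the Function App surface as environment variables.
static readonly string endpointUrl = Environment.GetEnvironmentVariable("endpointUrl");
static readonly string authorizationKey = Environment.GetEnvironmentVariable("authorizationKey");

// Reused across invocations to avoid creating a new client on every change.
static readonly DocumentClient client = new DocumentClient(new Uri(endpointUrl), authorizationKey);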

 

The Azure Function is set up. Let's move to the next step.

Post transaction

  • Open Visual Studio 2017, create a console application
  • Copy the code from here (Program.cs) and paste it in.
  • Include the Microsoft.Azure.DocumentDB.Core and Newtonsoft.Json NuGet packages.
  • Add endpointUrl and authorizationKey as shown below (a rough sketch of the console application follows the screenshots).

          clip_image047

 

  • Press F5 to run the console application. This posts a transaction to Azure Cosmos DB

          clip_image049
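If you want a feel for what the console application does before opening Program.cs, here is a stripped-down sketch. The shape of the transaction document (accountId, transactionType, amount) is an assumption for illustration only; the authoritative version is the Program.cs in the linked repo, and "BankDB"/"DetailInfo" are the placeholder names used earlier.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

class Program
{
    // Copy these from the Keys blade of your Cosmos DB account.
    private const string EndpointUrl = "https://<your-account>.documents.azure.com:443/";
    private const string AuthorizationKey = "<your-primary-key>";

    static void Main(string[] args) => PostTransactionAsync().GetAwaiter().GetResult();

    // Writes one transaction document into the DetailInfo collection; the Cosmos DB
    // trigger created earlier picks it up from the change feed.
    static async Task PostTransactionAsync()
    {
        using (var client = new DocumentClient(new Uri(EndpointUrl), AuthorizationKey))
        {
            var transaction = new
            {
                id = Guid.NewGuid().ToString(),
                accountId = "1001",         // hypothetical account number
                transactionType = "Debit",  // hypothetical field names
                amount = 250.00
            };

            await client.CreateDocumentAsync(
                UriFactory.CreateDocumentCollectionUri("BankDB", "DetailInfo"),
                transaction);

            Console.WriteLine("Transaction posted.");
        }
    }
}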

 

Output at Azure Function

  • Switch to Azure portal
  • Look at the Logs window in the function

clip_image051

 

Send Notification to Bank user

  • Please make sure to set up a Twilio test account to send SMS notifications.
  • Switch to the Azure portal and open the Azure Function
  • Click Integrate
  • Under Outputs, click New Output

clip_image053

 

  • Select Twilio SMS and click Select

clip_image055

 

  • Keep message as the Message parameter name
  • Keep acctsid as the Account SID setting. The acctsid variable will be defined under Application settings and holds the value of the Account SID (from the Twilio dashboard)
  • Keep authtoken as the Auth Token setting. The authtoken variable will be defined under Application settings and holds the value of the authentication token (from the Twilio dashboard)
  • Provide the To number. Make sure this phone number is verified in Twilio
  • Provide the From number. This information is available in the Twilio dashboard. If you have just created the Twilio account, it provides a number you can use for some time.

clip_image057

 

  • Let's add the variables in Application settings. Click the function app and then Application settings

clip_image059

 

  • Add acctsid and value (Account SID from Twilio Dashboard)
  • Add authtoken and value (Authorization Token from Twilio Dashboard)

clip_image061

 

  • Click Save
  • Copy the code from GitHub and paste it into the function editor (a rough approximation of this function is shown right after this list)
  • Click Save. Make sure there are no errors in the log
  • Run the console program to post a transaction
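The sketch below is a rough approximation of such a function, not the repo's code. It assumes the Functions v1 Twilio output binding, which surfaces as a Twilio.SMSMessage out parameter; the To/From numbers and the acctsid/authtoken credentials come from the binding configured above, and the document field names are the same hypothetical ones used in the console sketch.

#r "Microsoft.Azure.Documents.Client"
#r "Twilio.Api"

using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Twilio;

// For every batch of changed documents in DetailInfo, send an SMS summarizing
// the first transaction via the Twilio output binding.
public static void Run(IReadOnlyList<Document> input, out SMSMessage message, TraceWriter log)
{
    message = null;

    if (input != null && input.Count > 0)
    {
        var doc = input[0];

        // Field names are assumptions for illustration; the real schema is defined
        // by the console application in the repo.
        var type = doc.GetPropertyValue<string>("transactionType");
        var amount = doc.GetPropertyValue<double>("amount");

        log.Info($"Processing {type} of {amount}");

        // To/From default to the values configured on the Twilio output binding.
        message = new SMSMessage { Body = $"A {type} of {amount} was posted to your account." };
    }
}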

clip_image063

 

  • Once it runs successfully, an SMS is sent to the mobile number. If it is not received, make sure the defined number is not in DND (Do Not Disturb) mode.

Building PowerBI Report

  • Follow here to build a PowerBI dashboard.

Summary

In this blog, we built a use case to showcase how Azure Cosmos DB can be used in the banking sector. However, there are other scenarios that can be built using this feature, such as creating requests to vendors once an order is received (retail domain), taking immediate action on bad feedback (service domain), alerting when a new quotation request is received (insurance domain), and so on.

Thanks for reading and I hope you liked it.

Thanks to my colleague Gandhali for her suggestions.

Eat Healthy, Stay Fit and Keep Learning.

Azure HDInsight Performance Insights: Interactive Query, Spark and Presto


Cross post from https://azure.microsoft.com/en-us/blog/hdinsight-interactive-query-performance-benchmarks-and-integration-with-power-bi-direct-query/

Fast SQL query processing at scale is often a key consideration for our customers. In this blog post we compare HDInsight Interactive Query, Spark, and Presto using the industry standard TPCDS benchmarks. These benchmarks are run using out of the box default HDInsight configurations, with no special optimizations. For customers wanting to run these benchmarks, please follow the easy to use steps outlined on GitHub.

Summary of the results

  • HDInsight Interactive Query is faster than Spark.
  • HDInsight Spark is faster than Presto.
  • Text caching in Interactive Query, without converting data to ORC or Parquet, is equivalent to warm Spark performance.
  • Interactive Query is the most suitable for large-scale data, as it was the only engine that could run all 99 TPCDS queries without any modifications at 100 TB scale.
  • Interactive Query performs well with high concurrency.

About TPCDS

The TPC Benchmark DS (TPC-DS) is a decision support benchmark that models several generally applicable aspects of a decision support system, including queries and data maintenance. According to TPCDS, the benchmark provides a representative evaluation of performance as a general purpose decision support system. A benchmark result measures query response time in single user mode, query throughput in multi-user mode and data maintenance performance for a given hardware, operating system, and data processing system configuration under a controlled, complex, and multi-user decision support workload. The purpose of TPC benchmarks is to provide relevant, objective performance data to industry users. TPC-DS Version 2 enables emerging technologies, such as big data systems, to execute the benchmark. Please note that these are unaudited results.

HDInsight Interactive Query

HDInsight Interactive Query enables you to get super fast query results from your big data with ZERO ETL (Extract Transform & Load).

Interactive Query in HDInsight leverages (Hive on LLAP) intelligent caching, optimizations in core engines, as well as Azure optimizations to produce blazing-fast query results on remote cloud storage, such as Azure Blob and Azure Data Lake Store.

Comparative performance of Spark, Presto, and LLAP on HDInsight

We conducted these tests using LLAP, Spark, and Presto against TPCDS data running in a higher-scale Azure Blob storage account*. These storage accounts now provide up to a 10x increase in Blob storage account scalability. Over the last few months we have also contributed improvements to the Windows Azure Storage Driver (WASB), which has helped improve performance for all HDInsight workloads.

To get your standard storage accounts to grow past the advertised limits in capacity, ingress/egress, and request rate, please make a request through Azure Support.

We picked a common external Hive metastore, Azure SQL DB S2, so that various engines could go against the same data and metadata. To learn more, please review the steps to generate data and to run TPCDS queries.

HDInsight configuration

For these tests, we used a similar cluster to run LLAP, Spark, and Presto.

config

Note: Tests were performed using the default out-of-the-box configurations resulting in no optimizations, no special settings, and no query change for any engine. 

The table below uses 45 queries that ran on all engines successfully. As shown, LLAP was able to run many more queries than Presto or Spark.

 

perfnew

 

As you can see from the run above, LLAP with ORC is faster than all the other engines. An even more interesting observation is that LLAP with text is also very fast, even faster than Spark with the Parquet file format.

Fast analytics on Hadoop have always come with one big catch: they require up-front conversion to a columnar format like ORC or Parquet, which can be time-consuming and expensive with on-demand computing. LLAP's Dynamic Text Cache converts CSV or JSON data into LLAP's optimized in-memory format on the fly. Caching is dynamic, so the queries your users run determine what data is cached.

llap

 

HDInsight Interactive Query(LLAP) architecture

LLAP also utilizes cluster DRAM and SSD to provide better performance; the cache pool is a joint pool made up of cluster DRAM and SSD. To give you an example, with D14 v2 VMs in Azure you can get 112 GB of RAM and 800 GB of local SSD, so just a couple of nodes are enough to keep over a terabyte of data cached for fast query performance.

Text caching in Interactive Query

Text caching in Interactive Query is a very interesting concept which has caused us to think about big data pipelines very differently. Traditionally, after ingesting data in raw form we needed to convert the data to an optimized file format such as ORC, Parquet, or Avro, as these file formats ensured users would receive good performance while querying the big data. With text caching, raw text and json performance is very similar to ORC which eliminates the need for having additional steps in our big data pipeline, resulting in cost saving as well as faster and fresher query results.

conceptllap

Running Interactive Query on 100TB TPCDS data

Looking at the many benchmarks published on the web by different vendors, one thing we noticed was that they focus only on a select set of queries where their respective engine produces the best results. We decided to run all 99 queries at 100 TB scale, and only Interactive Query was able to run them unmodified. 41% of the queries returned in under 30 seconds, and 71% came back in under 2 minutes. This benchmark proves that Interactive Query is fast, has rich SQL support, and scales to much larger data volumes without any special effort.

 

99queries

Concurrency

With the introduction of much-improved fine-grained resource management and preemption, Interactive Query (Hive on LLAP) makes concurrent use much easier. With Interactive Query, the only limit to concurrency is cluster resources; the cluster can be scaled to achieve higher and higher levels of concurrency.

We used a number of different concurrency levels to test concurrency performance. For the dataset, we again used the 99 TPCDS queries on 1 TB of data, with a 32-worker-node cluster and max concurrency set to 32.

Test 1: Run all 99 queries, 1 at a time - Concurrency = 1

Test 2: Run all 99 queries, 2 at a time - Concurrency = 2

Test 3: Run all 99 queries, 4 at a time - Concurrency = 4

Test 4: Run all 99 queries, 8 at a time - Concurrency = 8

Test 5: Run all 99 queries, 16 at a time - Concurrency = 16

Test 6: Run all 99 queries, 32 at a time - Concurrency = 32

Test 7: Run all 99 queries, 64 at a time - Concurrency = 64

Results: As outlined above, Interactive Query is a highly optimized engine for running concurrent queries. The longest time to finish the workload was with a single concurrent query.

concurrent

Comparison with Hive and performance improvements over time

It's important to compare Interactive Query (LLAP) performance with Hive. There has been a ton of work done in the community to make Hive more performant, along with the work we have been doing to improve Windows Azure storage driver performance. Back in January 2017, it took 200 minutes to run the TPCDS workload with Hive 1.2; with the storage driver improvements, Hive can now run the benchmark in 137 minutes. With LLAP cached data, the benchmark completes in 49 minutes. These are impressive gains.

 

hivecompare

Integration with Power BI direct Query, Apache Zeppelin, and other tools

Power BI now allows you to connect directly to your HDInsight Interactive Query cluster to explore and monitor data without requiring a data model as an intermediate cache. This offers interactive exploration of your data and automatically refreshes the visuals without requiring a scheduled refresh. To learn more about how to get started, please watch the video HDInsight Interactive Query with Power BI.

Get Data

HDInsight Interactive Query supports many endpoints. You can also use Apache Zeppelin, Visual Studio, Visual Studio Code, Hive View, and Beeline to run your queries.

Summary

Azure HDInsight is a fully-managed, full spectrum, open-source analytics cloud service by Microsoft that makes it easy, fast, and cost-effective to process massive amounts of data. You can use the most popular open-source engines such as Hadoop, Spark, Hive, LLAP, Kafka, Storm, HBase, and R, and install more open source frameworks from the ecosystem. With Azure HDInsight, our mission is to provide a fully managed, full spectrum of open source technologies combined with the power of the cloud. Customers today are using these open source technologies to build a variety of different applications such as batch processing, ETL, data warehousing, machine learning, IoT, and more. The goal of this blog post is to share some of the intelligence on SQL query performance of various Open Source engines in the Azure HDInsight environment.

Do you have questions or comments? Please reach out to AskHDInsight@microsoft.com for more information.

XBox: Analytics on petabytes of gaming data with Azure HDInsight


Cross post from https://azure.microsoft.com/en-us/blog/how-xbox-uses-hdinsight-to-drive-analytics-on-petabytes-of-telemetry-data/

Microsoft Studios produces some of the world’s most popular game titles, including the Halo, Minecraft, and Forza Motorsport series. The Xbox product services team manages thousands of datasets and hundreds of active pipelines consuming hundreds of gigabytes of data each hour for first-party studios. Game developers need to know the health of their game by measuring acquisition, retention, player progression, and general usage over time. This presents a textbook big data problem where data needs to be cleaned, formatted, aggregated, and reported on, better known as ETL (Extract Transform Load).

HDInsight - Fully managed, full spectrum open source analytics service for enterprises

Azure HDInsight is a fully-managed cloud service for customers to do analytics at a massive scale using the most popular open-source frameworks such as Hadoop, MapReduce, Hive, LLAP, Presto, Spark, Kafka, and R. HDInsight enables a broad range of customer scenarios such as batch & ETL, data warehousing, machine learning, IoT and streaming over massive volumes of data at a high scale using Open Source Frameworks.

Key HDInsight benefits

  • Cloud native: The only service in the industry to provide an end-to-end SLA on your production workloads. Cloud-optimized clusters for Hadoop, Spark, Hive, Interactive Query, HBase, Storm, Kafka, and Microsoft R Server, backed by a 99.9% SLA.
  • Low cost: Cost-effectively scale workloads up or down through decoupled compute and storage. You pay for only what you use. Spark and Interactive Query users can use SSD memory for interactive performance without additional SSD cost.
  • Secure: Protect your data assets by using virtual networks, encryption, authenticate with Active Directory, authorize users and groups, and role based access control policies for all your enterprise data. HDInsight meets many compliance standards such as HIPAA, PCI, and more.
  • Global: Available in more than 25 regions globally. HDInsight is also available in the Azure Government cloud and China, which allows you to meet your needs in key geographical areas.
  • Productive: Rich productivity tools for Hadoop and Spark such as Visual Studio, Eclipse, and IntelliJ for Scala, Python, R, Java, and .NET support. Data scientists can also use the two most popular notebooks, Jupyter and Zeppelin. HDInsight is also the only managed-cloud Hadoop solution with integration to Microsoft R Server.
  • Extensible: Seamless integration with leading certified big data applications via an integrated marketplace which provides a one-click deploy experience.

The big data problem

To handle this wide range of uses and varying scale of data, Xbox has harnessed the versatility and power of Azure HDInsight. As raw heterogeneous json data lands in Azure Blob Storage, Hive jobs transform that raw data to more performant and indexed formats such as ORC (Optimized Row Columnar). Studio users can then add additional Hive, Spark, or Azure ML jobs to the pipeline to clean, filter, and aggregate further.

Scalable HDInsight architecture with decoupled compute and underlying storage

Depending on the launch style of a game, Xbox telemetry systems can see huge spikes in data at launch. Beyond an increase in users, the type of analysis and queries needed to answer different business questions can vary drastically from game to game and throughout the lifecycle, shifting the compute needed. Xbox uses the ease of creating HDInsight clusters via the Azure APIs to scale and create new clusters as analytic needs and data fluctuate, while maintaining SLA.

Needing a system that scales up and out while offering a variety of isolation levels, Xbox chose to utilize an array of Azure Storage Accounts and a shared Hive metastore. Utilizing an external Azure SQL database as the Hive metastore allows the creation of many clusters while sharing the same metadata, enabling a seamless query experience across dozens of clusters. Utilizing many Azure Storage Accounts, attached with SAS keys to control permissions, allows for a greater degree of consistency and security at the cluster level. Employing this cluster of clusters method greatly increases the scale out ability. In this cluster of clusters configuration, Xbox enabled the separation of processing (ETL) clusters and read-only ad-hoc clusters. Users are able to test queries and read data from these read-only clusters without affecting other users or processing, eliminating noisy neighbor situations. Users can control the scale and how they utilize their read-only cluster while sharing the same underlying data and metadata.

 

xboxmetastore

Shared data and metastore across different analytical engines in Azure HDInsight

We adopted the following best practices while picking up an external metastore with HDInsight for high performance and agility:

  • Use an external metastore. This helped us separate compute and metadata.
  • Ensure that a metastore created for one HDInsight cluster version is not shared across different HDInsight cluster versions, because different Hive versions use different schemas (for example, Hive 1.2 and Hive 2.1 clusters trying to use the same metastore).
  • Back up the custom metastore periodically for oops-recovery and DR needs.
  • Keep the metastore, storage accounts, and the HDInsight cluster in the same region.

Data flow

Xbox devices generate telemetry data that is consumed by Event Hubs and processed in HDInsight clusters by thousands of different Azure Data Factory activities, with the results finally made available to users for further insights. The figure below shows the Xbox telemetry data journey.

xboxarchitecture

Load balancing of Jobs

We use multiple clusters in our architecture to process thousands of jobs. We built our own custom logic to distribute jobs among a number of different clusters, which helped us optimize job completion time. We typically have long-running jobs and interactive high-priority jobs that we need to finish. We use the YARN capacity scheduler to load-balance cluster capacity for these jobs. Typically, we set the high-priority queue at ~80-90% of the cluster with a maximum capacity of 100%, and a low-priority queue at ~10-20%, also with a maximum capacity of 100%. With this distribution, long-running jobs can take maximum cluster capacity until a high-priority interactive job shows up. Once that happens, the high-priority job can take 80-90% of cluster capacity and finish faster.

Summary

The Xbox telemetry processing pipeline, which is based on Azure HDInsight, can be applied by any kind of enterprise trying to solve big data processing at massive scale.

For more information or any questions, please reach out to AskHDInsight@Microsoft.com.

Visual Studio ALM/DevOps VM 2017 Update Available

$
0
0

Visual Studio ALM/DevOps VM 2017 (Winter Update) Available

I am excited to announce that the ALM VM, updated to Visual Studio Enterprise 2017 (15.5) and Team Foundation Server 2018, is now available. Key highlights of this version:

  • Updated to Microsoft Visual Studio Enterprise 2017 (15.5) and Microsoft Visual Studio Team Foundation Server 2018
  • The Standard edition of SQL Server and Microsoft Test Manager are back in the VM
  • We have added 5 new labs, including 2 labs on Azure:
    • Collaboration Experiences for Development Teams with Wiki
    • Debugging with Snapshot Debugger
    • Managing Delivery Plans with Team Foundation Server 2018
    • Authoring ARM Templates with Visual Studio
    • Building ASP.NET apps in Azure with SQL Database

To find out more including links to download the VM and the Hands-on-Labs, please check out the site
