
Icon Fonts in UWP Apps

Icons are an essential element of graphical user interfaces. They enable intuitive actions, save space (for example, in toolbars), and create better mnemonics around an app's functionality, among other benefits beyond the scope of this post. In short, they are good to have in our apps when used properly; too much of a good thing leads to bad results.

Methods for loading iconographic assets in apps have improved over the years, which is a great advantage for those who do not have graphic designers on their teams. App icons have evolved from the clip-art galleries of old all the way to icons stored inside font files. Besides being handy, this latest method renders icons as vectors, which adapt themselves to the different display resolutions where our app might run. Most XAML visual elements are vectors, and so should icons be.
Microsoft offers many fonts that ship with Windows containing illustration assets, from the traditional Wingdings fonts to the newer ones designed for modern apps, the Segoe family. In the latest version of Windows 10 we can find Segoe MDL2 Assets and Segoe UI Emoji. They come preinstalled in the system and can be consumed by any control that renders text simply by changing the control's FontFamily property. There is also a series of controls created specifically to display icon assets.
Icon Classes


The UWP development platform provides a series of classes that extend the IconElement class, but only two of them support fonts as their source: FontIcon and SymbolIcon. These classes differ in the way we indicate which icon to use. SymbolIcon consumes icons exclusively from Segoe MDL2 Assets, and its source can't be changed. This restriction allows the class to expose a Symbol property that accepts an easy-to-memorize friendly name, as opposed to FontIcon, where we can assign a different FontFamily value and indicate the icon through its Glyph property using the Unicode value that identifies the glyph's location inside the font file.
The values we can assign to the Symbol property of a SymbolIcon do not cover every valid glyph the font provides. For those cases, the recommended approach is to use the FontIcon class instead, where we can freely indicate a Unicode value based on what the font contains. The two icon fonts that ship with Windows usually gain additional icons as new versions of Windows ship. For this reason, we should not assume that a Unicode value will render a valid icon on a previous version of Windows where our app might also be running. Check MSDN or use the Character Map accessory that comes with Windows to verify a Unicode value and confirm it is valid in the versions of Windows the app targets.
When using a Unicode value identified in Character Map, we use one of two formats depending on where we make the assignment. In XAML we use the character entity form, for example Glyph="&#xE787;", where E787 is the hexadecimal value for the glyph selected in Character Map in image 1. In code-behind the format is "\uE787".
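As an illustration, here is a minimal code-behind sketch that produces the same icon as <FontIcon FontFamily="Segoe MDL2 Assets" Glyph="&#xE787;"/> in XAML (E787 is the Calendar glyph in current builds; verify it exists on the Windows versions you target):

using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

// Builds the icon entirely in code-behind.
static FontIcon CreateCalendarIcon()
{
    return new FontIcon
    {
        FontFamily = new FontFamily("Segoe MDL2 Assets"),
        Glyph = "\uE787"   // U+E787, the Calendar glyph in Segoe MDL2 Assets
    };
}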
Colorizing Icons


The icons in the Segoe UI Emoji font include color information for their glyphs. This color is shown when the platform or the application rendering them supports multicolor font layers; Windows 10 and UWP apps both support it. Where there is no color support, a monochromatic version of the emoji is displayed. Some text controls (TextBlock and TextBox) provide an IsColorFontEnabled property to indicate whether to use color or the monochromatic fallback; FontIcon does not have this option. The glyphs in Segoe MDL2 Assets are monochromatic only. There is no color information in the font itself, but that does not prevent us from setting the control's Foreground property to a color and getting the glyphs colorized. If we look closely at all the glyphs this font provides, we will notice that certain icons come in two versions: outline and filled. Rendering the filled glyph in one FontIcon instance with a color assigned to its Foreground, and overlapping another FontIcon instance rendering the outline version, creates the effect of multiple layers with different colors.
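A minimal sketch of that layering idea in code-behind (the glyph pair below, E734/E735 for the outline and filled star, is used purely as an example; in a real app you would more likely declare the two FontIcon elements in XAML):

using Windows.UI;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

// Overlaps a filled glyph and its outline counterpart to fake a two-color icon.
static Grid CreateTwoToneStar()
{
    var fill = new FontIcon
    {
        FontFamily = new FontFamily("Segoe MDL2 Assets"),
        Glyph = "\uE735",                                  // FavoriteStarFill
        Foreground = new SolidColorBrush(Colors.Gold)
    };
    var outline = new FontIcon
    {
        FontFamily = new FontFamily("Segoe MDL2 Assets"),
        Glyph = "\uE734",                                  // FavoriteStar (outline)
        Foreground = new SolidColorBrush(Colors.Black)
    };

    var grid = new Grid();
    grid.Children.Add(fill);     // bottom layer supplies the fill color
    grid.Children.Add(outline);  // top layer supplies the outline color
    return grid;
}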
Custom Fonts
As seen in the previous section, there are cases where the icon we are looking for is not provided by any of the system fonts, or where an icon does not have the outline/fill counterpart we want. To solve this, we can look for other fonts available online, free or commercial, and embed them in our project. Some projects might even have the budget for professional font design services, producing fonts with icons based on trademarked visual assets, such as logos, that give our app a differentiating element.


With a font file (usually TTF) containing our custom icons embedded in the application project, we can use it in any of the controls discussed earlier that support FontFamily changes. We add a font file to the project the same way we add image files. Verify that the file is added with Build Action set to Content.
Once the file is in the project, it is a matter of referencing it in the FontFamily property of the control using this format:
FontFamily="[PathToTTF]#[NameOfFontFamily]"
A font file can define more than one font family; for this reason, we need to indicate the family name after the file name. You can set this property in code-behind as well: FontFamily is a class whose constructor takes a string containing a Uri that points to a resource. We can use this mechanism to refer to a custom font in our project like this:
string fontFilePath = "ms-appx:///Assets/Fonts/MyCustomFont.ttf#My Personal Icons";
fontIcon.FontFamily = new FontFamily(fontFilePath);
Notice that the Uri uses the ms-appx application Uri scheme to refer to resources in a project. This also makes it possible to have font files embedded in satellite assemblies (class libraries):
string assemblyName = GetType().GetTypeInfo().Assembly.GetName().Name;
string fontFamilyPath = $"ms-appx:///{assemblyName}/Fonts/MyCustomFont.ttf#My Personal Icons";
Final thoughts
Before closing this post, let's talk about the licenses that govern the use of fonts. The contents of a font, whether letters or symbols, are bound to a license that indicates whether we may redistribute the file with our application, along with other permissions or restrictions regarding modification of the glyphs it contains.
A graphic designer or a company might have trademarked those designs, in which case explicit permission should exist. In the case of the Segoe fonts, or other fonts that are part of Windows 10, referring to their glyphs in our app without embedding the file in our project is part of the expected usage allowed by their license: we are not distributing the font, and when the application is launched on a customer's Windows 10 machine it loads the font information from that machine.
The same goes for any other font from the internet: we need to check which permissions or restrictions we must follow to make use of it.
In summary, we covered the icon controls the platform offers, how to consume the system-provided icon fonts in those controls, and a mechanism for including custom-made fonts embedded in our project or a class library.

Customizing the Office Ribbon – Part 7 – (TIPS 3: Customizing Special Menus)

Hello, this is Nakamura from Office Development Support.

We have published six articles on ribbon customization so far, and this is planned to be the last article in the series for now. To wrap up, we introduce how to customize the special menus: the Backstage view, the context menu shown on right-click, and the Quick Access Toolbar.
 
Table of Contents
1. Customizing the Backstage view
2. Customizing the right-click menu
3. Customizing the Quick Access Toolbar

 

 

1. Customizing the Backstage View

The Backstage view refers to the menu inside the [File] tab, introduced in Office 2010. In Excel 2016 it looks like the screen below.

Figure 1. Backstage view

The Office 2010 namespace (http://schemas.microsoft.com/office/2009/07/customui) also provides a way to customize the Backstage view. In other words, this is XML-only customization; you cannot customize the Backstage view from the visual designer in VSTO.
Also, while the regular ribbon can additionally be customized with VBA CommandBars, the Backstage view cannot be customized from VBA. (Note that for the ribbon menus in Office 2007 and later, customization via VBA CommandBars is not recommended anyway.)
 
Using ribbon XML, you can, for example, customize the Backstage menu in the following ways:

  • Hide existing menus
  • Add your own menus to the navigation on the left
  • Add your own items inside existing menus

By combining these, you can hide an existing menu and add a custom menu with the same name, so that what looks like the default menu performs your own behavior. For example, you can make clicking [Save As] show a dialog that only allows saving in a specific file format, which is useful when you want to restrict user operations in automation scenarios.

 

References

Title: Customizing the Office UI - Backstage view
URL: https://msdn.microsoft.com/ja-jp/library/bf08984t.aspx#Backstage

Title: Introduction to the Office 2010 Backstage View for Developers
URL: https://msdn.microsoft.com/ja-jp/library/office/ee691833(v=office.14).aspx

Title: Customizing the Office 2010 Backstage View for Developers
URL: https://msdn.microsoft.com/ja-jp/library/ee815851(office.14).aspx

Title: Adding Custom Commands and Changing Control Formats in the Office 2010 Backstage View
URL: https://msdn.microsoft.com/ja-jp/library/office/ff634163(v=office.14).aspx

Title: Creating an Add-in to Customize the Office 2010 Backstage View
URL: https://msdn.microsoft.com/ja-jp/library/office/ff936212(v=office.14).aspx

Title: Dynamically Changing the Visibility of Groups and Controls in the Office 2010 Backstage View
URL: https://msdn.microsoft.com/ja-jp/library/office/ff645396(v=office.14).aspx

Title: 3.3 Backstage
URL: https://msdn.microsoft.com/en-us/library/dd947358(v=office.12).aspx

 

Customization Examples

This post presents two customization examples.

 

Example 1: Disabling existing menus

For example, when Excel is used like a viewer and you want to prevent users from operating on the workbook as much as possible, we sometimes receive requests to remove the ribbon menus.

In a previous post we showed that startFromScratch can hide every tab except the [File] tab, but startFromScratch alone cannot block operations on the [File] tab. In such cases, you can consider customizing the Backstage view as well.
 
Backstage view customization uses the <backstage> element. As an example, to hide all of the default menus in Excel 2016, write the XML as follows.

Here we also set startFromScratch. Menu names differ depending on the Excel version and applied updates, so check the control names that correspond to the menus in your target environment against the control list introduced in Part 5 (the rows whose [Tab Set] column is None (Backstage View)).

<customUI xmlns="http://schemas.microsoft.com/office/2009/07/customui">
       <ribbon startFromScratch="true"/>
       <backstage>
              <tab idMso="TabInfo" visible="false"/>
              <tab idMso="TabOfficeStart" visible="false"/>
              <tab idMso="TabRecent" visible="false"/>
              <button idMso="FileSave" visible="false"/>
              <tab idMso="TabSave" visible="false"/>
              <tab idMso="TabPrint" visible="false"/>
              <tab idMso="TabShare" visible="false"/>
              <tab idMso="TabPublish" visible="false"/>
              <tab idMso="Publish2Tab" visible="false"/>
              <button idMso="FileClose" visible="false"/>
              <tab idMso="TabHelp" visible="false"/>
              <tab idMso="TabOfficeFeedback" visible="false"/>
              <button idMso="ApplicationOptionsDialog" visible="false"/>
       </backstage>
</customUI>

 

When this XML is applied, the Backstage view looks like the following.

Figure 2. Backstage view after the menus are disabled

 

Example 2: Detecting Backstage view transitions

If you want to do something when the Backstage view is shown, or when the user returns to the workbook from the Backstage view, you can use onShow / onHide to run a specified function at each of those moments. This is also covered in the documentation referenced earlier.

Title: Introduction to the Office 2010 Backstage View for Developers
URL: https://msdn.microsoft.com/ja-jp/library/office/ee691833(v=office.14).aspx
Relevant section: Descriptions, attributes, and child information for Backstage view controls

 

Here is an implementation example. Write the ribbon XML as follows.

<customUI xmlns="http://schemas.microsoft.com/office/2009/07/customui">
       <backstage onShow="onShowMethod" onHide="onHideMethod" />
</customUI>

 

Also, add the following to the VBA of the target workbook.

Sub onShowMethod(contextObject As Object)
    MsgBox "Switched to the Backstage menu"
End Sub

Sub onHideMethod(contextObject As Object)
    MsgBox "Closed the Backstage menu and returned to the workbook"
End Sub

 

With this implementation, onShowMethod runs when you click the [File] tab to show the Backstage menu, and onHideMethod runs when you click the left arrow at the top of the Backstage menu to return to the workbook; in this example, a message box is shown in each case.
 

 

2. Customizing the Right-Click Menu

The right-click menu is formally called the context menu. The context menu can also be customized with ribbon XML in the Office 2010 namespace: you can add your own menu items, disable existing ones, and so on.
 
Note

The Office 2007 namespace does not provide a way to customize context menus. For Office 2007, customize them with VBA using CommandBars as before. Also, when disabling existing menu items, disabling the ribbon menu item in XML disables the corresponding context menu item along with it. Note, however, that CommandBars does not cover some menus, such as the Shape menu. (Product support for Office 2007 has already ended.)

 

References

Title: Customizing Context Menus in All Versions of Microsoft Excel
URL: http://msdn.microsoft.com/ja-jp/library/gg469862.aspx

Title: Customizing Context Menus in Office 2010
URL: https://msdn.microsoft.com/ja-jp/library/office/ee691832(v=office.14).aspx

Title: 3.2 Context Menu
URL: https://msdn.microsoft.com/en-us/library/dd926324(v=office.12).aspx

 

Context menu customization uses the <contextMenu> element. The references above include samples for adding menus, so this article presents a sample that disables existing menu items.

 

Example: Disabling the right-click menu for shapes

The content of the context menu differs depending on what you right-click; here we take the context menu shown when right-clicking a shape as the example. The menu layout also differs between versions, but in one build of Excel 2016 the following menu is displayed.

Figure 3. Right-click context menu for a shape

 

With the following XML, this entire context menu (the lower window in the figure) can be hidden.

<customUI xmlns="http://schemas.microsoft.com/office/2009/07/customui">
       <contextMenus>
              <contextMenu idMso="ContextMenuShape">
                     <control idMso="Cut" visible="false"/>
                     <control idMso="Copy" visible="false"/>
                     <control idMso="PasteGalleryMini" visible="false"/>
                     <control idMso="ObjectEditText" visible="false"/>
                     <control idMso="ObjectEditPoints" visible="false"/>
                     <control idMso="ObjectsGroupMenu" visible="false"/>
                     <control idMso="ObjectBringToFrontMenu" visible="false"/>
                     <control idMso="ObjectSendToBackMenu" visible="false"/>
                     <control idMso="InsertLinkGallery" visible="false"/>
                     <control idMso="Insights" visible="false"/>
                     <control idMso="MacroAssign" visible="false"/>
                     <control idMso="ObjectSetShapeDefaults" visible="false"/>
                     <control idMso="ObjectSizeAndPropertiesDialog" visible="false"/>
                     <control idMso="ObjectFormatDialog" visible="false"/>
              </contextMenu>
       </contextMenus>
</customUI>

 

The "mini toolbar" shown at the top (the Style, Fill, and Outline menus) can be controlled with the following property or registry value.

<Property>
Title: Application.ShowMenuFloaties Property (Excel)
URL: https://msdn.microsoft.com/ja-jp/VBA/Excel-VBA/articles/application-showmenufloaties-property-excel

Value: Setting this property to False shows the mini toolbar; True hides it. Note, however, that reading the property back after setting it returns the opposite value (if you set False, True is returned). We apologize for the confusing behavior; please keep it in mind.
 
<Registry>
Key: HKEY_CURRENT_USER\Software\Microsoft\Office\XX.0\Common\Toolbars\Excel
(XX = the number representing the Office version; for 2016 it is 16.)
Name: AllowMenuFloaties
Type: REG_DWORD
Value: 1 = show / 0 = hide
* This registry value is read each time you right-click, so it can be changed dynamically. Changing the Application.ShowMenuFloaties property is also reflected in this registry value.
 
With the XML and the property above set, the right-click menu for shapes can be disabled entirely, as shown below. Note that even with everything disabled, the context menu frame itself cannot be hidden.

Figure 4. Right-click context menu for a shape after being disabled


 

3. Customizing the Quick Access Toolbar

In addition to the ribbon, Office also provides the Quick Access Toolbar: the part outlined in red in the image below.

Figure 5. Quick Access Toolbar

 

The Quick Access Toolbar can also be customized from XML with the <qat> tag (this is available in the Office 2007 namespace as well).

However, it can only be used when startFromScratch is true. Customizing the Quick Access Toolbar while the menus are displayed normally, as in the image above, cannot be done with the ribbon customization techniques covered in this series. In such a case, consider the approach described in the following article, which uses the XML file that stores Quick Access Toolbar customizations made through the GUI.

 

Title: Deploying Customized Ribbons and Quick Access Toolbars in Office 2010
URL: https://msdn.microsoft.com/ja-jp/library/office/ee704589(v=office.14).aspx

 
This article shows an example that sets startFromScratch to true and customizes the toolbar in the same way as before.
 

References

Title: 2.2.32 qat (Quick Access Toolbar)
URL: https://msdn.microsoft.com/en-us/library/dd948879(v=office.12).aspx

 

Example: Adding a built-in menu and a custom menu

The following XML displays the [Save] menu (FileSave) and a custom menu (Mymenu) on the Quick Access Toolbar. We won't go into detail here, but the handler that runs when the Mymenu button is clicked (MymenuCallback) is a macro prepared as explained in a previous post.

<customUI xmlns="http://schemas.microsoft.com/office/2009/07/customui">
       <ribbon startFromScratch="true">
              <qat>
                     <documentControls>
                            <control idMso="FileSave" />
                            <button id="Mymenu" label="Mymenu" image="MymenuIcon" onAction="MymenuCallback"/>
                     </documentControls>
              </qat>
       </ribbon>
</customUI>

 

In this example, the icon image is set with the image attribute. In the Custom UI Editor, you can easily add your own image from the [Insert Icons] menu outlined in red below and use it as an icon (here we specified a heart-patterned png file created as a sample). Right-clicking the added image in the tree lets you change the icon ID referenced from the XML.

Figure 6. Setting icons in the Custom UI Editor

 

With this markup, the Quick Access Toolbar is displayed as follows.

Figure 7. Quick Access Toolbar after customization

 
That's all for this post.

There are still many finer techniques for ribbon customization, but for those just starting ribbon customization for Office 2007 and later, we have introduced the basic elements and the commonly used customizations.
Ten years have passed since the release of Office 2007, but from our work in Office development support we feel that these customization techniques are still not widely known. We hope this series of articles is of some help to those starting ribbon customization and to those considering a review of existing solutions. If there is more information worth sharing, we may post additional articles on ribbon customization.

 

The contents of this article (including attachments and linked pages) are current as of the date of writing and may change without notice.

 

SQL Server Management Studio Provides "XE Profiler"

Bob Ward and I worked with our SQL Server Tool developers (thanks David) to enable ‘Quick XE Trace’ capabilities. The feature is available in the latest SQL Server Management Studio (SSMS) release.

Despite the deprecation of SQL Profiler several years ago, as well as various documents and blogs pointing out the older trace facility's shortcomings and performance impact on SQL Server, SQL Profiler is still a top choice of SQL Server developers and DBAs.  The 'quick' ability kept surfacing as a reason for using SQL Profiler.   The 'quick' part was defined as getting live data on the DBA's or developer's screen with just a few clicks.

The new tree node (XE Profiler) provides that ‘quick’ ability.   The ‘Quick XE Profiler’ displays live events using the simple ‘Launch Session’ menu selection.  The templates capture common events and leverage XEvent enhanced view capabilities to display the event data.

Here is an example using SSMS 2017 against my SQL Server 2016 server.


Bob Dorr - Principal Software Engineer SQL Server

Let's bring in the New Year together – Make 2018 Epic by adding Microsoft Azure to your skills bag!

Learning anything new always takes time and patience. Whether you’re new to Azure or already a cloud professional, training is one of the best investments you can make in your career. Enrich your technical skills with one of our hands-on training courses listed below!

Azure Fundamentals

Type: Technical (L200)

Audience: IT Professional

Cost: $299

Product: Microsoft Azure

Date & Locations: Brisbane (February 8-9); Sydney (February 8-9); Perth (February 22-23); Melbourne (February 28-March 1); Canberra (March 12-13)

This course introduces key concepts for cloud computing and how Microsoft Azure aligns with those scenarios. Students are introduced to several key Azure services and solutions that align with the following technical disciplines: Infrastructure as a Service, Hybrid Cloud, Application Development, Big Data and Analytics, and Cloud Security. REGISTER HERE

Architecting Azure IAAS and Hybrid Solutions

Type: Technical (L300)

Audience: IT Professional / Architects

Cost: $699

Product: Microsoft Azure

Date & Locations: Melbourne (February 5-7); Sydney (March 14-16)

The Azure IaaS and Hybrid Architect workshop is designed to prepare the architect to design solutions with Microsoft Azure. This workshop is focused on designing solutions using Infrastructure as a Service (IaaS) and other technologies to enable hybrid solutions such as data centre connectivity, hybrid applications, and other hybrid use cases such as business continuity with backup and high availability. Individual case studies will focus on specific real-world problems that represent common IaaS and Hybrid scenarios and practices. Students will also experience several hands-on labs to introduce them to some of the key services available. REGISTER HERE

Implementing Microsoft Azure Infrastructure

Type: Technical (L300)

Audience: IT Professional / Developers

Cost: $899

Product: Microsoft Azure

Date & Locations: Melbourne (February 12-16)

This training explores Microsoft Azure Infrastructure Services (IaaS) and several PaaS technologies such as Azure Web Apps and Cloud Services from the perspective of an IT Professional. This training provides an in-depth examination of Microsoft Azure Infrastructure Services (IaaS); covering Virtual Machines and Virtual Networks starting from introductory concepts through advanced capabilities of the platform. The student will learn best practices for configuring virtual machines for performance, durability, and availability using features built into the platform. Throughout the course the student will be introduced to tasks that can be accomplished through the Microsoft Azure Management Portal and with PowerShell automation to help build a core competency around critical automation skills. REGISTER HERE

Developing Microsoft Azure Solutions with Azure .NET

Type: Technical (L300)

Audience: IT Professional / Developers

Cost: $799

Product: Microsoft Azure

Date & Locations: Perth (April 16 -19): Sydney (April 30 – May 03); Melbourne (June 4-7)

This course is intended for students who have experience building ASP.NET and C# applications. Students will also have experience with the Microsoft Azure platform and a basic understanding of the services offered. This course offers students the opportunity to take an existing ASP.NET MVC application and expand its functionality as part of moving it to Azure. This course focuses on the considerations necessary when building a highly available solution in the cloud. REGISTER HERE

Introduction to Containers on Azure

Type: Technical (L200)

Audience: IT Professional

Cost: $599

Product: Microsoft Azure

Date & Locations: Sydney (March 12-13)

This course demonstrates different approaches for building container-based applications and deploying them into Azure. Different modules cover Windows- and Linux-based Docker containers with popular container orchestrators like Kubernetes and DC/OS provisioned by the Azure Container Service. The course will also show integration of container registries, specifically Docker Hub and the Azure Container Registry, into DevOps workflows. This course starts with the basics of building a Linux and a Windows container running a .NET Core application. The course concludes by showing how to customize the ACS templates with the acs-engine to deploy advanced cluster configurations. REGISTER HERE

Next Up Exam Camp 70-532: Developing Microsoft Azure Solutions

Type: Technical (L300)

Audience: IT Professionals looking to earn formal qualifications

Cost: $399

Product: Microsoft Azure

Date & Locations: Online Self Study February 12 – March 12 / In Person Exam Dates; Melbourne (March 20); Adelaide (Adelaide 20); Perth (March 21); Brisbane (March 23): Sydney (March 26)

Earning any kind of specialist certification is a great way to stand out from the crowd, whether you’re looking for a new challenge, a new job, or a way to make yourself more valuable to your current employer. With the growing importance of the cloud, Microsoft Azure is a must-have certification for anyone looking to prove their skills. REGISTER HERE

Next Up Exam 70-533 Implementing Microsoft Azure Infrastructure Solutions

Type: Technical (L300)

Audience: IT Professionals looking to earn formal qualifications

Cost: $399

Product: Microsoft Azure

Date & Locations: Online Self Study February 12 – March 12 / In Person Exam Dates; Melbourne (March 20); Adelaide (Adelaide 20); Perth (March 21); Brisbane (March 23): Sydney (March 26)

Earning any kind of specialist certification is a great way to stand out from the crowd, whether you’re looking for a new challenge, a new job, or a way to make yourself more valuable to your current employer. With the growing importance of the cloud, Microsoft Azure is a must-have certification for anyone looking to prove their skills. REGISTER HERE

Next Up Exam 70-535 Architecting Microsoft Azure Solutions

Type: Technical (L300)

Audience: IT Professionals looking to earn formal qualifications

Cost: $399

Product: Microsoft Azure

Date & Locations: Online Self Study February 12 – March 12 / In Person Exam Dates; Melbourne (March 20); Adelaide (Adelaide 20); Perth (March 21); Brisbane (March 23): Sydney (March 26)

Earning any kind of specialist certification is a great way to stand out from the crowd, whether you’re looking for a new challenge, a new job, or a way to make yourself more valuable to your current employer. With the growing importance of the cloud, Microsoft Azure is a must-have certification for anyone looking to prove their skills. REGISTER HERE

How to capture an ASP.NET Core memory dump on Azure App Service

I have written numerous articles about ASP.NET and creating memory dumps, but noticed I had not written one specifically about capturing an ASP.NET Core memory dump on an Azure App Service.  Here are some of my related articles on this matter.

I created an ASP.NET Core 2.0 application in Visual Studio 2017, like that shown in Figure 1.


Figure 1, create an ASP.NET Core 2.0 application, simple

Inside the Index.cshtml.cs file I added the infamous Sleep() method to make sure performance is not very good.  And indeed it is slow, 5 seconds exactly.

public class IndexModel : PageModel
{
  public void OnGet()
  {
     System.Threading.Thread.Sleep(5000);
  }
}

Then I published the project to an Azure App Service via Visual Studio 2017 (Figure 2) by right-clicking the project –> Publish and following the wizard, where I selected the subscription, resource group, and app service plan; I show a figure of that relationship here.


Figure 2, how to publish an ASP.NET Core 2.0 application, simple, to Azure App Service

After it published, I accessed KUDU / SCM as I explained here and navigated to the Process Explorer tab, as seen in Figure 3.


Figure 3, troubleshoot an ASP.NET Core 2.0 application, simple, on Azure

I reproduced the issue, right-clicked the DOTNET.EXE –> Download Memory Dump –> Full Dump, Figure 4.  Note that the issue must be happening at the time the dump is taken in order for the issue to be seen in the dump.  A dump is just a snapshot of what is happening at the time it is taken.


Figure 4, troubleshoot / memory dump an ASP.NET Core 2.0 application, simple, on Azure

I tried to have 5 requests running at the time I took the memory dump; let's see how it looks.

If you have not already seen my article “Must use, must know WinDbg commands, my most used”, then check it out here.  As seen in Figure 5, running !mex.us grouped threads 15 and 16 together as they had the same stack patterns.  I found 1 other thread that was running my request, but the stack was a little different so it didn’t make that group.


Figure 5, troubleshoot / analyze a memory dump of an ASP.NET Core 2.0 application, simple, on Azure

As always, it is easy to find the problem when you coded it on purpose, but the point is this: if you see a lot of threads doing the same thing in the process, and there was high CPU or high latency when you took the dump, it is highly probable that the method at the top of the stack is the one that needs a closer look.

An added tip: to see the value of the Int32 passed to the System.Threading.Thread.Sleep() method, since that is managed code, you can decompile the module and look at the code; but if you don't want to do that, you can execute kp, as seen in Figure 6.


Figure 6, troubleshoot / analyze a memory dump of an ASP.NET Core 2.0 application, simple, on Azure

Had it been a heap variable, you could use !sos.dso and you'd see it stored on the heap; however, we all know that integers are not stored on the heap, right(*)?

DevOps for Data Science – Automated Testing

I have a series of posts on DevOps for Data Science where I am covering a set of concepts for a DevOps "Maturity Model" - a list of things you can do, in order, that will set you on the path for implementing DevOps in Data Science. In this article, I'll cover the next maturity level you should focus on - Automated Testing.

This might possibly be the most difficult part of implementing DevOps for a Data Science project. Keep in mind that DevOps isn't a team, or a set of tools - it's a mindset of "shifting left", of thinking of the steps that come after what you are working on, and even before it. That means you think about the end-result, and all of the steps that get to the end-result, while you are creating the first design. And key to all of that is the ability to test the solution, as automatically as possible.

There are a lot of types of software testing, from Unit Testing (checking to make sure individual code works), Branch Testing (making sure the code works with all the other software you've changed in your area) to integration testing (making sure your code works with everyone else's) and Security Testing (making sure your code doesn't allow bad security things to happen). In this article, I'll focus on only two types of testing to keep it simple: Unit Testing and Integration Testing.

For most software, this is something that is easy to think about (but not necessarily to implement). If a certain function in the code takes in two numbers and averages them, that can be Unit tested with a function that ensures the result is accurate. You can then check your changes in, and Integration tests can run against the new complete software build with a fabricated set of results to ensure that everything works as expected.
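As a concrete illustration of that simple deterministic case, here is a minimal unit test sketch (C# with xUnit is used here purely for illustration; any language and test framework applies):

using Xunit;

public static class MathHelpers
{
    // Function under test: averages two numbers.
    public static double Average(double a, double b) => (a + b) / 2.0;
}

public class MathHelpersTests
{
    [Fact]
    public void Average_ReturnsTheMidpointOfTwoNumbers()
    {
        Assert.Equal(15.0, MathHelpers.Average(10.0, 20.0));
    }
}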

But not so with Data Science work - or at least not all the time. There are a lot of situations where the answer is highly dependent on minute changes in the data, parameters, or other transitory conditions, and since many of these results fall within ranges (even between runs) you can't always put in a 10 and expect a 42 to come out. In Data Science you're doing predictive work, which by definition is a guess.

So is it possible to perform software tests against a Data Science solution? Absolutely! Not only can you test your algorithms and parameters, you should. Here's how:

First, make sure you know how to code with error-checking and handling routines in your chosen language. You should know how to work with the standard "debugging" tools in whatever Integrated Development Environment (IDE) you use as well. Next, implement a Unit Test framework within your code. Data Scientists most often use Python and/or R in their work, as well as SQL. Unit testing frameworks exist within all of these:

 

After you've done the basics above, it's time to start thinking about the larger testing framework. It's not just that the code runs and integrates correctly, it's that it returns an expected result. In some cases, you can set a deterministic value to test with, and check that value against the run. In that case, you can fully automate the testing within the solution's larger Automated Testing framework, whatever that is in your organization. But odds are (see what I did there) you can't - the values can't be deterministic due to the nature of the algorithm.

In that case, pick the metric you use for the algorithm (p-value, F1-score, or AUC, or whatever is appropriate for the algorithm or family you're using) and store it in text or PNG output. From there, you'll need a "manual step" in the testing regimen of your organization's testing framework. This means that as the software is running through all of the tests of everyone else's software as it creates a new build, it stops and sends a message to someone that a manual test has been requested.
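Even then, part of the check can be automated before the manual gate, for example by asserting a floor on the stored metric. A hedged sketch of that idea (the file name, metric, and threshold below are assumptions, not part of any specific framework):

using System.Globalization;
using System.IO;
using Xunit;

public class ModelMetricTests
{
    // Assumption: the training run writes its AUC to metrics/auc.txt as plain text.
    [Fact]
    public void Auc_DoesNotFallBelowAgreedFloor()
    {
        double auc = double.Parse(File.ReadAllText("metrics/auc.txt"), CultureInfo.InvariantCulture);
        Assert.True(auc >= 0.85, $"AUC {auc} fell below the agreed floor of 0.85");
    }
}

Anything the floor check cannot cover still goes through the manual review step described above.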

No one likes these stops - they slow everything down and form a bottleneck. But in this case they are unavoidable, with the alternative being that you just don't test that part of the software, which is unacceptable. So the way to make this as painless as possible is to appoint one of the Data Science team members as the "tester on call", who will watch the notification system (which should be sent to the whole Data Science team alias, not an individual) and manually check the results quickly (but thoroughly) and allow the test run to complete. You can often do this in just a few minutes, so after a while it will just be part of the testing routine, allowing a "mostly" automated testing system, essential for the Continuous Integration and Continuous Delivery (CI/CD) phases. We'll pick up on Continuous Delivery in the next article.

 

[Service Fabric] Auto-scaling your Service Fabric cluster–Part II

In the second post in his series on Auto-scaling a Service Fabric cluster, Premier Developer consultant Larry Wall highlights a new feature that allows you to tie auto-scaling to an Application Insights metric.


In Part I of this article, I demonstrated how to set up auto-scaling on the Service Fabric cluster's scale set based on a metric that is part of a VM scale set (Percentage CPU). This setting doesn't have much to do with the applications running in your cluster; it's just pure hardware scaling that may take place because of your services' CPU consumption or something else consuming CPU.

There was a recent addition to the auto-scaling capability of a Service Fabric cluster that allows you to use an Application Insights metric reported by your service to control cluster scaling. This capability gives you finer control not just over auto-scaling, but over which metric from which service provides the values.

Continue reading on Larry’s blog here.

IIS FTP with ASP.NET membership authentication

This is my second blog on FTP. The first one being this.

This blog post specifically explains how to address the error below, which many of you may have encountered while setting up ASP.NET SQL membership authentication for an FTP site:

Response:       220 Microsoft FTP Service

Command:        USER test

Response:       331 Password required

Command:        PASS *********

Response:       530-User cannot log in.

Response:       Win32 error:

Response:       Error details: System.Web: Default Membership Provider could not be found.

Response:       530 End

Error:          Critical error: Could not connect to server

You can follow this blog to configure the authentication with SQL membership for the FTP site on IIS.

After this, if you still run into the above issue, ensure that the following steps are followed:

Step 1:

Add the setting below to the web.config, inside the <configuration> tag (the file to edit depends on the framework version and bitness your application pool is using).

Example: If your AppPool is 64-bit running under .NET 4.0, you should be using the file C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\web.config.

<location path="GlobalFtpSite/ftpsvc">
  <connectionStrings>
    <add connectionString="server=localhost;database=aspnetdb;Integrated Security=SSPI" name="FtpLocalSQLServer" />
  </connectionStrings>
  <system.web>
    <membership defaultProvider="FtpSqlMembershipProvider">
      <providers>
        <add name="FtpSqlMembershipProvider"
             type="System.Web.Security.SqlMembershipProvider,System.Web,Version=4.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
             connectionStringName="FtpLocalSQLServer"
             enablePasswordRetrieval="false"
             enablePasswordReset="false"
             requiresQuestionAndAnswer="false"
             applicationName="/"
             requiresUniqueEmail="false"
             passwordFormat="Clear" />
      </providers>
    </membership>
    <roleManager defaultProvider="FtpSqlRoleProvider" enabled="true">
      <providers>
        <add name="FtpSqlRoleProvider"
             type="System.Web.Security.SqlRoleProvider,System.Web,Version=4.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
             connectionStringName="FtpLocalSQLServer"
             applicationName="/" />
      </providers>
    </roleManager>
  </system.web>
</location>

Note:

If you are using an AppPool that runs .NET 2.0 in 64-bit, modify the web.config under C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG instead. Also make sure you change Version=4.0.0.0 in the provider type attributes above to 2.0.0.0.

Step 2:

Grant the Network Service account Write / Modify permissions on the C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files folder. Note: depending on your AppPool framework and bitness, the above path can differ.

Step 3:

Remember: you don't need any settings at the FTP website level for .NET Roles, .NET Users, etc., because the root web.config setting applies to all applications that run on that framework version and bitness.
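If you need to seed a test account into the aspnetdb database to verify the login, a hedged sketch using the ASP.NET Membership API is below. It assumes the calling application's own config registers the same SqlMembershipProvider and FtpLocalSQLServer connection string shown above as its default provider; the user name and password are placeholders.

using System;
using System.Web.Security;   // requires a reference to System.Web (.NET Framework)

class CreateFtpUser
{
    static void Main()
    {
        // Creates a user through the configured default membership provider,
        // which should point at the same aspnetdb database the FTP site uses.
        MembershipCreateStatus status;
        Membership.CreateUser("test", "P@ssw0rd!", "test@example.com",
                              null, null, true, out status);
        Console.WriteLine(status);   // "Success" means the FTP login should now work
    }
}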

Hope this helps 🙂


Capturing Perfview traces for an ASP.NET Core application

Recently, I worked with a customer who wanted assistance with implementing logging and capturing Perfview traces for an ASP.NET Core web application.

So I decided to blog on this topic and explain how to enable logging, then capture and analyze a Perfview trace.

I am using a simple ASP.NET Core MVC application here to explain this.

I have the below lines in my Program.cs file:

static void Main(string[] args)
{
    BuildWebHost(args).Run();
}

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .CaptureStartupErrors(true)                             // helps with startup-related issues
        .UseSetting(WebHostDefaults.DetailedErrorsKey, "true")  // shows a detailed error page on any failure
        .UseStartup<Startup>()
        .UseKestrel()
        .ConfigureLogging((hostingContext, logging) =>
        {
            logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
            logging.AddConsole();           // displays the logs in the console when running from the command line
            logging.AddDebug();             // displays the logs in the Debug output window
            logging.AddEventSourceLogger(); // needed so Perfview can capture the events
        })
        .Build();

 

Ensure that you have placed the below namespaces within your Program.cs

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Logging;

You can use the NuGet Package Manager to install the Microsoft.Extensions.Logging package.

Go to the controller in question and ensure that you have referenced the logging component:

public class HomeController : Controller
{
    private readonly ILogger _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    // GET: /<controller>/
    public IActionResult Index()
    {
        // This call specifies the log level and event id.
        // Other levels: LogWarning, LogCritical, LogDebug, etc.
        _logger.LogInformation(1000, "In Index method..............");
        return View();
    }
}

 

Ensure that you have placed the below namespaces within your Controller:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

After making the above changes, you will see the logging information in your Debug output window and in the console.

 

In the Debug window:

Microsoft.AspNetCore.Hosting.Internal.WebHost:Information: Request starting HTTP/1.1 GET http://localhost:5761/home/index
Demo.Controllers.HomeController:Information: In Index method..............

 

In the CMD console:

info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 GET http://localhost:5762/home/index
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method Demo.Controllers.HomeController.Index (Demo) with arguments ((null)) - ModelState is Valid
info: Demo.Controllers.HomeController[1000]
      In Index method..............
info: Microsoft.AspNetCore.Mvc.ViewFeatures.Internal.ViewResultExecutor[1]
      Executing ViewResult, running view at path /Views/Home/Index.cshtml.
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action Demo.Controllers.HomeController.Index (Demo) in 4628.1449ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 4686.3757ms 200 text/html; charset=utf-8

 

Perfview Data Collection:

Download Perfview from the location below and place it on the machine where the ASP.NET Core app is running:

https://www.microsoft.com/en-us/download/details.aspx?id=28567

Data Capture:

  • Open Perfview.
  • Go to Collect and select the Collect option as shown below:


  • Expand the Advanced Options.
  • Enable the Thread Time and IIS providers (these give some additional information while troubleshooting). Note: the IIS provider is needed only if the app is hosted on IIS; otherwise you can skip it. This provider can be used only if the IIS Tracing feature is installed.
  • Add *Microsoft-Extensions-Logging to the Additional Providers section as shown below:


 

Perfview Analysis:

After we open the Perfview ETL trace, we can see our logging information:

Event Name                                    Time MSec  Process Name   Rest
Microsoft-Extensions-Logging/FormattedMessage 39,706.201 dotnet (25292) ThreadID="285,016" Level="2" FactoryID="1" LoggerName="Demo.Controllers.HomeController" EventId="1000" FormattedMessage="In Index method.............." ActivityID="//1/3/1/"
Microsoft-Extensions-Logging/Message          39,708.668 dotnet (25292) ThreadID="285,016" Level="2" FactoryID="1" LoggerName="Demo.Controllers.HomeController" EventId="1000" Exception="{ TypeName="", Message="", HResult=0, VerboseMessage="" }" Arguments="[{ Key="{OriginalFormat}", Value="In Index method.............." }]" ActivityID="//1/3/1/"
Microsoft-Extensions-Logging/MessageJson      39,708.700 dotnet (25292) ThreadID="285,016" Level="2" FactoryID="1" LoggerName="Demo.Controllers.HomeController" EventId="1000" ExceptionJson="{}" ArgumentsJson="{"{OriginalFormat}":"In Index method.............."}" ActivityID="//1/3/1/"

 

Hope this helps 🙂

Postmortem – Intermittent Failures for Visual Studio Team Services on 14 Dec 2017

On December 14th 2017 we started having a series of incidents with Visual Studio Team Services (VSTS) that had a serious impact on the availability of our service for many customers (incident blogs #1 #2 #3). We apologize for the disruption. Below we describe the cause and the actions we are taking to address the issues.

Customer Impact

This incident caused intermittent failures across multiple instances of the VSTS service within the US and Brazil. During this time we experienced failures within our application which caused IIS to restart, resulting in customer impact for various VSTS scenarios.

The incident started on 14 December. The graph below shows periods of customer impact for the Central US (CUS) and South Brazil (SBR) scale units.

What Happened

For context VSTS uses DNS for routing to the correct scale unit. On account signup VSTS queues a job to create a DNS record for {account}.visualstudio.com to point to the right scale unit. Because there is a delay between the DNS entry being added and being used, we use Application Request Routing (ARR) to re-route requests to the right scale unit until the DNS update is visible to clients. Additionally, VSTS uses web sockets to provide real time updates in the browser for pull requests and builds via SignalR.

The application pools (w3wp process) on the Brazil and Central US VSTS scale units began crashing intermittently on December 14th. IIS would restart the application pools on failure, but all existing connections would be terminated. Analysis of the dumps revealed that a certain call pattern would trigger the crash.

The request characteristics common to each crash were the following.

  1. The request was a web socket request.
  2. The request was proxied using ARR.

The issue took a while to track down because we suspected recent changes to the code that uses SignalR. However, the root cause was that on December 14th we had released a fix to an unrelated issue, and that fix added code to use the ASP.NET PreSendRequestHeaders event.  Using this event in combination with web sockets and ARR caused an AccessViolationException terminating the process. We spoke with the ASP.NET team and they informed us that the PreSendRequestHeaders method is unreliable and we should replace it with HttpResponse.AddOnSendingHeaders instead. We have released a fix with that change.
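As an illustration of the pattern change (a hedged sketch, not the actual VSTS code; the module name and the header being modified are assumptions), the move looks roughly like this:

using System.Web;

public class ResponseHeaderModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Old, unreliable pattern (can crash w3wp when combined with web sockets and ARR):
        // app.PreSendRequestHeaders += (s, e) => app.Context.Response.Headers.Remove("Server");

        // Recommended replacement: register a callback that runs right before
        // the headers for this particular response are sent.
        app.BeginRequest += (s, e) =>
            app.Context.Response.AddOnSendingHeaders(
                ctx => ctx.Response.Headers.Remove("Server"));
    }

    public void Dispose() { }
}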

While debugging the issue, we mitigated customer impact by redirecting ARR traffic once we realized that was the key cause.

Here is a timeline of the incident.

  1. IIS errors start in SBR
  2. IIS errors start in CUS1
  3. Workaround – stopped ARR traffic from going to SBR.
  4. Workaround – Redirected *.visualstudio.com wildcard from CUS1 to pre-flight (an internal instance).

Next Steps

In order to prevent this issue in the future, we are taking the following actions.

  1. We have added monitoring and alerting specifically for w3wp crashes.
  2. We are working with the ASP.NET team to document or deprecate the PreSendRequestHeaders method. This page has been updated, and we are working to get the others updated.
  3. We are adding more detailed markers to our telemetry to make it easier to identify which build a given scale unit is on at any point in time to help correlate errors with the builds that introduced them.

Sincerely,
Buck Hodges
Director of Engineering, VSTS

Profiling and postmortem debugging of .NET Core on Ubuntu

On Ubuntu (16.04), CPU profiling of a .NET Core application can make use of PerfView, the same tool we have used on Windows.

For CPU profiling, first install the perfcollect tool on the Ubuntu machine.

curl -OL http://aka.ms/perfcollect
sudo chmod +x perfcollect
sudo ./perfcollect install

Once the installation is complete, CPU sampling can be done in the following order.

In the terminal window where the application will run, execute the following.

export COMPlus_PerfMapEnabled=1
export COMPlus_EnableEventLog=1

Then open another terminal window for perfcollect and run the following to start CPU sampling.

sudo ./perfcollect collect hicputrace

Sampling starts as soon as the command runs.

Next, run the application in the application terminal window. After some time has passed, press Ctrl+C in the window where perfcollect was started; a hicputrace.trace.zip file is created in the current directory. That trace file can be viewed with the PerfView tool (http://aka.ms/perfview) on Windows.

Place the trace file collected on Ubuntu in the folder where Perfview.exe is located and run PerfView; you will see a screen like the one below.

Clicking the zip file then shows the profile information as below.

In particular, since what we want to see is the call stack consuming CPU in the .NET Core application, check the dotnet process in the CallTree view and expand the tree to see the call stack occupying the CPU.


Unfortunately, the perfcollect tool does not currently provide memory profiling. If you want that kind of information, you can check .NET managed memory usage through a core dump. The method is as follows.

There are several ways to collect a core dump, but considering the dump size, the following method can be used. First, before running the application, execute the command below so that a dump of sufficient size can be generated.

ulimit -c unlimited

Then run the application, and at the point where enough memory has leaked, open a new terminal window and stop the application as shown below; a dump is produced at that moment.

sudo kill -4 <pid>

The dump can be found in the application's working folder at the time it ran, or in the /var/crash folder.
A more general way to collect a core dump is as follows, using gdb.

sudo apt-get install gdb
sudo gdb

Then get the pid using ps -efH and attach gdb.

attach <pid>

After gdb is attached, you can obtain a core file at an appropriate moment with the generate-core-file command.

generate-core-file <core file path>

For dump analysis, open the dump with the lldb debugger. Because there is no .NET Core plugin available for gdb, the lldb debugger must be used.

The example below opens a core file named "core" in the lldb debugger.

Then load the libsosplugin.so file.

plugin load /usr/share/dotnet/shared/Microsoft.NETCore.App/2.0.0/libsosplugin.so
setclrpath /usr/share/dotnet/shared/Microsoft.NETCore.App/2.0.0

First, the "eeheap -gc" command provided by libsosplugin.so shows the total managed memory size the process is using.


The "dumpheap -stat" command shows information about the individual objects using memory within that managed memory region.


When investigating a memory leak, the objects with a large Count and TotalSize are the ones of interest. Looking below, System.Byte[] occupies much of the memory.


The MethodTable for System.Byte[] is 00007f8138ba1210, and we can see that 10565 objects of that type exist. The "dumpheap -mt" command lists the objects of the System.Byte[] type.

dumpheap -mt 00007f8138ba1210

Running the command above shows information arranged as <MT> <Address> <Size>. The "dumpobj" command then prints the details of an individual object, using an address value from the "dumpheap -mt" output as its parameter.


What really matters is the code that uses the object. The command that shows that information is gcroot, but as shown below it does not produce proper output against the dump.


However, if you attach lldb (3.6) directly to the application exhibiting the problem, you can see information like the following.


An ArrayList inside the memleakdemo.Program.ManagedLeak method is referencing the System.Byte[] instances. So by examining how large the ArrayList grows and when the Byte[] instances are released, we can isolate whether there is a memory problem.

// memleakdemo.Program.ManagedLeak (System.Object)
private static void ManagedLeak(object s)
{
    State state = (State)s;
    ArrayList list = new ArrayList();

    for (int i = 0; i < state._iterations; i++)
    {
        if (i % 100 == 0)
            Console.WriteLine(string.Format("Allocated: {0}", state._size));

        System.Threading.Thread.Sleep(10);
        list.Add(new byte[state._size]);
    }
}

Doing that ultimately comes down to how many objects you are willing to inspect; it would be nicer if a memory profiler were available that let you examine this kind of thing intuitively.

Capital Raising: How I raised over $1m in funding for my startup Recomazing

Recomazing is Australia's largest shared knowledge bank of recommended tools and services for startup growth.  Members of the Recomazing community get access to weekly recommendations on solving common startup problems from the likes of Canva, Atlassian, Airtasker, zipMoney, DropBox, Hubspot, Slack and more.  In this article, we've included the Recomazing profile links for Marc's recommended tools and resources.  Feel free to visit the profiles to see tips and insights from other leading entrepreneurs.

 

Guest post by Marc Cowper, founder of Recomazing.

 

When I first quit my job to start Recomazing I had absolutely no idea about the world of capital raising. I found the entire process incredibly overwhelming. Around every corner lingered a new term I had never heard of before...'Term Sheets', 'Pre-Money Value', 'Equity Splits', 'Series A'… it was like learning a new language.

To make matters worse, I found the info online to be conflicting, and rarely would I find anything from the entrepreneur's point of view. As a solo founder, I was already working 20 hours a day getting my business off the ground, so the thought of trying to piece together hundreds of separate articles to paint a full picture was utterly exhausting.

Two and a half years on, I'm happy to say Recomazing has completed two successful (read gruelling) rounds of funding and I've basically made every mistake you could possibly make. I've pitched 100+ times, received helpful advice from leading investors in the industry, spent countless hours researching online and attended numerous seminars.

This guide is an attempt to condense all my learnings in one place so any wide-eyed founder reading this will start from a better point than I did. To be as helpful as possible I will share my recommendations for all the tools and resources that helped me navigate through 2 successful rounds of funding.

I've organised my reco lists into the 3 most important segments of my journey:

      The Approach

      The Pitch

      The Money

 

♦The Approach

The first rule of startups is NEVER RAISE CAPITAL! Instead, go back, interrogate your business model and try to work out a way to create a case where you don't need to raise capital.

You should strive to get to a position where if investment is needed it is only to SCALE your established business, not CREATE your business (just ask the founders of Atlassian, Australia's most successful startup, who never needed a dollar of investor money until they wanted to scale).

Not every startup needs to raise capital, nor should they. Take it from someone who is stupid enough to have gone through this process multiple times; there is no greater distraction to your business than raising capital.

However, if you do need to go down the path of raising capital, here are some tips.

 

How do I get an investor to invest in my startup?

Think of it like it is your own money. If someone comes up to you and says "hey, I’ve got a good idea but I need 100k from you" you're going to have some pretty serious concerns about the risk involved. What would it take to convince you to invest? No doubt, it would require a hell of a lot more than just an idea on a swish looking presentation.

Here are my top recos for connecting with investors and making a great first impression (so you’ll get a chance to make a second).

 

Recomazing Score Card

Investors will look to minimise their risk. The less risk, the more likely they will invest. I’ve created a graphic outlining some of the most common 'scorecard' factors discussed in my pitch meetings.


Pitching Hacks

Pitching Hacks is the first book released from Venture Hacks about pitching startups to investors and raising capital.

 

A warm referral beats a cold intro every day of the week! The investor will be more open to hearing your pitch if you come with a warm intro.

Always write a suggested intro email for whoever is referring you. It makes everyone's lives easier and no-one should be able to sell it better than you.   Pitching Hacks is a great resource to check out if you want to dig into the detail of how to structure your emails.  I found out about Pitching Hacks by checking out the profile of Will Davies, founder of Car Next Door, on Recomazing.  Will has raised millions so I jump on any reco he has to offer.


Streak

CRM in your Inbox

Before you start identifying which investors are right for you, you'll need a place to store all the info. Put down that excel sheet and download Streak. Streak is a FREE CRM tool that lives right within your Gmail account. The templates are perfect for keeping track of your investor details and comms.


LinkedIn

 Manage your professional identity. Build and engage with your professional network. Access knowledge, insights and opportunities.

LinkedIn is the ultimate tool for professional stalking. Find someone who knows your lead and get an intro. If you don't know them, stalk them. Look at what events they are going to and go to the next one. Comment on their posts and build your online reputation. Make sure you get your profile looking schmick before you reach out.

 


Meetup

Find or create groups to meet people near you who share your interests.

A lot of the top investors regularly speak at meetups and seminars, so it really isn't hard to introduce yourself. Download the Meetup app and type in 'startups' - there's a stack of events to attend. Remember, an investor's job is to find startups worth investing in, so don't feel bad about sharing your elevator pitch.

P.S. I met my first investor by bailing him up after he spoke on an angel investment panel at a Meetup seminar. He has been our 'fairy godfather' ever since (Hi David!)…so get out there to some meetups and hustle.

 

 

♦The Pitch

What should I say when first reaching out?

I always found it helpful to have 3 key items sorted before reaching out to investors:

1. Your one sentence pitch.

2. Your elevator pitch.

3. Your pitch presentation.

 

Your one sentence pitch

This is your ‘sound bite’ that should immediately convey the essence of your startup and commonly (but not necessarily) references a well known business model (eg Recomazing is the 'Trip Advisor' for the best tools & services to run a business.)

I actually hate using this sound bite, as I view Recomazing as so much more than a review site: it has a real sense of community, collaboration and curation. But that's not the point of the sound bite.

I came to realise that the point of the one sentence sound bite is to give your investor an immediate grasp of your model (and the strengths/weaknesses you may be incurring). It's an entry point to then talk about the rest.

The truth is a lot of startups just apply a well known model to a different industry. It's the reason you hear so many startups refer to themselves as the "Uber for XYZ". It keeps things simple.

 

Your elevator pitch

This needs to be a bit longer than your sound bite and provides a bit more insight into your secret sauce eg.

Recomazing helps entrepreneurs discover the best tools and services to grow their business.

We believe in the 3 C's:

Community: Our members help each other by contributing to the shared knowledge bank of trusted recommendations.

Collaboration: We partner with leading business communities to foster greater collaboration between members (eg coworking spaces, accelerators, online networks etc).

Curation: We source expertise from leading tech entrepreneurs who want to 'give back' and help our members grow (eg Atlassian, Dropbox, AirTasker etc).

 

Your pitch deck

Typically, investors will request your pitch be 10-12 slides with time for Q&A at the end. I always found there were similar questions at the end so I prepared those slides in advance and put them in the appendix in case they were asked.

Here's a little infographic to demonstrate the basics on what to include. After your first pitch you should get a sense of how to adapt it and where you need to potentially add more detail but this is a great base to start from.


If you are constantly getting asked questions to clarify your slides it's a sign your pitch isn't clear and needs updating. 

 If you find you are getting asked more detailed questions than what is expected from your 10 slides, that's typically a good sign.

 

 


Startup Pitch Decks

A collection of real pitch decks from real startups that have raised over $400 million.

 

If you leave a pitch feeling like the investors just aren't understanding how awesome your idea is, try to avoid shifting blame to them; instead, ask yourself why they aren't getting it:

       Are your slides clear enough?

       Are you not encapsulating the vision well enough?

       Are you getting stuck in the detail instead of focusing on the core premise?

Startup Pitch Decks gives you a view on how some of the world's top startups initially pitched their businesses (including Airbnb, Intercom, Buffer etc)

 


Pitchbot

Pitchbot is an email based dealflow tool for VCs and angel investors.

Pitchbot is a great tool I wish I had before starting my raising process. It's a bot that asks you common questions from pitch meetings - give it a crack!

 

♦The Money

It’s exciting when you get to this stage but, for a first-time founder, this can be the most confusing and harrowing part of the entire process. Here are some fantastic tools, resources and partners that helped me demystify the investment process and work my way through term sheets and agreements more efficiently. AND, this guide would not be complete without including recos for my investors in each phase. I wouldn’t be writing this without them 🙂

Funders and Founders

Startup blog covering entrepreneurs and investors through infographic media.

I found a great infographic on Fundersandfounders.com that was really helpful when trying to wrap my head around the investment and valuation process.


AVCAL

National association which represents the Australian venture capital industry's participants, promotes the industry and encourages investment in high growth businesses.

You can view the open source seed financing docs released by AVCAL here to get an idea of the terms you can expect to see in a term sheet. A number of major VC firms have pledged to use these templates.


Airtree VC

AirTree is an early and growth stage venture firm backing world-class entrepreneurs.

Airtree have released a 'plain English' term sheet, which is available here.

 


muru-D

Startup accelerator backed by Telstra.

Our friends and valued partners at muru-D just implemented SAFE (Simple Agreement for Future Equity) docs, originally created by the world's leading accelerator, Y Combinator. It's a great move for our ecosystem. To see how the terms differ, you can view the details of a SAFE doc here.

 


Monash Private Capital

Monash Private Capital is an independent principal investment and advisory firm providing capital, asset management and strategic advisory solutions.

My experience with Monash Private Capital for our seed round was great. A lot of investors say they ‘invest in people’ above all else but personally I found that revenue often gets a higher priority over people (which is understandable). I found Monash truly believe in investing in people (although those people need to have a very good commercialisation strategy in place).


Our Innovation Fund

OIF Venture Capital invests in early stage businesses with innovative, high growth or disruptive technologies or business models with demonstrated market demand.

 Our Innovation Fund is the VC fund that led our second round. I speak to a lot of founders who aren’t happy to recommend their investment team but I can gladly say that’s not the case with us. We love working with the OIF team, they’ve always helped in any way they can.


Investible

Investible provides high potential founders the support, capital and networks they need to grow businesses to scale under one roof.

Investible have been very helpful in introducing us to their networks and setting up key partnerships.


Y Combinator Blog

Insights from the world's leading accelerator.

Get insights from the blog of Y Combinator, the #1 accelerator in the world. This blog is full of startup wisdom. I wish I had discovered it earlier in my startup journey.

 

And there you have it, everything I wish I knew when I started out. I hope this has helped demystify the space and give you some immediate shortcuts on your own cap raising journey!

 

We hope you found this content helpful.  If you’ve had a good experience with BizSpark, we’d love for you to share your 'reco' with the rest of the startup community on our profile. 

Let's bring in the New Year together – Make 2018 Epic by getting to know Smart Partner Marketing

Wherever you are in your business journey, from budding start-up to sprawling enterprise, chances are you’d never turn your back on a promising marketing tip. That’s why our Smart Partner Marketing site is a can’t-miss. Get started by using our assessment tool to see where your business stands, then dive in to a curated collection of marketing resources that align with your goals.

The Smart Partner Marketing site has marketing recommendations for companies of all sizes—whether you’re looking to build a foundation, amplify your presence, or strengthen customer relationships. Microsoft is here to help you separate your business from the competition, attract and retain the right customers, and get on the path to sustained growth. Learn more about Smart Partner Marketing today.

User authentication for hosting internal web systems on Azure Web Apps

Introduction

We often receive inquiries from customers who want to migrate web systems currently used inside their companies to Azure Web Apps, and in most cases the reasons are as follows:

  • They want to reduce maintenance and operation costs by using PaaS
  • They want to support mobile devices to improve work styles

Internal systems are generally used during business hours on working days, so resource utilization tends to be low relative to the total operating time. Web Apps, which scales out and in easily and is cost efficient, is therefore a good fit. A Web App is also fundamentally a website published on the public internet, so the infrastructure side is already in place; all that remains is for the application and its content to support users' mobile devices.

On the other hand, many customers hesitate, and the reason is usually security. There are many security options to consider for Web Apps, but in this article I would like to cover "user authentication and authorization," which is definitely required when building an internal system.

Identity management and user authentication

In most cases, the users of an "internal system" belong to the same organization, and their user IDs are managed and operated in some kind of identity management system. For organizations that have traditionally used the Windows platform, this is AD (Active Directory). For a web server intended to be used from the intranet, delegating user authentication to AD provides the authentication an internal system needs.

However, most organizations and companies cannot use the same approach over the internet. In such cases, Azure Active Directory is available as an identity management and authentication service that can be used from the internet. Azure AD can be used on its own, but it can also synchronize identity information with an on-premises AD or federate authentication with it. By delegating authentication to an Azure AD that is linked with the on-premises AD, you can build an "internet-facing web system that only employees can use."

Authentication using Azure Active Directory

There are two ways to authenticate users with Azure AD:

  • Use it from an application running on any web server
  • Use the authentication feature built into Web Apps itself (EasyAuth)

In the former case (the right side of the figure), the application itself implements authentication by speaking an authentication protocol that Azure AD supports. This approach is independent of the platform the application runs on, but it requires preparing, configuring, and coding against authentication middleware or SDKs for each development language.

In the latter case (the left side of the figure), authentication can be enabled simply by configuring the Web App and Azure AD, that is, without writing any code, which makes it very easy. However, authentication only takes effect when the application is running on Web Apps, so a little extra work is needed during development and testing. The rest of this article covers the latter, easily configurable approach.

Behavior when a Web App authenticates with Azure AD

The behavior when a Web App uses Azure AD authentication is as follows. It is easiest to think of it as the familiar cookie-based forms authentication, except that the sign-in page is an external site (technically they are different).

  1. The user (browser) accesses the Web App
  2. Because the user is not authenticated, the request is redirected to the Azure AD sign-in page
  3. The user enters their ID, password, and so on, and after successful authentication is redirected back to the Web App
  4. This time the user is authenticated, so the Web App passes the request through to the application

Configuring Azure AD authentication for the Web App

To get the behavior described above, configuration information must be exchanged between Azure AD and the Web App. When actually setting this up you have to go back and forth between the Web App and Azure AD settings screens, which can be a little confusing, so take care.

  • Information required on the Azure AD side
    • Registration of the Web App whose authentication it will handle
    • The destination to which the result is passed after successful authentication
  • Information required on the Web App side
    • The Azure AD tenant to which authentication is delegated
    • The ID representing the Web App registered in Azure AD

[Step 1] Create the Web App

First, create an Azure Web App. At this stage authentication is not configured yet, so accessing the URL shows the default page. Note down this URL, because it will be used when registering the application on the Azure AD side.

[Step 2] Register the Web App that will authenticate users on the Azure AD side

Next, in the upper right of the Azure portal, select the Azure AD tenant where the users you want to authenticate are registered, select "Azure Active Directory" from the menu on the left, and note down the "Directory ID" shown on the Properties screen.

Then register the application information from "App registrations". The "Name" is the registration name within the Azure AD tenant, so give it a unique and easy-to-understand name. For the "Sign-on URL", enter the URL of the Web App you created earlier.

Once registration is complete, the app appears on the "App registrations" screen; click it and note down the "Application ID". Also, in the registered app's properties, enter the Web App URL as the "Home page URL". The "Reply URL" is pre-populated with the Web App URL as-is, so change the protocol to HTTPS and append "/.auth/login/aad/callback" to the path (a small sketch of this adjustment follows).
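As a tiny illustration of that last adjustment (the Web App URL below is a placeholder, not one from this article), the Reply URL can be derived from the Web App URL like this:

# Hypothetical Web App URL from Step 1 - replace with your own.
app_url = "http://contoso-internal.azurewebsites.net/"

# Switch the protocol to HTTPS and append the EasyAuth callback path described above.
reply_url = "https://" + app_url.split("://", 1)[1].rstrip("/") + "/.auth/login/aad/callback"

print(reply_url)   # https://contoso-internal.azurewebsites.net/.auth/login/aad/callback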

[Step 3] Delegate user authentication to Azure AD on the Web App side

Open the Web App you created at the beginning and select "Authentication / Authorization". Set App Service Authentication to "On" and specify that unauthenticated requests should require "Log in with Azure Active Directory". Select "Azure Active Directory" as the authentication provider, and enter the values from the app registration you performed earlier:

  • For "Management mode", select "Advanced"
  • For "Client ID", enter the "Application ID" issued by Azure AD
  • For "Issuer Url", enter "https://login.microsoftonline.com/<Directory ID>"

By specifying the "Directory ID" that identifies the Azure AD tenant and the "Application ID" that identifies the application registered in it, the Web App can uniquely determine where to delegate authentication. Based on this information, unauthenticated users are turned away with a redirect that effectively says "before accessing this app, bring a token proving you have permission for it in the specified directory." Strictly speaking this is "authorization" rather than user authentication, but this article uses the more familiar(?) term authentication.

[Step 4] Verify that user authentication works

If you access the Web App configured as above while unauthenticated, you can confirm that you are automatically redirected to the Azure AD sign-in page. If you use the same web browser you were using for the Azure portal, cached credentials may sign you in automatically and make the behavior hard to see, so I recommend using InPrivate browsing or a different web browser. A scripted version of this check follows.
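For a quick non-browser check of the same behavior, a small script can confirm that unauthenticated requests are turned into a redirect to Azure AD. This is only an illustrative sketch: the app URL is a placeholder, and it assumes the "Log in with Azure Active Directory" action configured in Step 3.

import requests

# Hypothetical Web App URL - replace with the URL you noted in Step 1.
APP_URL = "https://contoso-internal.azurewebsites.net/"

# Do not follow redirects so the first response can be inspected directly.
response = requests.get(APP_URL, allow_redirects=False)
location = response.headers.get("Location", "")

print("Status code:", response.status_code)   # expected: 302
print("Location   :", location)

# With App Service authentication working, the Location header should point
# at the Azure AD sign-in endpoint for the configured tenant.
if response.status_code == 302 and "login.microsoftonline.com" in location:
    print("Unauthenticated requests are redirected to Azure AD as expected.")
else:
    print("No redirect to Azure AD - check the Authentication / Authorization settings.")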

Express mode vs. Advanced mode

In Step 3 we selected "Advanced" as the management mode, but if you select "Express" here instead, the configuration equivalent to Step 2 is performed automatically, and the work in Step 2 becomes unnecessary. However, Express mode has the following two prerequisites, so there are environments where it cannot be used:

  • The Azure AD tenant associated with the Azure Web App and the Azure AD tenant used for user authentication must be the same
  • The Azure AD tenant must be configured so that "Users can register applications"

User authentication using a different AAD tenant

For example, when the web app's developers and its users belong to different organizations, the developers' identity management and the app users' identity management may be independent of each other. Even within the same organization, there are many cases where the tenant that manages developer and user identities is not used for Azure subscription management, and a different tenant or Microsoft accounts are used instead. Because Express mode does not let you choose the tenant used for user authentication, in such cases the Azure AD configuration and the Web App configuration have to be done independently, and Advanced mode is required.

Having the app registration done on your behalf for user authentication

In relatively large organizations, the team developing applications on Azure and the team managing the organization's user identities are often separate. In such cases it is the identity management team that holds administrative rights for Azure AD and handles its maintenance and operation, and an app developer, even if they are an "Azure administrator", is only an "ordinary user in Azure AD".

Depending on the organization's security policy, application registration rights may not be granted to ordinary users, in which case you have no choice but to ask the identity management team to register the application in Azure AD on your behalf. When doing so, the development side hands over the Web App information configured in Step 2 and is told the tenant ID and application ID needed in Step 3. Such cases therefore also require configuration in Advanced mode.

Note that if a user who does not have application registration rights attempts the Express mode configuration, an error like the following occurs and the configuration fails.

Registration by proxy is a hassle, isn't it?

By default, Azure AD is configured so that "Users can register applications". Conversely, if this is not possible, the setting may have been deliberately changed because it conflicts with the organization's security policy. On the other hand, if an application cannot authenticate users against the organization's official Azure AD tenant, a separate identity management and authentication infrastructure becomes necessary, which is itself a problem from a security and governance perspective. That means a request workflow like the "registration by proxy" described above becomes organizationally necessary, but that also takes time and effort, so I recommend keeping the default setting that allows users to register applications.

Summary

Whether you use Express mode or Advanced mode, adding Azure AD authentication to an Azure Web App is not particularly difficult, but developers who understand the Azure AD side of the configuration to some extent will find it much easier to adapt to different situations, so this article deliberately focused on Advanced mode, the comparatively "troublesome" procedure.

New book: Windows 10 Step by Step, 2nd Edition

We’re happy to announce the availability of Windows 10 Step by Step, 2nd Edition (ISBN 9781509306725), by Joan Lambert.

Purchase from these online retailers:

Microsoft Press Store
Amazon
Barnes & Noble
Independent booksellers – Shop local

Overview

The quick way to learn today’s Windows 10!

This is learning made easy. Get more done quickly with the newest version of Windows 10. Jump in wherever you need answers—brisk lessons and colorful screenshots show you exactly what to do, step by step.

  • Do what you want to do with Windows 10!
  • Explore fun and functional improvements in the newest version
  • Customize your sign-in and manage connections
  • Quickly find files on your computer or in the cloud
  • Tailor your Windows 10 experience for easy access to the information and tools you want
  • Work more efficiently with Quick Action and other shortcuts
  • Get personalized assistance and manage third-party services with Cortana
  • Interact with the web faster and more safely with Microsoft Edge
  • Protect your computer, information, and privacy

Introduction

Welcome to the wonderful world of Windows 10! This Step by Step book has been designed so you can read it from the beginning to learn about Windows 10 and then build your skills as you learn to perform increasingly specialized procedures. Or, if you prefer, you can jump in wherever you need ready guidance for performing tasks. The how-to steps are delivered crisply and concisely—just the facts. You’ll also find informative, full-color graphics that support the instructional content.

Who this book is for

Windows 10 Step by Step, Second Edition is designed for use as a learning and reference resource by home and business users of desktop and mobile computers and devices running Windows 10 Home or Windows 10 Pro. The content of the book is designed to be useful for people who have previously used earlier versions of Windows and for people who are discovering Windows for the first time.

What this book is (and isn’t) about

This book is about the Windows 10 operating system. Your computer’s operating system is the interface between you and all the apps you might want to run, or that run automatically in the background to allow you to communicate with other computers around the world, and to protect you from those same computers.

In this book, we explain how you can use the operating system and the accessory apps, such as Cortana, File Explorer, Microsoft Edge, and Windows Store, to access and manage the apps and data files you use in your work and play.

Many useful apps that are part of the Windows “family” are installed by manufacturers or available from the Store. You might be familiar with common apps such as Calendar, Camera, Groove Music, Mail, Maps, News, Photos, and Windows Media Player. This book isn’t about those apps, although we do mention and interact with a few of them while demonstrating how to use features of the Windows 10 operating system.

The Step by Step approach

The book’s coverage is divided into parts that represent general computer usage and management skill sets. Each part is divided into chapters that represent skill set areas, and each chapter is divided into topics that group related skills. Each topic includes expository information followed by generic procedures. At the end of the chapter, you’ll find a series of practice tasks you can complete on your own by using the skills taught in the chapter. You can use the practice files that are available from this book’s website to work through the practice tasks, or you can use your own files.

Features and conventions

This book has been designed to lead you step by step through all the tasks you’re most likely to want to perform in Windows 10. If you start at the beginning and work your way through all the procedures, you’ll have the information you need to administer all aspects of the Windows 10 operating system on a non-domain-joined computer. However, the topics are self-contained, so you can reference them independently. If you have worked with a previous version of Windows, or if you complete all the exercises and later need help remembering how to perform a procedure, the following features of this book will help you locate specific information.

  • Detailed table of contents   Search the listing of the topics, sections, and sidebars within each chapter.
  • Chapter thumb tabs and running heads   Identify the pages of each chapter by the colored thumb tabs on the book’s open fore edge. Find a specific chapter by number or title by looking at the running heads at the top of even-numbered (verso) pages.
  • Topic-specific running heads   Within a chapter, quickly locate the topic you want by looking at the running heads at the top of odd-numbered (recto) pages.
  • Practice task page tabs   Easily locate the practice task sections at the end of each chapter by looking for the full-page colored stripe on the book’s fore edge.
  • Glossary   Look up the meaning of a word or the definition of a concept.
  • Keyboard shortcuts   If you prefer to work from the keyboard rather than with a mouse, find all the shortcuts in one place in the appendix, “Keyboard shortcuts and touchscreen tips.”
  • Detailed index   Look up specific tasks and features in the index, which has been carefully crafted with the reader in mind.

You can save time when reading this book by understanding how the Step by Step series provides procedural instructions and auxiliary information and identifies on-screen and physical elements that you interact with.

About the author

Joan Lambert has worked closely with Microsoft technologies since 1986, and in the training and certification industry since 1997. As President and CEO of Online Training Solutions, Inc. (OTSI), Joan guides the translation of technical information and requirements into useful, relevant, and measurable resources for people who are seeking certification of their computer skills or who simply want to get things done efficiently.

Joan is the author or coauthor of more than four dozen books about Windows and Office apps (for Windows, Mac, and iPad), five generations of Microsoft Office Specialist certification study guides, video-based training courses for SharePoint and OneNote, QuickStudy guides for Windows and Office apps, and the GO! series book for Outlook 2016.

Blissfully based in America’s Finest City, Joan is a Microsoft Certified Professional, Microsoft Office Specialist Master (for all versions of Office since Office 2003), Microsoft Certified Technology Specialist (for Windows and Windows Server), Microsoft Certified Technology Associate (for Windows), Microsoft Dynamics Specialist, and Microsoft Certified Trainer.

Windows 10 Step by Step, Second Edition, is based on the original book coauthored by Joan and her father, Steve Lambert. Joan’s first publishing collaboration with Steve was the inclusion of her depiction of Robots in Love in one of his earliest books, Presentation Graphics on the Apple Macintosh (Microsoft Press, 1984).


High Value Scenarios – 4-Step Load Test

High Value Scenarios consist of multiple elements that can help to describe attributes of an application's performance effectively. There are a few different scenarios that I use on a day-to-day basis, which provide excellent analysis points. In this post, I will attempt to explain one of the scenarios, how to build it, how to analyze the results and why it is effective.

Stair Step
My standard load test is a four-step stair step test. I use this for standard baselining as well as KPI analysis of throughput and response times.

It looks something like this:

There are four stair steps, each one adding 0.5X of the target throughput, where X is defined as the average hourly load of the application, either projected or taken from actual production load.

Once you have X defined, use it to calculate the number of users you need (see this previous post). After you have the number of users needed for 1X, start with 0.5X and add 0.5X every 15 minutes, for a total test time of one hour, as in the sketch below.
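Here is a minimal sketch of that schedule. Both input numbers are hypothetical placeholders, not figures from this post; work them out for your own application as described above.

# Minimal sketch of the 4-step schedule. The two inputs below are placeholders.
requests_per_hour_1x = 36_000        # X: average hourly load (projected or from production)
requests_per_user_per_hour = 120     # what one virtual user generates per hour

users_for_1x = requests_per_hour_1x / requests_per_user_per_hour
step_minutes = 15

for step in range(1, 5):                  # four 15-minute steps = one hour total
    multiplier = 0.5 * step               # 0.5X, 1.0X, 1.5X, 2.0X
    users = round(users_for_1x * multiplier)
    start = (step - 1) * step_minutes
    print(f"{start:>2}-{start + step_minutes:>2} min: {multiplier:.1f}X -> {users} virtual users")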

Analyzing Response Times
There are a few different things that response times can do during this scenario: stay constant, increase linearly, or increase logarithmically. Consistent response times tell me that the application can handle much more load than we are running. Linear response times show that the application is starting to queue, but not so much that it is past the point of failure. Logarithmic growth shows that there is substantial queuing happening somewhere in the application and that the increasing response times are compounding it. A rough way to classify the measured trend is sketched below.
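As a rough illustration (not part of the original test rig), the sketch below takes hypothetical per-step averages and checks which of the three shapes fits them best. It only assumes numpy and made-up numbers.

import numpy as np

# Hypothetical per-step averages from one stair-step run (placeholders).
users = np.array([50.0, 100.0, 150.0, 200.0])   # users at each step
resp = np.array([0.80, 0.95, 1.60, 2.40])       # avg response time in seconds

def sse(predicted):
    # Sum of squared errors between a fitted curve and the observations.
    return float(np.sum((resp - predicted) ** 2))

# Constant: response time does not react to the added load.
constant = np.full_like(resp, resp.mean())

# Linear: response time grows in proportion to load (steady queuing).
lin_fit = np.polyval(np.polyfit(users, resp, 1), users)

# Logarithmic: response time grows with the log of the load.
log_fit = np.polyval(np.polyfit(np.log(users), resp, 1), np.log(users))

errors = {"constant": sse(constant), "linear": sse(lin_fit), "logarithmic": sse(log_fit)}
print("Sum of squared errors per shape:", errors)
print("Best-fitting shape:", min(errors, key=errors.get))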

Analyzing Throughput
When analyzing throughput, you should see a steady linear trend. Actual throughput should increase at the same rate as the scheduled throughput from the test rig.

Putting it together
To determine how the application is performing under varying degrees of load, analyze the response times and throughput of this scenario. This should explain how the application will respond in a real-world production environment with fluctuating load.

Some great places to use this type of scenario would be applications with predictable workloads. It is a great baselining tool that provides multiple points of analysis you just can't get from a standard consistent-load scenario, and it prevents the data skewing associated with a max-capacity scenario.

Exceptions to this scenario
This scenario will not work well for applications that have large spikes in throughput.
This scenario will not typically tell you the maximum capacity of the system (Unless the application tips over during one of the load levels)

Example 1 – Consistent KPI:

Example 2 – Linear KPI:

Example 3 – Logarithmic KPI:

KPI – Key Performance Indicator (Response Times, Throughput, Error Rate, CPU, etc.)

Deploy the ASPNET Core App on Linux and Capture Perfview traces

My previous blog covered capturing PerfView traces for an ASP.NET Core MVC application on a Windows box.

This blog targets capturing PerfView traces for an ASP.NET Core MVC application on a Linux box.

Pre-requisites:
1. On the Windows Development Box ensure that you have the below components:

  • putty.exe: This is used to connect to your Guest LINUX box
  • pscp.exe: This is a command line application to securely transfer the files.

2. Have a Linux operating system with a recent release. I am using Ubuntu 17.04 for my demo.

3. The ASPNET Core application with the logging enabled. More Info on enabling logging can be found here in my previous blog.

 

Let's get started.

Step 1: Installing the Dotnet Core SDK on Linux

1. Install the Dotnet Core SDK for Linux from this article

Then run the commands mentioned in the above article. Note that you need to run the commands that match your OS version. If you are not sure of the version, you can simply run the below command:

navba@CoreLinuxDemo:~$ lsb_release -a

No LSB modules are available.

Distributor ID: Ubuntu

Description: Ubuntu 17.04

Release: 17.04

Codename: zesty

 

2. After adding the dotnet product feed and installing the .NET Core SDK, you can test the installation by simply creating a sample .NET MVC application using the dotnet CLI. We are creating the app within the myapp directory.

      navba@CoreLinuxDemo:~$ dotnet new mvc -o myapp

To see the contents / files the above command gave you, run the below commands:

navba@CoreLinuxDemo:~$ cd myapp

navba@CoreLinuxDemo:~/myapp$ ls

appsettings.Development.json appsettings.json bower.json bundleconfig.json Controllers Models myapp.csproj obj Program.cs Startup.cs Views wwwroot

 

3. To restore the dependencies of your application, you can run the below command:

navba@CoreLinuxDemo:~/myapp$dotnet restore

Restore completed in 39.36 ms for /home/navba/myapp/myapp.csproj.

Restore completed in 27.37 ms for /home/navba/myapp/myapp.csproj.

To run the application, you can run the below command:

navba@CoreLinuxDemo:~/myapp$ dotnet run

warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]

No XML encryptor configured. Key {79a1ec79-a634-4707-936e-a91a85576f75} may be persisted to storage in unencrypted form.

Hosting environment: Production

Content root path: /home/navba/myapp

Now listening on: http://localhost:5000

Application started. Press Ctrl+C to shut down.

Note: the dotnet run command will internally run the dotnet restore / dotnet build commands.

4. To test the application, you can launch another instance of PuTTY, connect to your Linux box, and try accessing http://localhost:5000 using curl as shown below:

    navba@CoreLinuxDemo:~$ curl http://localhost:5000

You should be getting the html response of the application.

Alternatively, you can also try accessing the app using wget as shown below:

navba@CoreLinuxDemo:~$ wget http://localhost:5000

--2017-12-21 12:11:50-- http://localhost:5000/

Resolving localhost (localhost)... 127.0.0.1

Connecting to localhost (localhost)|127.0.0.1|:5000... connected.

HTTP request sent, awaiting response... 200 OK

Length: unspecified

Saving to: 'index.html'

index.html [ <=> ] 8.62K --.-KB/s in 0s

2017-12-21 12:11:50 (162 MB/s) - 'index.html' saved [8827]

Once we get the expected response from the application, we can confirm that the .NET Core SDK is installed correctly and our application is functioning properly.

 

Step 2: Install Nginx server

1. We install the nginx server using the below command:

      navba@CoreLinuxDemo:~/myapp$ sudo apt-get install nginx

2. Once it is installed, try to access your nginx server using the external IP address of the Linux box. If you see the nginx home page, then your nginx server installation succeeded and it's functioning fine.

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.



Note: Remember to open port 80 in the external firewall (if any).

 

Step 3: Configuring Nginx as reverse proxy to dotnet core application:

1. We will clear the default configuration using the below command:

    navba@CoreLinuxDemo:~/myapp$ sudo truncate -s 0 /etc/nginx/sites-available/default

2. Modify the nginx config using the below command:

    navba@CoreLinuxDemo:~/myapp$ sudo nano /etc/nginx/sites-available/default

The editor opens, and you need to manually enter the below to avoid any syntactical errors:

server {
    listen 80;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Note: The port the dotnet process listens on can be different; ensure that you put the correct one in the nginx config file.

3. To ensure that you have the syntax placed right in the configuration file you can run the below command:

navba@CoreLinuxDemo:~/myapp$ sudo nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

 

4. If the above command throws any syntactical errors, you need to verify the nginx configuration and manually type the above entry again.

5. The below command should reload the new configuration settings.

   navba@CoreLinuxDemo:~/myapp$ sudo nginx -s reload

6. Then you run the dotnet run command again to spawn up the dotnet.exe process.

   navba@CoreLinuxDemo:~/myapp$ dotnet run

7. Now try to access the application externally. You should see the Home/Index page of your MVC application; our nginx server has successfully proxied the request to the dotnet application. A minimal scripted check follows.
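Here is a small scripted version of that check, run on the Linux box itself. The external IP is only a placeholder (the same one used in the pscp example later in this post), as is the assumption that both endpoints serve identical markup.

import requests

# Placeholders: Kestrel listening locally, nginx on the box's external IP.
KESTREL_URL = "http://localhost:5000/"
NGINX_URL = "http://52.170.94.20/"

direct = requests.get(KESTREL_URL, timeout=10)
proxied = requests.get(NGINX_URL, timeout=10)

print("Direct to Kestrel:", direct.status_code)
print("Through nginx    :", proxied.status_code)

# If the reverse proxy is wired up correctly, both requests should return 200
# and the same MVC Home/Index markup.
print("Same page served :", direct.text == proxied.text)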


Note: If you are having trouble in getting this page, please follow the above steps again.

Now that the application is up and running on Linux via nginx you can try capturing perfview traces.

Note that you can do custom logging so that it gets logged within the perfview using this article.

 

Step 4: Deploying your concerned Core application to Linux:

If you need help moving your own application from the Windows environment to Linux before capturing the PerfView traces, you can use pscp.exe.
1. On your dev box, open the CMD prompt in admin mode.

2. Navigate to the location where you placed your application.

3. From this folder, run pscp.exe (using its full path) with the below command to copy your application contents into the myapp folder under your profile on the Linux box.

  E:\dotnetCoreDemo> C:\temp\pscp.exe -r * navba@52.170.94.20:/home/navba/myapp

4. After this, go back to your Linux box and run the ls command to ensure that your files are in place.

5. Run the dotnet run command to launch the dotnet process and note the port number it is listening on. Then modify the nginx configuration file by following the steps from the Step 3 section above.

 

Step 5: Capturing Perfview traces for your application

1. Launch a different instance of PUTTY and connect to your LINUX box.

2. The below command will download the perfview tool:

   navba@CoreLinuxDemo:~$curl -OL http://aka.ms/perfcollect

3.  You can confirm it again by using ls command.

navba@CoreLinuxDemo:~$ ls

index.html myapp perfcollect

4. You only need to give it execute permission; I simply gave it full permissions for all groups to avoid any issues.

  navba@CoreLinuxDemo:~$chmod 777 perfcollect

5. Install the perfcollect prerequisites using the below command:

  navba@CoreLinuxDemo:~$ sudo ./perfcollect install

6. Add the below environment variable before you start collecting the trace:

  navba@CoreLinuxDemo:~$ export COMPlus_PerfMapEnabled=1

7. Then run the below command to start collecting the trace. Ensure that the dotnet process is running and access your application to reproduce the scenario:

navba@CoreLinuxDemo:~$ sudo ./perfcollect collect MyPerfviewFile

Collection started. Press CTRL+C to stop.

8. Once you are sure that you have captured the issue, stop the collection by pressing Ctrl + C; the output looks like the following:

^C

...STOPPED.

Starting post-processing. This may take some time.

Generating native image symbol files

...SKIPPED

Crossgen not found. Framework symbols will be unavailable.

See https://github.com/dotnet/coreclr/blob/master/Documentation/project-docs/linux-performance-tracing.md#resolving-framework-symbols for details.

Saving native symbols

...FINISHED

Exporting perf.data file

...FINISHED

Compressing trace files

...FINISHED

Cleaning up artifacts

...FINISHED

Trace saved to MyPerfviewFile.trace.zip

9. You can run the ls command to see the collected trace:

navba@CoreLinuxDemo:~$ ls

index.html lttng-traces myapp MyPerfviewFile.trace.zip perfcollect

 

For more info about analyzing a PerfView trace collected on Linux, see:

https://blogs.msdn.microsoft.com/vancem/2016/02/20/analyzing-cpu-traces-from-linux-with-perfview/

 

Hope this helps 🙂

Excel 2016 version 1712 crashes when closing a UserForm displayed from Workbook_Open

Hello, this is Nakamura from the Office development support team.

In Excel 2016 environments with Office 2016 update version 1712 applied (already released to the Insider channels in December 2017 and scheduled for release to the other channels from January 2018 onward), we have confirmed an issue where Excel crashes when closing a UserForm that was displayed at the same time the workbook opened. This article describes the conditions under which the issue occurs and the workarounds currently available. We plan to keep updating this article with the status of the fix.

 

1. Details of the issue

In an Office 2016 version 1712 (build 8827.2082 or later) environment, when you open a file containing a macro that displays a UserForm modally from processing that runs as the workbook opens, such as the Workbook_Open event handler, the UserForm is displayed normally, but Excel crashes when the UserForm is closed.

 

Repro steps
1. Create a new Excel file.
2. Start the Visual Basic Editor and add a UserForm (UserForm1).
3. Add the following code to the ThisWorkbook object.

Private Sub Workbook_Open()
    UserForm1.Show
End Sub

4. Save the file in .xlsm or .xls format and close it.
5. Open the file saved in step 4.
6. The macro runs automatically and the UserForm is displayed. * Depending on the macro security settings and the document's trust state, a warning bar may be displayed and the macro will not run automatically. Details are described in section 2.
7. Close the displayed UserForm.
Result: Excel crashes.

 

Details of the conditions

  • The issue occurs only when the macro runs automatically as the workbook opens. There is no problem when the yellow security warning bar is shown, the macro is temporarily disabled, and it is then run via [Enable Content].
  • The issue occurs with any of the Workbook_Open event handler, the Application.WorkbookOpen event handler, and the Auto_Open method that run when the workbook opens.
  • The issue occurs regardless of whether the UserForm is closed with the × button, the Unload method, or the Hide method.
  • The issue occurs only when the UserForm is displayed modally. There is no problem with modeless display.
  • There is no problem when an Excel add-in (.xlam / .xla) registered as an add-in performs the same processing.

 

2. Settings related to automatic macro execution

The behavior required to reproduce the issue, where the Workbook_Open event handler runs as soon as the workbook opens, is controlled by the following settings.

1. Options - [Trust Center] - [Trust Center Settings] - [Macro Settings]

  • When [Enable all macros] is selected
  • When [Disable all macros except digitally signed macros] is selected and the macro is digitally signed with a trusted certificate
  • When [Disable all macros with notification] or [Disable all macros without notification] is selected and the document is trusted

2. Options - [Trust Center] - [Trust Center Settings] - [Trusted Documents]

When the macro security setting is [Disable all macros with notification], once you open a workbook and enable its macros, the workbook is registered as a trusted document and its macros are enabled automatically from then on.

3. Options - [Trust Center] - [Trust Center Settings] - [Trusted Locations]

Workbooks stored in folders registered here are trusted, and their macros run automatically.

4. Options - [Trust Center] - [Trust Center Settings] - [Trusted Publishers]

Workbooks whose macros are code-signed with a certificate shown here (a certificate registered in the user's [Trusted Publishers] certificate store) are trusted, and their macros run automatically.

 

3. Status

As of December 28, 2017, version 1712 has been released to Insider Fast and Insider Slow. It is also scheduled for release to the Monthly Channel in early January. We are currently investigating internally and considering a fix, and we will update this article as soon as there is progress.

 

4. Temporary workarounds

Until a fix is available, please consider working around the issue with one of the following methods.

4-1. Stop automatic updates of Office 2016 and stay on version 1711 or earlier
4-2. Make the security warning appear instead of running macros automatically
4-3. Display the UserForm modelessly
4-4. Display the UserForm with the Application.OnTime method

 

Details of each method follow.

 

4-1. Stop automatic updates of Office 2016 and stay on version 1711 or earlier

Stop automatic updates of Office 2016, and if you have already updated to version 1712, revert to an earlier version. These steps are the same as those introduced in the following article, so refer to the link for the detailed procedure.

Title: An error occurs when opening a file containing Japanese VBA module names in Office 2016 version 1708 or later
URL: https://blogs.msdn.microsoft.com/office_client_development_support_blog/2017/08/23/ver1708-issue-japanesenamevbamodule/
Relevant section: the "temporary workaround steps" under "3. Status"

 

The versions to specify when reverting are as follows:

Insider Fast: 16.0.8730.2122
Insider Slow: 16.0.8730.2127
Monthly Channel: 16.0.8730.2127 (the latest as of 12/28; the Monthly Channel is not affected yet, but if the issue starts occurring after today, you can revert to this version to work around it)

 

4-2. Make the security warning appear instead of running macros automatically

Configure Excel so that when the workbook is opened, the [Security Warning] bar is displayed first, as shown below, and macros run only after you click [Enable Content].

Figure 1. Macro security warning

To achieve this state, configure Excel so that the conditions for automatic execution described in "2. Settings related to automatic macro execution" are not met. Because many different configurations are possible depending on the user's environment and how macros are operated, it is hard to recommend a single set of settings, but in general the following settings will cause the security warning to be displayed for most workbooks:

  • Set Options - [Trust Center] - [Trust Center Settings] - [Macro Settings] to [Disable all macros with notification] (the default)
  • Select the [Disable Trusted Documents] check box under Options - [Trust Center] - [Trust Center Settings] - [Trusted Documents]

With these settings, the security warning is displayed for every workbook except those stored in a trusted location or signed with a trusted certificate.
By default, trusted locations include only folders that have a special meaning to Excel, such as the add-in folders, so user-created macro workbooks are rarely stored there. It is also not very common for macros to be signed. Therefore, with these settings, the security warning is displayed for most files in a typical configuration.

However, your organization's policy or the requirements of the systems you use may mean that custom folders have been registered as trusted locations or that macro security settings have been changed, so if you use this workaround, check your environment's configuration carefully.


 

4-3. Display the UserForm modelessly

Writing the code as follows displays the UserForm modelessly. If a modeless display is acceptable for your scenario, you can work around the issue by switching to it.

UserForm1.Show vbModeless


 

4-4. Display the UserForm with the Application.OnTime method

The Application.OnTime method lets you call a procedure in a standard module after a specified amount of time has elapsed.

Title: Application.OnTime Method (Excel)
URL: https://msdn.microsoft.com/ja-jp/library/office/ff196165(v=office.15).aspx

For example, the following code runs the Sample method one second later.

Application.OnTime Now + TimeValue("00:00:01"), "Sample"

 

Change the Workbook_Open event handler so that it only schedules a method with Application.OnTime, and move the series of operations currently performed in Workbook_Open, such as displaying the UserForm, into the called method. A call scheduled with Application.OnTime runs internally after execution has left the workbook-open sequence, so this avoids the issue.

<Implementation image>
Add the following to the ThisWorkbook object:

Private Sub Workbook_Open()
    Application.OnTime Now + TimeValue("00:00:01"), "Sample"
End Sub

Add the following to a standard module (note that the procedure cannot be called if it is placed in the ThisWorkbook object or similar):

Public Sub Sample()
    UserForm1.Show
End Sub

 

That's all for this post.

 

The information in this article (including attachments and links) is current as of the date it was written and is subject to change without notice.

If you use Office 365 but your MX record doesn’t point to Office, you may want to close down your security settings

Even though it's not a recommended configuration for our customers (in terms of spam filtering), some customers of Office 365 route their email through a competing spam filtering service in the cloud, or through an on-prem server. That is, the mail flow looks like this:


I've written previously about the problems this can cause, see Hooking up additional spam filters in front of or behind Office 365. However, if you must do it, you may want to ensure that you force all email to go through the 3rd party server. If you hook up a 3rd party server, email can be delivered to your organization in Office 365 through that server or by connecting directly to EOP (Exchange Online Protection). This is not good because our spam filters don't understand that email was sent directly to the service and not through a gateway; if your MX record does not point to Office 365 or EOP, some spam filtering checks are suppressed automatically to avoid false positives.

Therefore, to get the fullest protection possible, I recommend relying upon the 3rd party service, and then maybe or maybe not doing double-filtering in EOP (accepting the fact that there will be false positives and false negatives). But, don't just rely on EOP.

So, to force email through your on-prem server, you will need to install a TLS cert in your on-prem server, and then always ensure that it's used when connecting from the server to Office 365. Then, create a partner connector using a seldom-used attribute which isn't exposed via the mainstream UX but only through a cmdlet. It's called AssociatedAcceptedDomains.

To do this with TLS-cert-based connectors using cmdlets:

New-InboundConnector `
    -Name "OnlyAcceptEmailFrom<OnPremServer>" `
    -ConnectorType Partner `
    -SenderDomains * `
    -RestrictDomainsToCertificate $true `
    -TlsSenderCertificateName <Full set of TLS cert names> `
    -AssociatedAcceptedDomains <full list of accepted domains that belong to your organization>

For more information on TLS connectors and how to set up a partner connector, see Set up connectors for secure mail flow with a partner organization.

You may have to tweak this a bit to get it right, so you may want to experiment with some smaller domains before enabling it for every domain in your organization.

What this does is reject messages that don't come over the TLS cert; so long as your on-prem server is correctly configured, any email that tries to connect directly to Office 365/EOP should be rejected.

You would want to use this when you are connecting through an on-premise mail server and you can control the certificate. However, if you are connecting through a shared service and cannot specify the TLS-cert, then this would probably not be appropriate without some modifications to the connector.

Hopefully, you find this useful.

Do the malware writers know something about cryptocurrency that the rest of us don’t?

Disclaimer - If you haven't read my disclaimer yet, make sure you do so here. TL;DR version - Buyer beware, I am not an expert, I am fumbling my way through this like the rest of you.

Also, I hold a little bit of Bitcoin and Ethereum.


Way back in the fall of 2012, I attended the Virus Bulletin conference in Dallas, TX. While I was there, I remember either attending, or hearing about, a session entitled Malware taking a bit(coin) more than we bargained for.

The presentation was by a researcher at Microsoft, and they talked about how bitcoin was a new digital currency just starting to gain traction. In response, new malware families were arising that would either take over users' computers to mine bitcoin (this was back in the day when a single computer still had a reasonable chance of actually mining one), or try to steal users' bitcoins. I think that may have been my first introduction to Bitcoin, and I remember thinking at the time that it was interesting, but I wasn't sure whether or not it would catch on as a digital currency. If the malware creators succeeded in mining Bitcoin, they would have seen it go up in value by 100x.

Fast forward several years, to 2017, with the WannaCry malware outbreak. Malware hurts, and ransomware is even more painful as you're locked out of your system, but the incentive to pay the ransom is enticing if you can be certain that it will unlock your system; the drawback is that paying rewards bad behavior by the malware author.

Both cases are examples of malware creators looking toward alternative payment methods to make themselves less trackable.

But what's interesting is how malware writers have stayed with that principle but have switched out cryptocurrencies. Whereas before they were mining bitcoin, now they are mining Monero:

These are just a few snippets of articles I found, and you can see they span 15 months. So, while it's a newer thing, it's not totally brand new. But the point is: Hackers are diversifying into alt-coins (an alt-coin is anything that is not a bitcoin).

As I say in some of my other cryptocurrency articles, the value of a digital currency built on blockchain is how many users believe in it, build on top of it, and start using it. Hackers and malware authors were early adopters of Bitcoin, and they seem to be proven right (so far... barring a collapse of Bitcoin). Do they have any special insight into whether or not Monero will eventually be successful?

You can do your own research into what Monero is and how it differs from Bitcoin. My own quick summary is that it's a digital currency like Bitcoin, but it's not built on the Bitcoin code the way a lot of other cryptocurrencies are. And while Bitcoin is pseudonymous, all transactions are public. If you observe enough patterns, you can see that random ID #1 sending 0.5 BTC to random ID #2 is a transaction. You don't have the identities of everyone yet, but with enough observations you may be able to figure out some of the identities. Bitcoin leaves a trail that can be traced back to its original transaction participants (in some cases, depending upon how many resources the investigator wants to spend).

Monero is different because it is much more private. Instead of this:

A sends xx bitcoins to B

You get this:

? sends ? to ?

You can see that's more private and not trackable.

There are some legitimate use cases of hiding your financial transactions from all viewing eyes. Using regular cash is kind of like this. But on the other hand, one of those use cases is criminal activity; if you're exchanging illegal goods or services, you want that to be hidden from everyone. Thus, if Bitcoin had a reputation as being useful for underground transactions, Monero could market itself the same way. No doubt cyber criminals already do, as that's why they are mining Monero using other people's machines.

I sympathize with the solutions to problems that altcoins are trying to solve. But, by introducing stronger privacy, they also set themselves up as a magnet for criminal activity. The maintainers of the code may say that they are building a platform and are not responsible for its usage. I'm not so sure about that.

Just ask Facebook.
