Icons are an essential element of graphical user interfaces. They enable intuitive actions, save space (for example, in toolbars), and create better mnemonics for an app's functionality, among other benefits that go beyond the scope of this post. In summary, they are nice to have in our apps when used properly; too much of a good thing leads to bad results.
Methods for loading iconographic assets in apps have improved over the years, a great advantage for those who do not have graphic designers on their teams. App icons have evolved from the clip-art galleries of old all the way to icons stored inside font files. This latest method, besides being handy, provides the ability to render vectorized icons. Vectors adapt themselves to the different display resolutions where our app might run. Most XAML visual elements are vectors, and so should icons be.
Microsoft offers many fonts that ship with Windows containing illustration assets, from the traditional Wingdings fonts to the newer ones designed for modern apps: the Segoe family. In the latest version of Windows 10, we can find Segoe MDL2 Assets and Segoe UI Emoji. They come preinstalled with the system and can be consumed by any control that renders text simply by changing the control's FontFamily property. There is, however, also a series of controls created specifically to display icon assets.
Icon Classes
The UWP development platform provides a series of classes that extend the IconElement class, but only two of them support fonts as their source: FontIcon and SymbolIcon. These classes differ in how we indicate which icon to use. SymbolIcon is created exclusively to consume icons from Segoe MDL2 Assets, and its source can't be changed. This allows the class to expose a Symbol property that accepts an easy-to-memorize friendly name for the icon, as opposed to FontIcon, where we can assign a different FontFamily value and indicate the icon to use through its Glyph property, using the Unicode value that represents the glyph's location inside the font file.
The values we can assign to the Symbol property of a SymbolIcon do not cover all the valid glyphs that the font provides. For those cases, the recommended approach is to use the FontIcon class instead, where we can freely indicate any Unicode value the font contains. The two icon fonts that ship with Windows usually gain additional icons as new versions of Windows ship. For this reason, we need to be careful about assuming that a Unicode value will render a valid icon on the previous versions of Windows where our app might also be running. Check MSDN or use the Character Map accessory application that comes with Windows to verify a Unicode value and confirm it is valid in the versions of Windows the app targets.
When using a Unicode value identified in Character Map, we use one of two formats depending on where we make the assignment. In XAML we use "&#xE787;", where E787 is the hexadecimal value for the glyph selected in Character Map in image 1. In code-behind the format is "\uE787".
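For reference, here is a minimal code-behind sketch of both icon classes; it assumes the E787 glyph from the example above and uses Symbol.Accept purely as an illustration of the friendly-name approach.

using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

// FontIcon takes the raw Unicode value; "\uE787" is the code-behind form of "&#xE787;".
var fontIcon = new FontIcon
{
    FontFamily = new FontFamily("Segoe MDL2 Assets"),
    Glyph = "\uE787"
};

// SymbolIcon exposes a subset of the Segoe MDL2 Assets glyphs through friendly names.
var symbolIcon = new SymbolIcon(Symbol.Accept);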
Colorizing Icons
The icons found in the Segoe UI Emoji font come with color information for their glyphs. This color is shown if the platform or application rendering them supports multicolor layers coming from fonts; UWP apps and Windows 10 support it. In cases where there is no support for color, a monochromatic version of the emoji is displayed. Some text controls (TextBlock and TextBox) provide an IsColorFontEnabled property to indicate whether to use color or the monochromatic fallback; FontIcon does not have this option. The glyphs found in Segoe MDL2 Assets are only monochromatic. There is no color information coming from the font itself, but that does not prevent us from setting the control's Foreground property to some color and getting the font's glyphs colorized. If we look closely at all the glyphs provided by this font, we will notice that certain icons come in two versions: outline and filled. Having a FontIcon instance rendering the filled glyph with a color assigned to its Foreground property, and overlapping another FontIcon instance rendering the outline version of the icon, creates the effect of multiple layers with different colors.
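Here is a minimal code-behind sketch of that layering trick; the two glyph values are placeholders for a matching filled/outline pair that you would look up in Character Map.

using Windows.UI;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

// Two FontIcon instances stacked in the same Grid cell: the filled glyph supplies the
// colored "background" and the outline glyph is drawn on top of it.
var layeredIcon = new Grid();

layeredIcon.Children.Add(new FontIcon
{
    FontFamily = new FontFamily("Segoe MDL2 Assets"),
    Glyph = "\uE735",   // placeholder: filled version of the icon
    Foreground = new SolidColorBrush(Colors.Gold)
});

layeredIcon.Children.Add(new FontIcon
{
    FontFamily = new FontFamily("Segoe MDL2 Assets"),
    Glyph = "\uE734",   // placeholder: outline version of the icon
    Foreground = new SolidColorBrush(Colors.Black)
});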
Custom Fonts
As seen in the previous section, there are cases where the icon we are looking for is not provided by any of the system fonts, or where an icon does not have the corresponding outline/filled version we wish for. To solve this, we can look for other fonts available online, free or commercial, and embed them into our project. Some projects might even have the budget for professional font-design services, where we could have fonts created with icons from trademarked visual assets, like logos or other visuals that give our app a differentiating element.
With a font file (usually TTF) that contains our custom icons embedded into the application project, we can proceed to use it in the controls discussed earlier that support FontFamily changes. We add a font file to the project the same way we add image files. Verify that the file is added to the project with Build Action set to Content.
Once we have the file in the project, it is a matter of referencing it in the FontFamily property of the control using this format:
FontFamily="[PathToTTF]#[NameOfFontFamily]"
A font file can define more than one font family; for this reason, we need to indicate the family name after the file name. In code-behind you can set this property as well: FontFamily is a class whose constructor takes a string value indicating a Uri pointing to a resource. We can use this mechanism to refer to a custom font in our project this way:
string fontFilePath = "ms-appx:///Assets/Fonts/MyCustomFont.ttf#My Personal Icons";
fontIcon.FontFamily = new FontFamily(fontFilePath);
Notice that the Uri uses the application Uri scheme (ms-appx) to refer to resources in the project. This also makes it possible to have font files embedded in satellite assemblies (class libraries):
string assemblyName = GetType().GetTypeInfo().Assembly.GetName().Name;
string fontFamilyPath = $"ms-appx:///{assemblyName}/Fonts/MyCustomFont.ttf#My Personal Icons";
fontIcon.FontFamily = new FontFamily(fontFamilyPath);
Final thoughts
Before closing this post, let's talk about the user licenses that govern the use of fonts. The contents of fonts, whether letters or symbols, are bound to a license that indicates whether or not we may redistribute the file with our application. It also indicates other permissions or restrictions regarding modification of the glyphs it contains.
A graphic designer or a company might have trademarked those designs, in which case explicit permission should exist. In the case of the Segoe fonts, or other fonts that are part of Windows 10, referring to their glyphs in our app without embedding the file in our project is part of the expected usage allowed by their license. We are not distributing the font; when the application is launched on a customer's Windows 10 machine, it loads the font information from that machine.
The same goes for any other font from the internet: we need to check what permissions or restrictions we must follow to make use of it.
In summary, we visited the icon controls that the platform offers, how to consume the system-provided icons in those controls, and a mechanism to include custom-made fonts embedded in our project or a class library.
Sub onShowMethod(contextObject As Object)
MsgBox "BackStage メニューに遷移しました"
End Sub
Sub onHideMethod(contextObject As Object)
MsgBox "BackStage メニューを閉じてブックに遷移しました"
End Sub
Bob Ward and I worked with our SQL Server Tool developers (thanks David) to enable ‘Quick XE Trace’ capabilities. The feature is available in the latest SQL Server Management Studio (SSMS) release.
Despite the deprecation of SQL Profiler several years ago, and despite various documents and blogs pointing out the older trace facility's shortcomings and its performance impact on SQL Server, SQL Profiler is still a top choice of SQL Server developers and DBAs. The 'quick' ability kept surfacing as a reason for using SQL Profiler, where 'quick' was defined as getting live data on the DBA's or developer's screen with just a few clicks.
The new tree node (XE Profiler) provides that ‘quick’ ability. The ‘Quick XE Profiler’ displays live events using the simple ‘Launch Session’ menu selection. The templates capture common events and leverage XEvent enhanced view capabilities to display the event data.
Here is an example using SQL Server Management Studio (SSMS) 2017 against my SQL Server 2016 instance.
Learning anything new always takes time and patience. Whether you’re new to Azure or already a cloud professional, training is one of the best investments you can make in your career. Enrich your technical skills with one of our hands-on training courses listed below!
Azure Fundamentals
Type: Technical (L200)
Audience: IT Professional
Cost: $299
Product: Microsoft Azure
Date & Locations: Brisbane (February 8-9); Sydney (February 8-9); Perth (February 22-23); Melbourne (February 28-March 1); Canberra (March 12-13)
This course introduces key concepts of cloud computing and how Microsoft Azure aligns with those scenarios. Students are introduced to several key Azure services and solutions that align with the following technical disciplines: Infrastructure as a Service, Hybrid Cloud, Application Development, Big Data and Analytics, and Cloud Security. REGISTER HERE
Architecting Azure IAAS and Hybrid Solutions
Type: Technical (L300)
Audience: IT Professional / Architects
Cost: $699
Product: Microsoft Azure
Date & Locations:Melbourne (February 5-7); Sydney (March 14-16)
The Azure IaaS and Hybrid Architect workshop is designed to prepare the architect to design solutions with Microsoft Azure. This workshop is focused on designing solutions using Infrastructure as a Service (IaaS) and other technologies to enable hybrid solutions such as data centre connectivity, hybrid applications, and other hybrid use cases such as business continuity with backup and high availability. Individual case studies will focus on specific real-world problems that represent common IaaS and Hybrid scenarios and practices. Students will also experience several hands-on labs to introduce them to some of the key services available. REGISTER HERE
Implementing Microsoft Azure Infrastructure
Type: Technical (L300)
Audience: IT Professional / Developers
Cost: $899
Product: Microsoft Azure
Date & Locations: Melbourne (February 12-16)
This training explores Microsoft Azure Infrastructure Services (IaaS) and several PaaS technologies such as Azure Web Apps and Cloud Services from the perspective of an IT Professional. This training provides an in-depth examination of Microsoft Azure Infrastructure Services (IaaS); covering Virtual Machines and Virtual Networks starting from introductory concepts through advanced capabilities of the platform. The student will learn best practices for configuring virtual machines for performance, durability, and availability using features built into the platform. Throughout the course the student will be introduced to tasks that can be accomplished through the Microsoft Azure Management Portal and with PowerShell automation to help build a core competency around critical automation skills. REGISTER HERE
Developing Microsoft Azure Solutions with Azure .NET
Type: Technical (L300)
Audience: IT Professional / Developers
Cost: $799
Product: Microsoft Azure
Date & Locations: Perth (April 16 -19): Sydney (April 30 – May 03); Melbourne (June 4-7)
This course is intended for students who have experience building ASP.NET and C# applications. Students will also have experience with the Microsoft Azure platform and a basic understanding of the services offered. This course offers students the opportunity to take an existing ASP.NET MVC application and expand its functionality as part of moving it to Azure. This course focuses on the considerations necessary when building a highly available solution in the cloud. REGISTER HERE
Introduction to Containers on Azure
Type: Technical (L200)
Audience: IT Professional
Cost: $599
Product: Microsoft Azure
Date & Locations:Sydney (March 12-13)
This course demonstrates different approaches for building container-based applications and deploying them to Azure. Different modules cover Windows- and Linux-based Docker containers with popular container orchestrators like Kubernetes and DC/OS provisioned by the Azure Container Service. The course also shows how to integrate container registries, specifically Docker Hub and the Azure Container Registry, into DevOps workflows. It starts with the basics of building a Linux and a Windows container running a .NET Core application, and concludes by showing how to customize the ACS templates with the acs-engine to deploy advanced cluster configurations. REGISTER HERE
Next Up Exam Camp 70-532: Developing Microsoft Azure Solutions
Type: Technical (L300)
Audience: IT Professionals looking to earn formal qualifications
Cost: $399
Product: Microsoft Azure
Date & Locations: Online Self Study February 12 – March 12 / In Person Exam Dates; Melbourne (March 20); Adelaide (Adelaide 20); Perth (March 21); Brisbane (March 23): Sydney (March 26)
Earning any kind of specialist certification is a great way to stand out from the crowd, whether you’re looking for a new challenge, a new job, or a way to make yourself more valuable to your current employer. With the growing importance of the cloud, Microsoft Azure is a must-have certification for anyone looking to prove their skills. REGISTER HERE
Next Up Exam 70-533 Implementing Microsoft Azure Infrastructure Solutions
Type: Technical (L300)
Audience: IT Professionals looking to earn formal qualifications
Cost: $399
Product: Microsoft Azure
Date & Locations: Online Self Study February 12 – March 12 / In Person Exam Dates; Melbourne (March 20); Adelaide (Adelaide 20); Perth (March 21); Brisbane (March 23): Sydney (March 26)
Earning any kind of specialist certification is a great way to stand out from the crowd, whether you’re looking for a new challenge, a new job, or a way to make yourself more valuable to your current employer. With the growing importance of the cloud, Microsoft Azure is a must-have certification for anyone looking to prove their skills. REGISTER HERE
Next Up Exam 70-535 Architecting Microsoft Azure Solutions
Type: Technical (L300)
Audience: IT Professionals looking to earn formal qualifications
Cost: $399
Product: Microsoft Azure
Date & Locations: Online Self Study February 12 – March 12 / In Person Exam Dates; Melbourne (March 20); Adelaide (Adelaide 20); Perth (March 21); Brisbane (March 23): Sydney (March 26)
Earning any kind of specialist certification is a great way to stand out from the crowd, whether you’re looking for a new challenge, a new job, or a way to make yourself more valuable to your current employer. With the growing importance of the cloud, Microsoft Azure is a must-have certification for anyone looking to prove their skills. REGISTER HERE
I have written numerous articles about ASP.NET and creating memory dumps, but noticed I had not written one specifically about capturing an ASP.NET Core memory dump on an Azure App Service. Here are some of my related articles on this matter.
Figure 1, create an ASP.NET Core 2.0 application, simple
Inside the Index.cshtml.cs file I added the infamous Sleep() method to make sure performance is not very good. And indeed it is slow, 5 seconds exactly.
public class IndexModel : PageModel
{
public void OnGet()
{
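// Simulate a slow request: every GET to this page blocks its thread for 5 seconds.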
System.Threading.Thread.Sleep(5000);
}
}
Then I published the project to an Azure App Service from Visual Studio 2017 (Figure 2) by right-clicking the project –> Publish and following the wizard, where I selected the subscription, resource group, and App Service plan; I show a figure of that relationship here.
Figure 3, troubleshoot an ASP.NET Core 2.0 application, simple, on Azure
I reproduced the issue, right-clicked the DOTNET.EXE –> Download Memory Dump –> Full Dump, Figure 4. Note that the issue must be happening at the time the dump is taken in order for the issue to be seen in the dump. A dump is just a snapshot of what is happening at the time it is taken.
Figure 4, troubleshoot / memory dump an ASP.NET Core 2.0 application, simple, on Azure
I tried to have 5 requests running at the time I took the memory dump; let's see how it looks.
If you have not already seen my article “Must use, must know WinDbg commands, my most used”, then check it out here. As seen in Figure 5, running !mex.us grouped thread 15 and 16 together as they had the same stack patterns. I found 1 other thread that was running my request, but the stack was a little different so it didn’t make that group.
Figure 5, troubleshoot / analyze a memory dump of an ASP.NET Core 2.0 application, simple, on Azure
As always, it is easy to find the problem when you coded it on purpose, but the point is this: if you see a lot of threads doing the same thing in the process, and there was high CPU or high latency when you took the dump, it is highly probable that the method at the top of the stack is the one that needs to be looked into further.
An added tip: to see the value of the Int32 that is passed to the System.Threading.Thread.Sleep() method, since that is managed code, you can decompile the module and look at the code; but if you don't want to do that, you can execute kp, as seen in Figure 6.
Figure 6, troubleshoot / analyze a memory dump of an ASP.NET Core 2.0 application, simple, on Azure
Had it been a heap variable, you could use !sos.dso and you'd see it stored on the heap; however, we all know that integers are not stored on the heap, right(*)?
I have a series of posts on DevOps for Data Science where I am covering a set of concepts for a DevOps "Maturity Model" - a list of things you can do, in order, that will set you on the path to implementing DevOps in Data Science. In this article, I'll cover the next maturity level you should focus on - Automated Testing.
This might possibly be the most difficult part of implementing DevOps for a Data Science project. Keep in mind that DevOps isn't a team, or a set of tools - it's a mindset of "shifting left", of thinking of the steps that come after what you are working on, and even before it. That means you think about the end-result, and all of the steps that get to the end-result, while you are creating the first design. And key to all of that is the ability to test the solution, as automatically as possible.
There are a lot of types of software testing, from Unit Testing (checking to make sure individual code works), Branch Testing (making sure the code works with all the other software you've changed in your area) to integration testing (making sure your code works with everyone else's) and Security Testing (making sure your code doesn't allow bad security things to happen). In this article, I'll focus on only two types of testing to keep it simple: Unit Testing and Integration Testing.
For most software, this is something that is easy to think about (but not necessarily to implement). If a certain function in the code takes in two numbers and averages them, that can be Unit tested with a function that ensures the result is accurate. You can then check your changes in, and Integration tests can run against the new complete software build with a fabricated set of results to ensure that everything works as expected.
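To make that concrete, here is a minimal sketch of such a unit test in C# with xUnit; the Average method and the test values are hypothetical stand-ins for the "function that takes in two numbers and averages them".

using Xunit;

public static class MathHelpers
{
    // Hypothetical function under test: averages two numbers.
    public static double Average(double a, double b) => (a + b) / 2.0;
}

public class MathHelpersTests
{
    [Fact]
    public void Average_Of_Two_Numbers_Is_Accurate()
    {
        // Deterministic inputs give a deterministic expected value,
        // so this check can run unattended in the build pipeline.
        Assert.Equal(15.0, MathHelpers.Average(10.0, 20.0), precision: 5);
    }
}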
But not so with Data Science work - or at least not all the time. There are a lot of situations where the answer is highly dependent on minute changes in the data, parameters, or other transitory conditions, and since many of these results fall within ranges (even between runs) you can't always put in a 10 and expect a 42 to come out. In Data Science you're doing predictive work, which by definition is a guess.
So is it possible to perform software tests against a Data Science solution? Absolutely! Not only can you test your algorithms and parameters, you should. Here's how:
First, make sure you know how to code with error-checking and handling routines in your chosen language. You should also know how to work with the standard debugging tools in whatever Integrated Development Environment (IDE) you use. Next, implement a unit test framework within your code. Data Scientists most often use Python and/or R in their work, as well as SQL, and unit testing frameworks exist for all of these.
After you've done the basics above, it's time to start thinking about the larger testing framework. It's not just that the code runs and integrates correctly, it's that it returns an expected result. In some cases, you can set a deterministic value to test with, and check that value against the run. In that case, you can fully automate the testing within the solution's larger Automated Testing framework, whatever that is in your organization. But odds are (see what I did there) you can't - the values can't be deterministic due to the nature of the algorithm.
In that case, pick the metric you use for the algorithm (p-value, F1-score, or AUC, or whatever is appropriate for the algorithm or family you're using) and store it in text or PNG output. From there, you'll need a "manual step" in the testing regimen of your organization's testing framework. This means that as the software is running through all of the tests of everyone else's software as it creates a new build, it stops and sends a message to someone that a manual test has been requested.
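A tiny sketch of what "store it in text output" might look like inside a test harness is below; the EvaluateModel stand-in, the file name, and the 0.5 floor are all hypothetical, not a prescribed implementation.

using System;
using System.IO;

// Hypothetical stand-in for the real evaluation; in practice this would run the scoring code.
static double EvaluateModel() => 0.87;

double auc = EvaluateModel();

// Persist the metric as text so the on-call reviewer can inspect it during the manual step.
File.AppendAllText("model-metrics.txt", $"{DateTime.UtcNow:o}\tAUC={auc:F4}{Environment.NewLine}");

// Optional guard rail: fail automatically only when the run is clearly broken,
// and leave the judgment call on borderline results to the human reviewer.
if (auc < 0.5)
    throw new InvalidOperationException($"AUC {auc:F4} is below random; aborting this test run.");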
No one likes these stops - they slow everything down and form a bottleneck. But in this case they are unavoidable, the alternative being that you just don't test that part of the software, which is unacceptable. So the way to make this as painless as possible is to appoint one of the Data Science team members as the "tester on call", who watches the notification system (notifications should go to the whole Data Science team alias, not an individual), manually checks the results quickly (but thoroughly), and allows the test run to complete. You can often do this in just a few minutes, so after a while it becomes part of the testing routine, giving you a "mostly" automated testing system - essential for Continuous Integration and Continuous Delivery (CI/CD). We'll pick up Continuous Delivery in the next article.
In the second post in his series on Auto-scaling a Service Fabric cluster, Premier Developer consultant Larry Wall highlights a new feature that allows you to tie auto-scaling to an Application Insights metric.
In Part I of this article, I demonstrated how to set up auto-scaling on the Service Fabric cluster's scale set based on a metric that is part of a VM scale set (Percentage CPU). This setting doesn't have much to do with the applications running in your cluster; it's pure hardware scaling that may take place because of your services' CPU consumption or something else consuming CPU.
A recent addition to the auto-scaling capability of a Service Fabric cluster allows you to use an Application Insights metric, reported by your service, to control the cluster scaling. This gives you finer control, not just over auto-scaling itself, but over which metric in which service provides the values that drive it.
This is my second blog on FTP, the first one being this.
This blog specifically explains how to address the error below, which many people have encountered while setting up ASP.NET SQL membership authentication with an FTP site:
Response: 220 Microsoft FTP Service
Command: USER test
Response: 331 Password required
Command: PASS *********
Response: 530-User cannot log in.
Response: Win32 error:
Response: Error details: System.Web: Default Membership Provider could not be found.
Response: 530 End
Error: Critical error: Could not connect to server
You can follow this blog to configure the authentication with SQL membership for the FTP site on IIS.
After that, if you still run into the above issue, ensure that the following steps are followed:
Step 1:
Have the setting below within the web.config, inside the <configuration> tag (depending on the framework version and the bitness your application pool is using).
Example: If your AppPool is 64-bit running under .NET 4.0, you should be using the location C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\web.config
If you are making use of an AppPool on .NET version 2.0 with 64-bit, then you need to modify the web.config in the location C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG. Also ensure that you have modified the highlighted section above to 2.0.0.0.
Step 2:
Grant the Network Service account Write/Modify permissions to the C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files folder. Note: depending on your AppPool framework and bitness, the above path can differ.
Step 3:
Remember, you don't need any settings at the FTP website level for .NET Roles, .NET Users, etc., because the root web.config setting applies to all applications that run on that version of the framework and bitness.
Hope this helps!
Recently, I worked with a customer who wanted assistance in implementing logging and capturing PerfView traces for an ASP.NET Core web application.
So I decided to blog on this topic and explain how to enable logging and how to capture and analyze a PerfView trace.
I am using a simple ASP.NET Core MVC application here to explain this.
I have the below lines in my Program.cs file:
static void Main(string[] args)
{
BuildWebHost(args).Run();
}
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .CaptureStartupErrors(true)   // this line helps surface startup-related issues
        .UseStartup<Startup>()
        .Build();
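For completeness, here is a minimal sketch of the kind of controller logging that produces the FormattedMessage event shown in the PerfView analysis below; the controller name, event id, and message text are taken from that output, while the exact wiring in the real app is an assumption.

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace Demo.Controllers
{
    public class HomeController : Controller
    {
        private readonly ILogger<HomeController> _logger;

        public HomeController(ILogger<HomeController> logger)
        {
            _logger = logger;
        }

        public IActionResult Index()
        {
            // Written through Microsoft.Extensions.Logging, which is what the
            // *Microsoft-Extensions-Logging provider in PerfView picks up.
            _logger.LogInformation(1000, "In Index method..............");
            return View();
        }
    }
}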
Go to Collect and select Collect option as shown below:
Expand the Advanced Options.
Enable the Thread Time and IIS providers. (This will give us some additional information while troubleshooting.) Note: the IIS provider is needed only if the app is hosted on IIS; otherwise you can skip it. This provider can be used only if the IIS Tracing feature is installed.
Place *Microsoft-Extensions-Logging in the Additional Providers section as shown below:
PerfView Analysis:
After opening the PerfView ETL trace, we can see our logging information being recorded:
Event Name: Microsoft-Extensions-Logging/FormattedMessage
Time MSec: 39,706.201
Process Name: dotnet (25292)
Rest: ThreadID="285,016" Level="2" FactoryID="1" LoggerName="Demo.Controllers.HomeController" EventId="1000" FormattedMessage="In Index method.............." ActivityID="//1/3/1/"
On December 14th, 2017 we started having a series of incidents with Visual Studio Team Services (VSTS) that had a serious impact on the availability of our service for many customers (incident blogs #1, #2, #3). We apologize for the disruption. Below we describe the cause and the actions we are taking to address the issues.
Customer Impact
This incident caused intermittent failures across multiple instances of the VSTS service within the US and Brazil. During this time we experienced failures within our application that caused IIS to restart, resulting in customer impact across various VSTS scenarios.
The incident started on 14 December. The graph below shows periods of customer impact for the Central US (CUS) and South Brazil (SBR) scale units.
What Happened
For context, VSTS uses DNS to route to the correct scale unit. On account signup, VSTS queues a job to create a DNS record for {account}.visualstudio.com pointing to the right scale unit. Because there is a delay between the DNS entry being added and being used, we use Application Request Routing (ARR) to re-route requests to the right scale unit until the DNS update is visible to clients. Additionally, VSTS uses web sockets to provide real-time updates in the browser for pull requests and builds via SignalR.
The application pools (w3wp process) on the Brazil and Central US VSTS scale units began crashing intermittently on December 14th. IIS would restart the application pools on failure, but all existing connections would be terminated. Analysis of the dumps revealed that a certain call pattern would trigger the crash.
The request characteristics common to each crash were the following:
The request was a web socket request.
The request was proxied using ARR.
The issue took a while to track down because we suspected recent changes to the code that uses SignalR. However, the root cause was that on December 14th we had released a fix to an unrelated issue, and that fix added code that used the ASP.NET PreSendRequestHeaders event. Using this event in combination with web sockets and ARR caused an AccessViolationException, terminating the process. We spoke with the ASP.NET team and they informed us that the PreSendRequestHeaders event is unreliable and we should replace it with HttpResponse.AddOnSendingHeaders instead. We have released a fix with that change.
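For readers using the same event, here is a minimal sketch of that kind of replacement in a classic ASP.NET HTTP module; the module name and the header it stamps are hypothetical, not the actual VSTS code.

using System.Web;

public class HeaderStampModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Unreliable in combination with web sockets and ARR:
        // app.PreSendRequestHeaders += (s, e) => app.Response.Headers["X-Example"] = "1";

        // Safer pattern: register a callback that runs just before the headers are sent.
        app.BeginRequest += (s, e) =>
            app.Context.Response.AddOnSendingHeaders(ctx =>
                ctx.Response.Headers["X-Example"] = "1");
    }

    public void Dispose() { }
}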
While debugging the issue, we mitigated customer impact by redirecting ARR traffic once we realized that was the key cause.
Workaround – stopped ARR traffic from going to SBR.
Workaround – Redirected *.visualstudio.com wildcard from CUS1 to pre-flight (an internal instance).
Next Steps
In order to prevent this issue in the future, we are taking the following actions.
We have added monitoring and alerting specifically for w3wp crashes.
We are working with the ASP.NET team to document or deprecate the PreSendRequestHeaders method. This page has been updated, and we are working to get the others updated.
We are adding more detailed markers to our telemetry to make it easier to identify which build a given scale unit is on at any point in time to help correlate errors with the builds that introduced them.
Sincerely,
Buck Hodges
Director of Engineering, VSTS
Next, open another terminal window to run perfcollect and start CPU sampling by executing the following:
sudo ./perfcollect collect hicputrace
Sampling begins as soon as the command runs.
Then run the application in the application terminal window. After some time has passed, press Ctrl+C in the window where perfcollect was started to stop it; a hicputrace.trace.zip file is created in the current directory. This trace file can be viewed in a Windows environment using the PerfView tool (http://aka.ms/perfview).
Unfortunately, the PerfCollect tool does not currently provide memory profiling. If you need that kind of information, you can check .NET managed memory usage through a core dump, as follows.
There are several ways to collect a core dump, but considering the size of the dump, the approach below is worth using. Before running the application, execute the following command so that a dump of sufficient size can be generated.
ulimit -c unlimited
Then run the application, and at the point where enough memory has leaked, open a new terminal window and stop the application as shown below; a dump is produced at that moment.
sudo kill -4 <pid>
The dump can be found in the application's working directory or in the /var/crash folder.
A more general way to collect a core dump is to use gdb, as follows.
sudo apt-get install gdb
sudo gdb
Then obtain the pid using ps -efH and attach gdb:
attach <pid>
Once gdb is attached, you can obtain a core file at the appropriate moment with the generate-core-file command.
generate-core-file <core file path>
The dump can then be opened with the lldb debugger for analysis. Because no .NET Core plugin exists for gdb, the lldb debugger must be used.
The MethodTable for System.Byte[] is 00007f8138ba1210, and we can see that there are 10,565 objects of the same type. The "dumpheap -mt" command lists the objects of type System.Byte[].
dumpheap -mt 00007f8138ba1210
Running the command above shows the information listed as <MT> <Address> <Size>. The "dumpobj" command takes an address value from the "dumpheap -mt" output as a parameter and prints the details of an individual object.
The ArrayList inside the memleakdemo.Program.ManagedLeak method is referencing the System.Byte[] objects. So, by reviewing how large the ArrayList grows and when the Byte[] instances are released, we should be able to isolate whether there is a memory problem.
// memleakdemo.Program.ManagedLeak (System.Object)
private static void ManagedLeak(object s)
{
State state = (State)s;
ArrayList list = new ArrayList();
for (int i = 0; i < state._iterations; i++)
{
if (i % 100 == 0)
Console.WriteLine(string.Format("Allocated: {0}", state._size));
System.Threading.Thread.Sleep(10);
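// Each new byte[] stays rooted by the ArrayList for the lifetime of the method,
// which is what shows up as the growing System.Byte[] count in dumpheap.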
list.Add(new byte[state._size]);
}
}
Doing so ultimately comes down to how many objects you are willing to inspect; it would be better to have a memory profiler that lets you look at this kind of thing more intuitively.
Recomazing is Australia's largest shared knowledge bank of recommended tools and services for startup growth. Members of the Recomazing community get access to weekly recommendations on solving common startup problems from the likes of Canva, Atlassian, Airtasker, zipMoney, DropBox, Hubspot, Slack and more. In this article, we've included the Recomazing profile links for Marc's recommended tools and resources. Feel free to visit the profiles to see tips and insights from other leading entrepreneurs.
When I first quit my job to start Recomazing I had absolutely no idea about the world of capital raising. I found the entire process incredibly overwhelming. Around every corner lingered a new term I had never heard of before... 'Term Sheets', 'Pre-Money Value', 'Equity Splits', 'Series A'... it was like learning a new language.
To make matters worse, I found the info online to be conflicting, and rarely would I find anything from the entrepreneur's point of view. As a solo founder, I was already working 20 hours a day getting my business off the ground, so the thought of trying to piece together hundreds of separate articles to paint a full picture was utterly exhausting.
Two and a half years on, I'm happy to say Recomazing has completed two successful (read: gruelling) rounds of funding and I've basically made every mistake you could possibly make. I've pitched 100+ times, received helpful advice from leading investors in the industry, spent countless hours researching online and attended numerous seminars.
This guide is an attempt to condense all my learnings in one place so any wide-eyed founder reading this will start from a better point than I did. To be as helpful as possible, I will share my recommendations for all the tools and resources that helped me navigate through two successful rounds of funding.
I've organised my reco lists into the 3 most important segments of my journey:
●The Approach
●The Pitch
●The Money
♦The Approach
The first rule of startups is NEVER RAISE CAPITAL! Instead, go back, interrogate your business model and try to work out a way to create a case where you don't need to raise capital.
You should strive to get to a position where if investment is needed it is only to SCALE your established business, not CREATE your business (just ask the founders of Atlassian, Australia's most successful startup, who never needed a dollar of investor money until they wanted to scale).
Not every startup needs to raise capital, nor should they. Take it from someone who is stupid enough to have gone through this process multiple times: there is no greater distraction to your business than raising capital.
However, if you do need to go down the path of raising capital, here are some tips.
How do I get an investor to invest in my startup?
Think of it like it is your own money. If someone comes up to you and says "hey, I’ve got a good idea but I need 100k from you" you're going to have some pretty serious concerns about the risk involved. What would it take to convince you to invest? No doubt, it would require a hell of a lot more than just an idea on a swish looking presentation.
Here are my top recos for connecting with investors and making a great first impression (so you’ll get a chance to make a second).
Recomazing Score Card
Investors will look to minimise their risk. The less risk, the more likely they will invest. I’ve created a graphic outlining some of the most common 'scorecard' factors discussed in my pitch meetings.
A warm referral beats a cold intro every day of the week! The investor will be more open to hearing your pitch if you come with a warm intro.
Always write a suggested intro email for whoever is referring you. It makes everyone's lives easier and no-one should be able to sell it better than you. Pitching Hacks is a great resource to check out if you want to dig into the detail of how to structure your emails. I found out about Pitching Hacks by checking out the Recomazing profile of Will Davies, founder of Car Next Door. Will has raised millions so I jump on any reco he has to offer.
Before you start identifying which investors are right for you, you'll need a place to store all the info. Put down that excel sheet and download Streak. Streak is a FREE CRM tool that lives right within your Gmail account. The templates are perfect for keeping track of your investor details and comms.
LinkedIn is the ultimate tool for professional stalking. Find someone that knows your lead and get an intro. If you don’t know them stalk them. Look at what events they are going to, go to the next one. Comment on their posts and build your online reputation. Make sure you get your profile looking schmick before you outreach.
A lot of the top investors regularly speak at meetups and seminars, so it really isn't hard to introduce yourself. Download the Meetup app and type in 'startups' - there's a stack of events to attend. Remember, an investor's job is to find startups worth investing in, so don't feel bad about sharing your elevator pitch.
P.S. I met my first investor by bailing him up after he spoke on an angel investment panel at a Meetup seminar. He has been our 'fairy godfather' ever since (Hi David!)... so get out there to some meetups and hustle.
♦The Pitch
What should I say when first reaching out?
I always found it helpful to have 3 key items sorted before reaching out to investors:
1. Your one sentence pitch.
2. Your elevator pitch.
3. Your pitch presentation.
Your one sentence pitch
This is your ‘sound bite’ that should immediately convey the essence of your startup and commonly (but not necessarily) references a well known business model (eg Recomazing is the 'Trip Advisor' for the best tools & services to run a business.)
I actually hate using this sound bite as I view Recomazing as so much more than a review site with a real sense of community, collaboration and curation...but that's not the point of the soundbite.
I came to realise that the point of the one sentence sound bite is to give your investor an immediate grasp of your model (and the strengths/weaknesses you may be incurring). It's an entry point to then talk about the rest.
The truth is a lot of startups just apply a well known model to a different industry. It's the reason you hear so many startups refer to themselves as the "Uber for XYZ". It keeps things simple.
Your elevator pitch
This needs to be a bit longer than your sound bite and provides a bit more insight into your secret sauce eg.
Recomazing helps entrepreneurs discover the best tools and services to grow their business.
We believe in the 3 C's:
Community: Our members help each other by contributing to the shared knowledge bank of trusted recommendations.
Collaboration: We partner with leading business communities to foster greater collaboration between members (eg coworking spaces, accelerators, online networks etc).
Curation: We source expertise from leading tech entrepreneurs who want to 'give back' and help our members grow (eg Atlassian, Dropbox, AirTasker etc).
Your pitch deck
Typically, investors will request your pitch be 10-12 slides with time for Q&A at the end. I always found there were similar questions at the end so I prepared those slides in advance and put them in the appendix in case they were asked.
Here's a little infographic to demonstrate the basics on what to include. After your first pitch you should get a sense of how to adapt it and where you need to potentially add more detail but this is a great base to start from.
If you leave a pitch feeling like the investors just aren't understanding how awesome your idea is, then try to avoid shifting blame to them; instead, ask yourself why they aren't getting it:
●Are your slides clear enough?
●Are you not encapsulating the vision well enough?
●Are you getting stuck in the detail instead of focusing on the core premise?
Startup Pitch Decks gives you a view of how some of the world's top startups initially pitched their businesses (including Airbnb, Intercom, Buffer etc).
Pitchbot is a great tool I wish I had before starting my raising process. It's a bot that asks you common questions from pitch meetings - give it a crack!
♦The Money
It's exciting when you get to this stage, but, for a first-time founder, this can be the most confusing and harrowing bit of the entire process. Here are some fantastic tools, resources and partners that helped me demystify the investment process and work my way through term sheets and agreements more efficiently. AND, this guide would not be complete without including recos for my investors in each phase. I wouldn't be writing this without them.
You can view the open source seed financial docs released by Avcal here to get an idea of the terms you can expect to see in a term sheet. A number of major VC firms have pledged to use these templates.
Our friends and valued partners at Muru-D just implemented SAFE (Simple Agreement for Future Equity) docs after they were created by the world's leading accelerator, Y Combinator. It's a great move for our ecosystem. To see how the terms differ you can view the details of a SAFE doc here.
My experience with Monash Private Capital for our seed round was great. A lot of investors say they ‘invest in people’ above all else but personally I found that revenue often gets a higher priority over people (which is understandable). I found Monash truly believe in investing in people (although those people need to have a very good commercialisation strategy in place).
Our Innovation Fund is the VC fund that led our second round. I speak to a lot of founders who aren’t happy to recommend their investment team but I can gladly say that’s not the case with us. We love working with the OIF team, they’ve always helped in any way they can.
Get insights from the Y Combinator Blog, the #1 accelerator in the world. This blog is full of startup wisdom. I wish I discovered it earlier in my startup journey.
And there you have it, everything I wish I knew when I started out. I hope this has helped demystify the space and give you some immediate shortcuts on your own cap raising journey!
We hope you found this content helpful. If you’ve had a good experience with BizSpark, we’d love for you to share your 'reco' with the rest of the startup community on our profile.
Wherever you are in your business journey, from budding start-up to sprawling enterprise, chances are you’d never turn your back on a promising marketing tip. That’s why our Smart Partner Marketing site is a can’t-miss. Get started by using our assessment tool to see where your business stands, then dive in to a curated collection of marketing resources that align with your goals.
The Smart Partner Marketing site has marketing recommendations for companies of all sizes—whether you’re looking to build a foundation, amplify your presence, or strengthen customer relationships. Microsoft is here to help you separate your business from the competition, attract and retain the right customers, and get on the path to sustained growth. Learn more about Smart Partner Marketing today.
We often receive questions about migrating web systems currently used internally to Azure Web Apps, and in most cases the motivation comes down to the following reasons.
Reducing maintenance and operation costs by using PaaS
Supporting mobile devices to improve work styles
In general, internal systems are used during business hours on working days, so resource utilization tends to be low relative to the total hours of operation. For this reason, Web Apps, which is cost-efficient and makes scaling out and in easy, is an appropriate solution. Also, because a Web App is fundamentally a website exposed to the public internet, the infrastructure side is already in place; all that remains is for the application and its content to support users' mobile devices.
In most cases, the users of an "internal system" belong to the same organization, and their user IDs are managed and operated in some kind of identity management system. For organizations that have traditionally used the Windows platform, this is AD (Active Directory). For a web server intended to be used from the intranet, delegating user authentication to AD provides all the authentication an "internal system" needs.
However, most organizations and companies cannot use the same approach over the internet. In such cases, Azure Active Directory is available as an identity management and authentication service that works on the internet. Azure AD can be used on its own, but it can also synchronize identity information with an on-premises AD or federate authentication with it. By delegating authentication to an Azure AD linked with the on-premises AD, you can build an internet-facing web system that only employees can use.
In the former case (right side of the figure), the application itself implements authentication by using an authentication protocol supported by Azure AD. This approach is independent of the platform the application runs on, but it requires preparing, configuring, and coding against authentication middleware or an SDK for each application development language.
In the latter case (left side of the figure), authentication can be enabled purely through configuration of the Web App and Azure AD, that is, without writing any code, which is very easy. However, authentication then only takes effect while the application runs on Web Apps, so some workarounds are needed during development and testing. The rest of this article introduces the latter, easier-to-configure approach.
Behavior when a Web App authenticates with Azure AD
The behavior when a Web App uses Azure AD authentication is as follows. It is easy to understand if you think of it as the familiar cookie-based forms authentication, except that the sign-in page is an external site (technically it is different).
First, open the Web App you created earlier and select "Authentication / Authorization". Turn App Service Authentication "On" and specify that unauthenticated requests must "Log in with Azure Active Directory". Select "Azure Active Directory" as the authentication provider, and enter the values from the application registration you performed earlier as the configuration.
By specifying the "Directory ID", which identifies the Azure AD tenant, and the "Application ID", which identifies the application registered in it, the Web App can uniquely determine where to delegate authentication. Based on this information, unauthenticated users are turned away and redirected, effectively being told "before you can access this, bring back a token proving that you have been granted access to this app in the specified directory". Strictly speaking this is "authorization" rather than user authentication, but this article uses the more familiar term, authentication.
[Step 4] Verify that user authentication is working
With the configuration so far in place, accessing the Web App you created confirms that unauthenticated requests are automatically redirected to the Azure AD sign-in page. If you use the same web browser you used for the Azure portal, cached credentials may sign you in automatically and make the behavior hard to observe, so we recommend using InPrivate browsing or a different web browser.
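If the application itself needs to know who signed in, App Service Authentication forwards the authenticated identity to the app in request headers. Below is a minimal ASP.NET Core sketch that reads those headers; the X-MS-CLIENT-PRINCIPAL-* header names are the ones App Service injects, but which claims you actually need is an assumption to verify for your scenario.

using Microsoft.AspNetCore.Mvc;

public class WhoAmIController : Controller
{
    // App Service Authentication (Easy Auth) injects the signed-in user's identity
    // into these headers after the Azure AD sign-in completes.
    [HttpGet("/whoami")]
    public IActionResult Get()
    {
        string name = Request.Headers["X-MS-CLIENT-PRINCIPAL-NAME"];
        string id = Request.Headers["X-MS-CLIENT-PRINCIPAL-ID"];
        return Ok($"Signed in as {name} ({id})");
    }
}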
For example, when the web app developers and the users belong to different organizations, the developers' identity management may be independent of the app users' identity management. Even within the same organization, there are many cases where the tenant that manages developer and user IDs is not the one used for Azure subscription management, and a different tenant or Microsoft accounts are used instead. Because express mode does not let you choose the tenant used for user authentication, in these cases the Azure AD configuration and the Web App configuration have to be carried out separately, and advanced mode becomes necessary.
In relatively large organizations, the team that develops applications on Azure and the team that manages organizational user IDs are often separate. In such cases, the identity management team holds the Azure AD administrative rights and handles its maintenance and operation, and app developers, even if they are "Azure administrators", are no more than ordinary users in Azure AD.
Depending on the organization's security policy, ordinary users may not be allowed to register applications, in which case the identity management team has to register the application in Azure AD on the developers' behalf. In that case, the development side needs to hand over the Web App information configured in Step 2 and be told the tenant ID and application ID to use in Step 3. So advanced mode configuration is needed in these cases as well.
In Azure AD's default settings, "Users can register applications" is enabled. Conversely, if this is not possible, the setting may have been changed deliberately because it conflicts with the organization's security policy. On the other hand, if an application cannot authenticate users against the organization's official Azure AD tenant, a separate identity management and authentication infrastructure becomes necessary, which is itself a problem from a security and governance perspective. That calls for an organizational request workflow such as the "registration on behalf" described above, but that also takes time and effort, so we recommend keeping the default setting that allows users to register applications.
Summary
Whether you use express mode or advanced mode, integrating Azure AD authentication into an Azure Web App is not that difficult, but developers who understand the Azure AD side of the configuration to some extent will find it much easier to adapt, so this article deliberately focused on advanced mode, the comparatively more "tedious" procedure.
We’re happy to announce the availability of Windows 10 Step by Step, 2nd Edition (ISBN 9781509306725), by Joan Lambert.
This is learning made easy. Get more done quickly with the newest version of Windows 10. Jump in wherever you need answers—brisk lessons and colorful screenshots show you exactly what to do, step by step.
Do what you want to do with Windows 10!
Explore fun and functional improvements in the newest version
Customize your sign-in and manage connections
Quickly find files on your computer or in the cloud
Tailor your Windows 10 experience for easy access to the information and tools you want
Work more efficiently with Quick Action and other shortcuts
Get personalized assistance and manage third-party services with Cortana
Interact with the web faster and more safely with Microsoft Edge
Protect your computer, information, and privacy
Introduction
Welcome to the wonderful world of Windows 10! This Step by Step book has been designed so you can read it from the beginning to learn about Windows 10 and then build your skills as you learn to perform increasingly specialized procedures. Or, if you prefer, you can jump in wherever you need ready guidance for performing tasks. The how-to steps are delivered crisply and concisely—just the facts. You’ll also find informative, full-color graphics that support the instructional content.
Who this book is for
Windows 10 Step by Step, Second Edition is designed for use as a learning and reference resource by home and business users of desktop and mobile computers and devices running Windows 10 Home or Windows 10 Pro. The content of the book is designed to be useful for people who have previously used earlier versions of Windows and for people who are discovering Windows for the first time.
What this book is (and isn’t) about
This book is about the Windows 10 operating system. Your computer’s operating system is the interface between you and all the apps you might want to run, or that run automatically in the background to allow you to communicate with other computers around the world, and to protect you from those same computers.
In this book, we explain how you can use the operating system and the accessory apps, such as Cortana, File Explorer, Microsoft Edge, and Windows Store, to access and manage the apps and data files you use in your work and play.
Many useful apps that are part of the Windows “family” are installed by manufacturers or available from the Store. You might be familiar with common apps such as Calendar, Camera, Groove Music, Mail, Maps, News, Photos, and Windows Media Player. This book isn’t about those apps, although we do mention and interact with a few of them while demonstrating how to use features of the Windows 10 operating system.
The Step by Step approach
The book’s coverage is divided into parts that represent general computer usage and management skill sets. Each part is divided into chapters that represent skill set areas, and each chapter is divided into topics that group related skills. Each topic includes expository information followed by generic procedures. At the end of the chapter, you’ll find a series of practice tasks you can complete on your own by using the skills taught in the chapter. You can use the practice files that are available from this book’s website to work through the practice tasks, or you can use your own files.
Features and conventions
This book has been designed to lead you step by step through all the tasks you’re most likely to want to perform in Windows 10. If you start at the beginning and work your way through all the procedures, you’ll have the information you need to administer all aspects of the Windows 10 operating system on a non-domain-joined computer. However, the topics are self-contained, so you can reference them independently. If you have worked with a previous version of Windows, or if you complete all the exercises and later need help remembering how to perform a procedure, the following features of this book will help you locate specific information.
Detailed table of contents Search the listing of the topics, sections, and sidebars within each chapter.
Chapter thumb tabs and running heads Identify the pages of each chapter by the colored thumb tabs on the book’s open fore edge. Find a specific chapter by number or title by looking at the running heads at the top of even-numbered (verso) pages.
Topic-specific running heads Within a chapter, quickly locate the topic you want by looking at the running heads at the top of odd-numbered (recto) pages.
Practice task page tabs Easily locate the practice task sections at the end of each chapter by looking for the full-page colored stripe on the book’s fore edge.
Glossary Look up the meaning of a word or the definition of a concept.
Keyboard shortcuts If you prefer to work from the keyboard rather than with a mouse, find all the shortcuts in one place in the appendix, “Keyboard shortcuts and touchscreen tips.”
Detailed index Look up specific tasks and features in the index, which has been carefully crafted with the reader in mind.
You can save time when reading this book by understanding how the Step by Step series provides procedural instructions and auxiliary information and identifies on-screen and physical elements that you interact with.
About the author
Joan Lambert has worked closely with Microsoft technologies since 1986, and in the training and certification industry since 1997. As President and CEO of Online Training Solutions, Inc. (OTSI), Joan guides the translation of technical information and requirements into useful, relevant, and measurable resources for people who are seeking certification of their computer skills or who simply want to get things done efficiently.
Joan is the author or coauthor of more than four dozen books about Windows and Office apps (for Windows, Mac, and iPad), five generations of Microsoft Office Specialist certification study guides, video-based training courses for SharePoint and OneNote, QuickStudy guides for Windows and Office apps, and the GO! series book for Outlook 2016.
Blissfully based in America’s Finest City, Joan is a Microsoft Certified Professional, Microsoft Office Specialist Master (for all versions of Office since Office 2003), Microsoft Certified Technology Specialist (for Windows and Windows Server), Microsoft Certified Technology Associate (for Windows), Microsoft Dynamics Specialist, and Microsoft Certified Trainer.
Windows 10 Step by Step, Second Edition, is based on the original book coauthored by Joan and her father, Steve Lambert. Joan’s first publishing collaboration with Steve was the inclusion of her depiction of Robots in Love in one of his earliest books, Presentation Graphics on the Apple Macintosh (Microsoft Press, 1984).
High Value Scenarios consist of multiple elements that can help describe attributes of an application's performance effectively. There are a few different scenarios that I use on a day-to-day basis which provide excellent analysis points. In this post, I will explain one of those scenarios: how to build it, how to analyze the results, and why it is effective.
Stair Step
My standard load test is a four-step stair step test. I use this for standard baselining as well as KPI analysis of throughput and response times.
It looks something like this:
There are four stair steps, each one adding 0.5X of the target throughput, where X is defined as the average hourly load of the application, either projected or taken from actual production load.
Once you have X defined, use it to calculate the number of users you need, using this previous post. After you have the number of users needed for 1X, you can start with 0.5X and add 0.5X every 15 minutes for a total test time of one hour.
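As a rough illustration of that user calculation, here is a small sketch based on Little's Law (concurrent users roughly equal throughput multiplied by the sum of response time and think time); the formula and the sample numbers are assumptions standing in for the linked post, not taken from it.

// Rough sketch: estimate virtual users for a throughput target using Little's Law.
// All inputs below are example values, not numbers from the original post.
double targetRequestsPerSecond = 50;   // 1X target throughput
double avgResponseTimeSeconds = 0.4;   // measured or estimated
double thinkTimeSeconds = 5.0;         // pacing between requests per user

double usersFor1X = targetRequestsPerSecond * (avgResponseTimeSeconds + thinkTimeSeconds);
double usersPerStep = usersFor1X * 0.5;   // each stair step adds 0.5X of the target throughput

System.Console.WriteLine($"Users for 1X: {usersFor1X:F0}, users added per step: {usersPerStep:F0}");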
Analyzing Response Times
There are a few different things that response times can do during this scenario: stay constant, increase linearly, or increase logarithmically. Constant response times tell me that the application can handle much more load than we are running. Linear growth shows that the application is starting to queue, but not so much that it is past the point of failure. Logarithmic growth shows that there is substantial queuing happening somewhere in the application and the increasing load keeps adding to it.
Analyzing Throughput
When analyzing throughput, you should see a steady linear trend. Actual throughput should increase at the same rate of scheduled throughput from the test rig.
Putting it together
To determine how the application is performing under varying degrees of load, analyze the response times and throughput of this scenario. This should explain how the application will respond in a real-world production environment with fluctuating load.
Great places to use this type of scenario are applications with predictable workloads. It is a great baselining tool that provides multiple points of analysis that you just can't get from a standard consistent-load scenario, and it avoids the data skewing associated with a max-capacity scenario.
Exceptions to this scenario
This scenario will not work well for applications that have large spikes in throughput.
This scenario will not typically tell you the maximum capacity of the system (Unless the application tips over during one of the load levels)
Example 1 – Consistent KPI
Example 2 – Linear KPI
Example 3 – Logarithmic KPI
My previous blog specifically covered capturing PerfView traces for an ASP.NET Core MVC application on a Windows box.
The current blog targets capturing PerfView traces for an ASP.NET Core MVC application on a Linux box.
Pre-requisites: 1. On the Windows development box, ensure that you have the below components:
putty.exe: This is used to connect to your Guest LINUX box
pscp.exe: This is a command line application to securely transfer the files.
2. Have a Linux operating system with a recent release. I am using the Ubuntu 17.04 release for my demo.
3. The ASPNET Core application with the logging enabled. More Info on enabling logging can be found here in my previous blog.
Let's get started.
Step 1: Installing the Dotnet Core SDK on Linux
1. Install the Dotnet Core SDK for Linux from this article
Then run the commands mentioned in the above article. Note that you need to run the commands that match your distribution version. If you are not sure of the version, you can simply run the below command:
navba@CoreLinuxDemo:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 17.04
Release: 17.04
Codename: zesty
2. After adding the dotnet product feed and installing the dotnet SDK, you can test the installation by creating a sample .NET MVC application using the dotnet CLI. We are creating the app within the myapp directory.
navba@CoreLinuxDemo:~$ dotnet new mvc -o myapp
To see the files the above command created, run the below commands:
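navba@CoreLinuxDemo:~$ cd myapp
navba@CoreLinuxDemo:~/myapp$ ls
3. Run the application from inside the myapp directory with the dotnet run command; you should see output along the lines of the below:
navba@CoreLinuxDemo:~/myapp$ dotnet run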
No XML encryptor configured. Key {79a1ec79-a634-4707-936e-a91a85576f75} may be persisted to storage in unencrypted form.
Hosting environment: Production
Content root path: /home/navba/myapp
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Note: The dotnet run command will internally run the dotnet restore and dotnet build commands.
4. To test the application, you can launch another instance of PuTTY, connect to your Linux box, and try accessing http://localhost:5000 using curl as shown below:
navba@CoreLinuxDemo:~$ curl http://localhost:5000
You should get the HTML response of the application.
Alternatively, you can also try accessing the app using wget as shown below:
navba@CoreLinuxDemo:~$ wget http://localhost:5000
--2017-12-21 12:11:50-- http://localhost:5000/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:5000... connected.
Once we get the expected response from the application, we can confirm that the .NET Core SDK is installed correctly and our application is functioning properly.
Step 2: Install Nginx server
1. We install the nginx server using the below command:
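On Ubuntu this is the stock nginx package from apt, so something like the below should work:
navba@CoreLinuxDemo:~$ sudo apt-get update
navba@CoreLinuxDemo:~$ sudo apt-get install nginx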
2. Once it is installed, try to access your nginx server using the external IP address of the Linux box. If you see the nginx home page, your nginx server installation succeeded and it is functioning fine.
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
Note: Remember to open port 80 in the external firewall (if any).
Step 3: Configuring nginx as a reverse proxy to the .NET Core application:
1. We will clear the default configuration using the below command:
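On Ubuntu, you can empty the default site file and then open it for editing like this (the file path may differ on other distributions):
navba@CoreLinuxDemo:~/myapp$ sudo sh -c 'echo "" > /etc/nginx/sites-available/default'
navba@CoreLinuxDemo:~/myapp$ sudo nano /etc/nginx/sites-available/default
2. Then add a reverse-proxy entry along the lines of the standard ASP.NET Core hosting sample, so that nginx forwards requests arriving on port 80 to the Kestrel server listening on http://localhost:5000:
server {
    listen 80;
    location / {
        # forward incoming requests to the Kestrel server hosting the app
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}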
Note: The port that the dotnet process listens on can be different; ensure that you place the correct one in the nginx config file.
3. To ensure that the syntax in the configuration file is right, you can run the below command:
navba@CoreLinuxDemo:~/myapp$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
4. If the above command reports any syntax errors, you need to review the nginx configuration and manually type the entry again.
5. The below command should reload the new configuration settings.
navba@CoreLinuxDemo:~/myapp$ sudo nginx -s reload
6. Then run the dotnet run command again to spawn the dotnet process.
navba@CoreLinuxDemo:~/myapp$ dotnet run
7. Now try to access the application externally. You should see the Home/Index page of your MVC application; our nginx server successfully proxied the request to the dotnet application.
Note: If you have trouble getting this page, please go through the above steps again.
Now that the application is up and running on Linux via nginx, you can try capturing PerfView traces.
Note that you can add custom logging so that it gets captured within the PerfView trace, using this article.
Step 4: Deploying your own Core application to Linux:
If you need help moving your own application from the Windows environment to Linux before capturing the PerfView traces, you can use pscp.exe. 1. On your dev box, open the CMD prompt in admin mode.
2. Navigate to the location where you placed your application.
3. From this folder, reference the path to your pscp.exe and run the below command to copy your application contents into the myapp folder within your profile.
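Assuming the Linux box's address is <linux-box-ip> (a placeholder for your own machine), the pscp command looks something like the below; it will prompt for the account password:
pscp -r * navba@<linux-box-ip>:/home/navba/myapp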
4. After this, go back to your Linux session and run the ls command to ensure that your files have been placed correctly.
5. Run the dotnet run command to launch the dotnet process and note the port number it is listening on. Then modify the nginx configuration file by following the steps from the Step 3 section above.
Step 5: Capturing PerfView traces for your application
1. Launch a different instance of PuTTY and connect to your Linux box.
2. The below command will download the perfview tool:
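On Linux, the data-collection side of PerfView is the perfcollect script; following the official .NET performance docs, downloading and preparing it looks like this:
navba@CoreLinuxDemo:~$ curl -OL https://aka.ms/perfcollect
navba@CoreLinuxDemo:~$ chmod +x perfcollect
navba@CoreLinuxDemo:~$ sudo ./perfcollect install
After that, a trace is typically started with sudo ./perfcollect collect sampleTrace (sampleTrace being any name you choose), and the resulting file can be opened in PerfView on a Windows box.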
Even though it's not a recommended configuration for our customers (in terms of spam filtering), some customers of Office 365 route their email through a competing spam filtering service in the cloud, or through an on-prem server. That is, inbound mail goes to the third-party filter or on-prem server first, and from there into Office 365.
I've written previously about the problems this can cause; see Hooking up additional spam filters in front of or behind Office 365. However, if you must do it, you may want to ensure that you force all email to go through the 3rd party server. If you hook up a 3rd party server, email can be delivered to your organization in Office 365 either through that server or by connecting directly to EOP (Exchange Online Protection). The direct path is not good: because your MX record does not point to Office 365 or EOP, some spam filtering checks are suppressed automatically to avoid false positives, and our filters can't tell that the email arrived directly rather than through your gateway.
Therefore, to get the fullest protection possible, I recommend relying upon the 3rd party service, and then maybe or maybe not doing double-filtering in EOP (accepting the fact that there will be false positives and false negatives). But, don't just rely on EOP.
So, to force email through your on-prem server, you will need to install a TLS cert on your on-prem server, and then always ensure that it's used when connecting from that server to Office 365. Then, create a partner connector using a seldom-used attribute which isn't exposed via the mainstream UX but only through a cmdlet. It's called AssociatedAcceptedDomains.
To do this with TLS-cert-based connectors using cmdlets:
New-InboundConnector
-Name "OnlyAcceptEmailFrom<OnPremServer>"
-ConnectorType Partner
-SenderDomains *
-RestrictDomainsToCertificate $true
-TlsSenderCertificateName <Full set of TLS cert names>
-AssociatedAcceptedDomains <full list of accepted domains that belong to your organization>
You may have to tweak this a bit to get it right, so you may want to experiment with some smaller domains before enabling it for every domain in your organization.
What this does is reject messages that don't come over the TLS cert; so long as your on-prem server is correctly configured, any email that tries to connect directly to Office 365/EOP should be rejected.
You would want to use this when you are connecting through an on-premises mail server and you can control the certificate. However, if you are connecting through a shared service and cannot specify the TLS cert, then this would probably not be appropriate without some modifications to the connector.
Disclaimer - If you haven't read my disclaimer yet, make sure you do so here. TL;DR version - Buyer beware, I am not an expert, I am fumbling my way through this like the rest of you.
Also, I hold a little bit of Bitcoin and Ethereum.
Way back in the fall of 2012, I attended the Virus Bulletin conference in Dallas, TX. While I was there, I remember either attending or hearing about a session entitled Malware taking a bit(coin) more than we bargained for.
The presentation was by a researcher at Microsoft, and they talked about how bitcoin was a new digital currency just starting to gain traction. In response, new malware families were arising that would either take over users' computers to mine bitcoin (this was back in the day when a single computer still had a reasonable chance of actually mining one) or try to steal users' bitcoins. I think that may have been my first introduction to Bitcoin, and I remember thinking at the time that it was interesting, but I wasn't sure whether or not it would catch on as a digital currency. If the malware creators succeeded in mining Bitcoin, they would have seen it go up in value by 100x.
Fast forward several years to 2017 and the WannaCry malware outbreak. Malware hurts, and ransomware is even more painful because you're locked out of your system; the market incentive to pay the ransom is enticing if you can be certain that it will unlock your system, but the drawback is that paying incentivizes bad behavior by the malware author.
Both cases are examples of malware creators looking toward alternative payment methods to make themselves less trackable.
But what's interesting is how malware writers have stayed with that principle but have switched out cryptocurrencies. Whereas before they were mining bitcoin, now they are mining Monero:
These are just a few snippets of articles I found, and you can see they span 15 months. So, while it's a newer thing, it's not totally brand new. But the point is: Hackers are diversifying into alt-coins (an alt-coin is anything that is not a bitcoin).
As I say in some of my other cryptocurrency articles, the value of a digital currency built on blockchain is how many users believe in it, build on top of it, and start using it. Hackers and malware authors were early adopters of Bitcoin, and they seem to be proven right (so far... barring a collapse of Bitcoin). Do they have any special insight into whether or not Monero will eventually be successful?
You can do your own research into what Monero is and how it differs from Bitcoin. My own quick summary is that it's a digital currency like Bitcoin, but it isn't built on the Bitcoin code the way a lot of other cryptocurrencies are. And whereas Bitcoin is pseudonymous, all of its transactions are public. If you observe enough patterns, you can see that random ID #1 sent 0.5 BTC to random ID #2. You don't have the identities of everyone yet, but with enough observations you may be able to figure out some of them. Bitcoin leaves a trail that can be traced back to the original transaction participants (in some cases, depending upon how many resources the investigator wants to spend).
Monero is different because it is much more private. Instead of this:
A sends xx bitcoins to B
You get this:
? sends ? to ?
You can see that's more private and not trackable.
There are some legitimate use cases for hiding your financial transactions from all viewing eyes; using regular cash is kind of like this. But on the other hand, one of those use cases is criminal activity; if you're exchanging illegal goods or services, you want that to be hidden from everyone. Thus, just as Bitcoin earned a reputation for being useful for underground transactions, Monero could market itself the same way. No doubt cyber criminals already see it that way; that's why they are mining Monero using other people's machines.
I sympathize with the problems that altcoins are trying to solve. But by introducing stronger privacy, they also set themselves up as a magnet for criminal activity. The maintainers of the code may say that they are building a platform and are not responsible for its usage. I'm not so sure about that.