SQL Server Upgrade Paths
Hello everyone, this is the BI Data Platform support team. In this post we introduce the upgrade paths for SQL Server.
For upgrade paths, we publish technical documentation such as the following. (The article below covers upgrade paths to SQL Server 2016.)
Title: Supported Version and Edition Upgrades
The documentation above describes upgrade paths on the same server (in-place upgrades); it does not apply to upgrades performed by migrating databases to a different server (migration).
For example, per the documentation above, an in-place upgrade from SQL Server 2014 Standard Edition to SQL Server 2016 Express Edition is not a supported upgrade path, so attempting it will fail. A migration-based upgrade, on the other hand, is not covered by that documentation, so upgrading by moving the data from SQL Server 2014 Standard Edition to SQL Server 2016 Express Edition on a different server will succeed.
In other words, a migration-based upgrade does not need to follow the published upgrade paths, and you can even move to a lower edition. The service pack requirements for the upgrade source listed in the documentation do not apply either.
However, the limitations of the destination SQL Server edition still apply (for example, Express Edition limits each database to a maximum of 10 GB), so take particular care when the destination edition is lower than the source edition. Also, a database that has been upgraded to a higher version cannot be restored to an environment running a lower version.
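As an illustration, a migration of this kind is typically done with a backup and restore; in the sketch below the database name, logical file names, and file paths are all hypothetical, and the restore is assumed to run on the destination SQL Server 2016 instance.

-- On the source SQL Server 2014 Standard Edition instance:
BACKUP DATABASE [SalesDb]
TO DISK = N'C:\Backup\SalesDb.bak'
WITH INIT;

-- Copy the .bak file to the destination server, then restore it on the target
-- SQL Server 2016 (for example, Express Edition) instance. The database is
-- upgraded to the 2016 format during the restore and cannot be restored back
-- to the 2014 instance afterwards.
RESTORE DATABASE [SalesDb]
FROM DISK = N'C:\Backup\SalesDb.bak'
WITH MOVE N'SalesDb'     TO N'C:\Data\SalesDb.mdf',
     MOVE N'SalesDb_log' TO N'C:\Data\SalesDb_log.ldf',
     RECOVERY;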
Similar information is covered in the self-study guide below, so please review it as well.
Title: SQL Server 2016 実践シリーズ No.1
Section: Migratable Databases (移行可能なデータベース)
* The contents of this blog post are current as of December 2017.
(Cross-Post) Cloud storage now more affordable: Announcing general availability of Azure Archive Storage
Today we’re excited to announce the general availability of Archive Blob Storage starting at an industry leading price of $0.002 per gigabyte per month! Last year, we launched Cool Blob Storage to help customers reduce storage costs by tiering their infrequently accessed data to the Cool tier. Organizations can now reduce their storage costs even further by storing their rarely accessed data in the Archive tier. Furthermore, we’re also excited to announce the general availability of Blob-Level Tiering, which enables customers to optimize storage costs by easily managing the lifecycle of their data across these tiers at the object level.
From startups to large organizations, our customers in every industry have experienced exponential growth of their data. A significant amount of this data is rarely accessed but must be stored for a long period of time to meet either business continuity or compliance requirements; think employee data, medical records, customer information, financial records, backups, etc. Additionally, recent and coming advances in artificial intelligence and data analytics are unlocking value from data that might have previously been discarded. Customers in many industries want to keep more of these data sets for a longer period but need a scalable and cost-effective solution to do so.
“We have been working with the Azure team to preview Archive Blob Storage for our cloud archiving service for several months now. I love how easy it is to change the storage tier on an existing object via a single API. This allows us to build Information Lifecycle Management into our application logic directly and use Archive Blob Storage to significantly decrease our total Azure Storage costs.”
-Tom Inglis, Director of Enabling Solutions at BP
For more details, please see the original post on the Microsoft Azure Blog.
Broken Warnings Theory
The "broken warnings theory" is a fictional theory of the norm-setting and signaling effect of coding practices and bug-checking techniques in 3rd party libraries on new bugs and design anti-patterns. The theory states that maintaining and monitoring warning levels to prevent small problems such as "signed/unsigned mismatch", "no effect before comma", and "non-standard extension used" helps to create an atmosphere of order and lawfulness, thereby preventing more serious bugs, like buffer overruns, from happening.
Problem Description
Jokes aside though, not all warnings are created equal:
- Some are precise
- Some are useful
- Some are actionable
- Some are fast to detect
- Some have little effect on existing code bases
Virtually none have all 5 of these nice-to-have characteristics, so a particular warning usually falls somewhere on the spectrum of these traits, creating endless discussions on which should or should not be reported. Naturally, different teams settle on different criteria as to which set of warnings should be emitted, while compiler developers try to fit them into some over-approximated taxonomy that satisfies those numerous criteria. Clang and GCC try to be more fine-grained by using warning families, while Visual C++ is more coarse-grained with its use of warning levels.
In our Diagnostics Improvements Survey, 15% of 270 respondents indicated they build their code with /Wall /WX, indicating they have zero tolerance for any warnings. Another 12% indicated they build with /Wall, which implies /W4 with all off-by-default warnings enabled. Another 30% build with /W4. These were disjoint groups that altogether make up 57% of users who have stricter requirements for their code than the default of the Visual C++ IDE (/W3) or the compiler by itself (/W1). These levels are somewhat arbitrary and in no way represent our own practices; the Visual C++ libraries team, for example, strives hard to have all our libraries be /W4 clean.
While everyone disagrees on which subset of warnings should be reported, most agree there should be 0 warnings from the agreed-upon set admitted in a project: all should be fixed or suppressed. On one hand, 0 makes any new warning a JND of the infamous Weber-Fechner law; on the other, it is often a necessity in cross-platform code, where it's been repeatedly reported that warnings on one platform/compiler can manifest themselves as errors or, worse, bugs on another. This zero tolerance for warnings can easily be enforced for internal code, yet it is virtually unenforceable for external code from 3rd-party libraries, whose authors may have settled on a different set of [in]tolerable warnings. Requiring all libraries to be clean with regard to all known warnings is both impractical (due to false positives and the absence of a standard notation to suppress them) and impossible to achieve (as the set of all warnings is an ever-growing target). The latter is a result of the coevolution of compilers and library ecosystems, where improvements in one require improvements in the other to keep up in the race. Because of this coevolution, a developer will often be dealing with compilers that haven't caught up with their libraries, or libraries that haven't caught up with their compilers, and neither of those is under the developer's control. Developers in such circumstances, which we'd argue are all developers using a living and vibrant language like C++, effectively want control over the emission of warnings in code they don't have control over.
Proposed Solution
We offer a new group of compiler switches, /external:*, dealing with "external" headers. We chose the notion of "external header" over the "system header" that other compilers use, as it better represents the variety of 3rd-party libraries in existence. Besides, the standard already refers to external headers in [lex.header], so it was only natural. We define a group instead of just new switches to ease discoverability: users can guess the full syntax of a switch based on the switches they already know. For now, this group consists of 5 switches split into 2 categories (each described in its own section below):
Switches defining the set of external headers:
- /external:I <path>
- /external:anglebrackets
- /external:env:<var>
Switches defining diagnostics behavior on external headers:
- /external:W<n>
- /external:templates-
The 2nd group may later be extended to /external:w, /external:Wall, /external:Wv:<version>, /external:WX[-], /external:w<n><warning>, /external:wd<warning>, /external:we<warning>, /external:wo<warning>, etc., which would constitute an equivalent of the corresponding warning switch when applied to an external (as opposed to user) header, or any other switch when it would make sense to specialize it for external headers. Please note that since this is an experimental feature, you will have to additionally use the /experimental:external switch to enable it until we finalize its functionality. Let's see what those switches do.
External Headers
We currently offer 4 ways for users and library writers to define what constitutes an external header, which differ in how easy they are to add to build scripts, how intrusive they are, and how much control they offer.
- /external:I <path> - a moral equivalent of -isystem, or just -i (lowercase) from GCC, Clang and EDG, that defines which directories contain external headers. All recursive sub-directories of that path are considered external as well, but only the path itself is added to the list of directories searched for includes.
- /external:env:<var> - specifies the name of an environment variable that holds a semicolon-separated list of directories with external headers. This is useful for build systems that rely on environment variables like INCLUDE and CAExcludePath to specify the list of external includes and those that shouldn't be analyzed by /analyze, respectively. The user can simply use /external:env:INCLUDE and /external:env:CAExcludePath instead of a long list of directories passed via the /external:I switch.
- /external:anglebrackets - a switch that allows a user to treat all headers included via #include <> (as opposed to #include "") as external headers.
- #pragma system_header - an intrusive header marker that allows library writers to mark certain headers as external (a short sketch of this option follows the list).
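For instance, a library author could mark one of their own headers as external by placing the pragma at the top of the file; the header below is a hypothetical sketch, not taken from any real library:

// some_lib_dir/vendor_util.hpp  (hypothetical 3rd-party header)
#pragma system_header  // everything below is treated as external by consumers

template <typename T>
struct vendor_wrapper {
    static const T value = -1;  // would otherwise trigger C4245 for unsigned T at /W4
};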
Warning Level for External Headers
The basic idea of the /external:W<n> switch is to define the default warning level for external headers. We wrap those inclusions with a moral equivalent of:
#pragma warning (push, n)
// the global warning level is now n here
#pragma warning (pop)
Combined with your preferred way to define the set of external headers, /external:W0 is everything you need to do to entirely shut off any warnings emanating from those external headers.
Example:
External Header: some_lib_dir/some_hdr.hpp
template <typename T>
struct some_struct {
    static const T value = -7; // W4: warning C4245: 'initializing': conversion from 'int' to 'unsigned int', signed/unsigned mismatch
};
User code: my_prog.cpp
#include "some_hdr.hpp" int main() { return some_struct<unsigned int>().value; }
Compiling this code as:
cl.exe /I some_lib_dir /W4 my_prog.cpp
will emit a level-4 C4245 warning inside the header, mentioned in the comment. Running it with:
cl.exe /experimental:external /external:W0 /I some_lib_dir /W4 my_prog.cpp
has no effect as we haven't specified what external headers are. Likewise, running it as:
cl.exe /experimental:external /external:I some_lib_dir /W4 my_prog.cpp
has no effect either, as we haven't specified what the warning level in external headers should be; by default it is the same as the level specified in the /W switch, which is 4 in our case. To suppress the warning in external headers, we need to both specify which headers are external and what the warning level in those headers should be:
cl.exe /experimental:external /external:I some_lib_dir /external:W0 /W4 my_prog.cpp
This would effectively get rid of any warning inside some_hdr.hpp while preserving warnings inside my_prog.cpp.
Warnings Crossing an Internal/External Boundary
Simply setting a warning level for external headers would have been good enough if doing so didn't hide some user-actionable warnings. The problem with doing just a pragma push/pop around include directives is that it effectively shuts off all the warnings that would have been emitted on template instantiations originating from the user code, many of which could have been actionable. Such warnings might still indicate a problem in the user's code that only happens in instantiations with particular types (e.g. the user forgot to apply a type trait removing const or &), and the user should be aware of them. Before this update, the determination of the warning level in effect at a warning's program point was entirely lexical, while the reasons that caused the warning could have originated in other scopes. With templates, it seems reasonable that warning levels in place at instantiation points should play a role in which warnings are and aren't permitted for emission.
In order to avoid silencing the warnings inside templates whose definitions happen to be in external headers, we allow the user to exclude templates from the simplified logic for determining warning levels at a given program point by passing /external:templates- along with /external:W<n>. In this case, we look not only at the effective warning level at the program point where the template is defined and the warning occurred, but also at the warning levels in place at every program point across the template instantiation chain. Our warning levels form a lattice with respect to the set of messages emitted at each level (well, not a perfect one, since we sometimes emit warnings at multiple levels). One over-approximation of what warnings should be allowed at a given program point with respect to this lattice would be to take the union of messages allowed at each program point across the instantiation chain, which is exactly what passing /external:templates- does. With this flag, you will be able to see warnings from external headers as long as they are emitted from inside a template and the template is instantiated from within user (non-external) code.
cl.exe /experimental:external /external:I some_lib_dir /external:W0 /external:templates- /W4 my_prog.cpp
This makes the warning inside the external header reappear, even though the warning is inside an external header with warning level set to 0.
Suppressing and Enforcing Warnings
The above mechanism does not by itself enable or disable any warnings; it only sets the default warning level for a set of files, and thus all the existing mechanisms for enabling, disabling and suppressing warnings still work:
- /wdNNNN, /w1NNNN, /weNNNN, /Wv:XX.YY.ZZZZ etc.
- #pragma warning( disable : 4507 34; once : 4385; error : 4164 )
- #pragma warning( push[ ,n ] ) / #pragma warning( pop )
In addition to these, when /external:templates- is used, we allow a warning to be suppressed at the point of instantiation. In the above example, the user can explicitly suppress the warning that reappeared due to the use of /external:templates- as follows:
int main() {
#pragma warning( suppress : 4245)
    return some_struct<unsigned int>().value;
}
On the other side of the developer continuum, library writers can use the exact same mechanisms to enforce certain warnings, or all warnings at a certain level, if they feel those should never be silenced with /external:W<n>.
Example:
External Header: some_lib_dir/some_hdr.hpp
#pragma warning( push, 4 )
#pragma warning( error : 4245 )
template <typename T>
struct some_struct {
    static const T value = -7; // W4: warning C4245: 'initializing': conversion from 'int'
                               // to 'unsigned int', signed/unsigned mismatch
};
#pragma warning( pop )
With the above change to the library header, the library owner now ensures that the global warning level in this header is going to be 4 no matter what the user specified in /external:W<n>, and thus all level 4 and above warnings will be emitted. Moreover, as in the above example, she can enforce that a certain warning is always treated as an error, disabled, suppressed or emitted once in her header, and, again, the user will not be able to override that deliberate choice.
Limitations
In the current implementation you will still occasionally get a warning through from an external header when that warning was emitted by the compiler's back-end (as opposed to front-end). These warnings usually start with C47XX, though not all C47XX warnings are back-end warnings. A good rule of thumb is that if detection of a given warning may require data or control-flow analysis, then it is likely done by the back-end in our implementation and such a warning won't be suppressed by the current mechanism. This is a known problem and the proper fix may not arrive until the next major release of Visual Studio as it requires breaking changes to our intermediate representation. You can still disable these warnings the traditional way with /wd47XX.
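For instance, assuming an external header kept emitting the back-end warning C4702 (unreachable code, chosen here purely as an illustration) despite /external:W0, it could still be silenced the classic way:

cl.exe /experimental:external /external:I some_lib_dir /external:W0 /W4 /wd4702 my_prog.cpp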
Besides, this experimental feature hasn't been integrated yet with /analyze warnings, as we try to gather some feedback from the users first. /analyze warnings do not have warning levels, so we are also investigating the best approach to integrate them with the current logic.
We currently don't have guidance on the use of this feature for SDL compliance, but we will be in contact with the SDL team to provide such guidance.
Conclusion
Coming back to the analogy with the Broken Windows Theory, we had mixed feelings about the net effect of this feature on the broader libraries ecosystem. On one hand, it does a disservice to library writers by putting their users into "not my problem" mode and making them less likely to report or fix problems upstream. On the other hand, it gives users more control over their own code, as they can now enforce stricter requirements on it by subduing rogue libraries that prevented such enforcement in the past.
While we agree that this secondary effect of the feature might limit contributions back to the library, fixing issues upstream is usually not a user's top priority given the code she is working on; fixing issues in her own code is her topmost priority, and warnings from other libraries obstruct detection of warnings in it because she cannot enforce /WX on her code only. More importantly, we believe this will have a tertiary effect that balances the net loss of the secondary effect.
By enabling a developer to abstract away from 3rd-party library warnings, we encourage her to concentrate on her own code: make it cleaner, possibly even warning-free at as high a warning level as she possibly can. 3rd-party library developers are also developers in this chain, so by allowing them to abstract away from their own 3rd-party dependencies, we encourage them to clean up their code and make it compile at as high a warning level as they possibly can, and so on. Why is this important? In essence, in the current world warnings avalanche across the entire chain of library dependencies, and the further down this chain you are, the more difficult it becomes to do something about them: the developer feels overwhelmed and gives up on any attempt to do so. In a world where we can distinguish our own code from 3rd-party code, each developer in the chain has the means to stop (block the effects of) the avalanche and is encouraged to minimize its impact, which minimizes the overall impact on the entire chain. This is speculation, of course, but we think it is as plausible as the secondary effect we were concerned about.
In closing, we would like to invite you to try the feature out for yourself and let us know what you think. Please do tell us both what you like and what you don't like about it as otherwise the vocal minority might decide for you. The feature is available as of Visual Studio 15.6 Preview 1. As always, we can be reached via the comments below, via email (visualcpp@microsoft.com) and you can provide feedback via Help -> Report A Problem in the product, or via Developer Community. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).
P.S. Kudos to Robert Schumacher for pointing to the analogy with the Broken Windows Theory!
Install SQL Server 2017 Using PowerShell Desired State Configuration and SqlServerDsc
How many times have you clicked through the SQL Server installation interface, just clicking the same old buttons, entering the same old information, not giving it much of a second thought? Then the installation finishes and the realization hits: "I forgot to specify the DBA group in the sysadmin role." Now you have to spend precious time dropping into single-user mode, adding the appropriate users/groups, bringing SQL back up in multi-user mode, and testing. What's worse, your confidence in the entire installation is now shaken. "What else did I forget?" I for one have been in that situation more than once.
Enter PowerShell Desired State Configuration (DSC). Using DSC I can build one configuration template that can be reused over hundreds and thousands of servers. Depending on the build, I may have to tweak a few of the setup parameters, but that's not a big deal because I can still keep all of the standard settings in place. The beautiful thing is it eliminates the possibility that I will forget to enter an important parameter after spending a sleepless night caring for my kids.
In this article I will explore the initial setup of a standalone instance of SQL Server 2017 on Windows Server 2016 using the SqlServerDsc DSC resource. Some prior knowledge of DSC will be helpful as I will not explore the hows and whys of how DSC works.
The following items are required for this walkthrough:
- A machine running Windows Server 2016
- SQL Server 2017 installation media
- The SqlServerDsc DSC resource (version 10.0.0.0 is the current release at the time of this writing)
Prerequisites
In most cases DSC will be used to handle the prerequisites. However, for the purposes of this demo, I will handle the prerequisites manually.
Install the SqlServerDsc DSC Resource
The SqlServerDsc DSC resource can be downloaded from the PowerShell Gallery using the Install-Module cmdlet. Note: Ensure PowerShell is running "As Administrator" to install the module.
Install-Module -Name SqlServerDsc
Obtain the SQL Server 2017 Installation Media
Download the SQL Server 2017 installation media to the server. I downloaded SQL Server 2017 Enterprise from my Visual Studio subscription and copied the ISO to "C:\en_sql_server_2017_enterprise_x64_dvd_11293666.iso".
Now the ISO must be extracted to a directory.
New-Item -Path C:\SQL2017 -ItemType Directory
$mountResult = Mount-DiskImage -ImagePath 'C:\en_sql_server_2017_enterprise_x64_dvd_11293666.iso' -PassThru
$volumeInfo = $mountResult | Get-Volume
$driveInfo = Get-PSDrive -Name $volumeInfo.DriveLetter
Copy-Item -Path ( Join-Path -Path $driveInfo.Root -ChildPath '*' ) -Destination C:\SQL2017 -Recurse
Dismount-DiskImage -ImagePath 'C:\en_sql_server_2017_enterprise_x64_dvd_11293666.iso'
Create the Configuration
Configuration
Create the configuration function which will be called to generate the MOF document(s).
Configuration SQLInstall {...}
Modules
Import the modules into the current session. These tell the configuration document how to build the MOF document(s) and tell the DSC engine how to apply the MOF document(s) to the server.
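For the configuration in this article this amounts to a single line inside the Configuration block (WindowsFeature comes from the built-in PSDesiredStateConfiguration module, so only SqlServerDsc needs an explicit import), matching the complete configuration shown later:

# Make the SqlSetup resource from the SqlServerDsc module available to the configuration.
Import-DscResource -ModuleName SqlServerDsc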
Resources
.NET Framework
SQL Server relies on the .NET Framework; therefore we need to ensure it is installed prior to installing SQL Server. To accomplish this, the WindowsFeature resource is utilized to install the Net-Framework-45-Core Windows feature.
WindowsFeature 'NetFramework45'
{
    Name   = 'Net-Framework-45-Core'
    Ensure = 'Present'
}
SqlSetup
The SqlSetup resource is used to tell DSC how to install SQL Server. The parameters required for a basic installation are:
- InstanceName - The name of the instance. Utilize MSSQLSERVER for a default instance.
- Features - The features to install. In this example I am only installing the SQLEngine feature.
- SourcePath - The path to the SQL installation media. In this example I stored the SQL installation media in "C:\SQL2017". A network share can be utilized to minimize the space used on the server.
- SQLSysAdminAccounts - The users or groups who are to be a member of the sysadmin role. In this example I am granting the local Administrators group sysadmin access. Note: This configuration is not recommended in a high security environment.
A full list and description of the parameters available on SqlSetup are available on the SqlServerDsc GitHub repository.
The SqlSetup resource is odd because it only installs SQL and DOES NOT maintain the settings that are applied. For example, if the SQLSysAdminAccounts are specified at installation time, an admin could add or remove logins to/from the sysadmin role and the SqlSetup resource wouldn't care. If it is desired that DSC enforce the membership of the sysadmin role, the SqlServerRole resource should be utilized.
Complete Configuration
Configuration SQLInstall
{
    Import-DscResource -ModuleName SqlServerDsc

    node localhost
    {
        WindowsFeature 'NetFramework45'
        {
            Name   = 'NET-Framework-45-Core'
            Ensure = 'Present'
        }

        SqlSetup 'InstallDefaultInstance'
        {
            InstanceName        = 'MSSQLSERVER'
            Features            = 'SQLENGINE'
            SourcePath          = 'C:\SQL2017'
            SQLSysAdminAccounts = @('Administrators')
            DependsOn           = '[WindowsFeature]NetFramework45'
        }
    }
}
Build and Deploy
Compile the Configuration
Dot-source the configuration script:
. .\SQLInstallConfiguration.ps1
Execute the configuration function:
SQLInstall
A directory called "SQLInstall" will be created in the working directory and will contain a file called "localhost.mof". Examining the contents of the MOF will show the compiled DSC configuration.
Deploy the Configuration
To start the DSC deployment of SQL Server, call the Start-DscConfiguration cmdlet. The parameters provided to the cmdlet are:
- Path - The path to the folder containing the MOF documents to deploy (e.g., "C:\SQLInstall").
- Wait - Wait for the configuration job to complete.
- Force - Override any existing DSC configurations.
- Verbose - Show the verbose output. This is handy when pushing a configuration for the first time to aid in troubleshooting.
Start-DscConfiguration -Path C:\SQLInstall -Wait -Force -Verbose
As the configuration applies, the Verbose output will show you what is happening and give you a warm and fuzzy feeling that SOMETHING is happening. As long as no errors (red text) are thrown, when "Operation 'Invoke CimMethod' complete." is displayed on the screen, SQL should be installed.
Validate Installation
DSC
The Test-DscConfiguration cmdlet can be utilized to determine whether the current state of the server, in this case the SQL installation, meets the desired state. The result of Test-DscConfiguration should be "True".
PS C:\> Test-DscConfiguration
True
Services
The services listing should now return the SQL Server services:
PS C:\> Get-Service -Name *SQL*

Status   Name             DisplayName
------   ----             -----------
Running  MSSQLSERVER      SQL Server (MSSQLSERVER)
Stopped  SQLBrowser       SQL Server Browser
Running  SQLSERVERAGENT   SQL Server Agent (MSSQLSERVER)
Running  SQLTELEMETRY     SQL Server CEIP service (MSSQLSERVER)
Running  SQLWriter        SQL Server VSS Writer
SQL Server
PS C:\> & 'C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn\SQLCMD.EXE' -S $env:COMPUTERNAME
1> SELECT @@SERVERNAME
2> GO
1> quit
That's a Wrap
And there you have it. Installing SQL Server in a consistent manner has never been easier.
GENERAL_READ_ENTITY_START
I was looking around for some understanding of what this event within the request pipeline means. As seen in Figure 1, you can find this event when you are capturing a Failed Request trace.
I wrote a lab about capturing Failed Request Traces here –> Lab 4: Install and configure Failed Request Tracing
Figure 1, What does GENERAL_READ_ENTITY_START mean
In Figure 1 you can see that the entry into that event is the last one in the log, so pre-IIS 10 we must conclude that this is where the time-taken rule of this trace was triggered. Whether the request eventually completed successfully or timed out we don't know, but what we do know is that it took longer to reach the GENERAL_READ_ENTITY_END event than the Failed Request rule configuration allowed. In IIS 10 you can enable traceAllAfterTimeout and then you will get the trace for the entire request; I describe that here.
So what does it mean when the request takes time between the GENERAL_READ_ENTITY_START and GENERAL_READ_ENTITY_END event?
When I correlated this trace with the IIS and HTTP logs, I found an HTTP reason code of Timer_EntityBody, which is discussed here. The description of that HTTP reason phrase is:
“The connection expired before the request entity body arrived. When a request clearly has an entity body, the HTTP API turns on the Timer_EntityBody timer. At first, the limit of this timer is set to the ConnectionTimeout value (typically, two minutes). Every time that another data indication is received on this request, the HTTP API resets the timer to give the connection two more minutes (or whatever is specified in ConnectionTimeout).”
And what I translate this to mean is that the client has made a connection and there is a problem with the client starting, continuing or completing the upload of the request body.
If you get this issue, you might want to see which kinds of clients are having it: is it specific client software or a particular browser? Look at the client IP in the IIS logs to see whether the issue is coming from a specific geographic location; is that location far away from your server? Try a different client browser or connection method.
As you can see in Figure 2, the request entered and left the GENERAL_READ_ENTITY events the first time pretty quickly (15 ms), but the second time the request enters the GENERAL_READ_ENTITY_START event, the log is written because the time-taken Failed Request rule threshold has been breached. This relates back to the description of the Timer_EntityBody reason in that "…Every time that another data indication is received on this request, the HTTP API resets the timer…", and in the case of Figure 2 this timer would have been reset. As already mentioned, pre-IIS 10 we do not know how long it took to complete the request, but you can look at the IIS and HTTP logs to get more details now that you have a trace with a specific date and time stamp.
Figure 2, What does GENERAL_READ_ENTITY_START mean
Postmortem – Availability issues with Visual Studio Team Services on 16 November 2017
On 16 November 2017 we had a global incident with Visual Studio Team Services (VSTS) that had a serious impact on the availability of our service (https://blogs.msdn.microsoft.com/vsoservice/?p=15526). We apologize for the disruption. Below we describe the cause and the actions we are taking to address the issues.
Customer Impact
This was a global incident that caused performance issues and errors across all instances of VSTS, impacting many different scenarios. The incident occurred within Shared Platform Services (SPS), which contains identity, account, and commerce information for VSTS.
The incident started on 16 November at 12:15 UTC and ended at 12:40 UTC. The same issue occurred again the same day from 16:30 until 16:35 UTC.
The graph below shows the number of impacted users during the incident.
What Happened
The Commerce service in SPS is responsible for billing events. It’s the service in VSTS that interfaces with Azure Commerce to support purchasing extensions, VS subscriptions, pipelines, etc. The Commerce service is one of several services, including identity and account, which run as one service in SPS.
There was a change made to a stored procedure, used to fetch subscription information for an account, that resulted in high TempDB contention. The problem was a join condition using the OR operator, which yielded an inefficient query plan. You can see the TempDB usage in the diagram below (Table Spool in the diagram). To fix it, we removed the OR and added hinting to force a good query plan.
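As an illustration (not the actual VSTS stored procedure; the table, column, and parameter names below are hypothetical), a join whose ON clause contains an OR can be rewritten as a UNION ALL of two single-condition queries:

-- Original shape: the OR in the join predicate often forces the optimizer
-- into a plan with spools and heavy tempdb usage.
SELECT s.SubscriptionId, s.OfferName
FROM dbo.Subscriptions AS s
JOIN dbo.Accounts AS a
    ON s.AccountId = a.AccountId
    OR s.OwnerId  = a.OwnerId
WHERE a.AccountId = @AccountId;

-- Rewritten shape: two simple joins combined with UNION ALL, each of which
-- can use an index seek.
SELECT s.SubscriptionId, s.OfferName
FROM dbo.Subscriptions AS s
JOIN dbo.Accounts AS a ON s.AccountId = a.AccountId
WHERE a.AccountId = @AccountId
UNION ALL
SELECT s.SubscriptionId, s.OfferName
FROM dbo.Subscriptions AS s
JOIN dbo.Accounts AS a ON s.OwnerId = a.OwnerId
WHERE a.AccountId = @AccountId
  AND s.AccountId <> a.AccountId;  -- avoid double-counting rows matched by both conditions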
Next Steps
Beyond the immediate fix to the SQL stored procedure, we are taking the following steps to prevent the issue going forward.
- We missed the OR in code review, so we are making sure engineers understand our SQL guidelines, including the use of UNION or UNION ALL rather than OR as well as a reminder to hint queries.
- As part of the investigation of this incident, we found and fixed three other suboptimal stored procedures.
- We had already begun work to pull the Commerce service out of the SPS service and separate it from identity and account. That work is well under way and will start going into production in January. This will ensure that an incident like this will be contained within the Commerce service and not affect critical operations like authentication.
- We are working on partitioning SPS. We currently have a dogfood instance in production, though the access pattern to trigger the issue was not present there (insufficient number of subscriptions). We have engineers dedicated to implementing a partitioned SPS service, which will allow for an incremental, ring-based deploy that limits the impact of issues. That is scheduled to begin deployment to production in early summer.
We again apologize for the disruption this caused you. We are fully committed to improving our service to limit the damage from an issue like this and to be able to mitigate it more quickly if it occurs.
Sincerely,
Buck Hodges
Director of Engineering, VSTS
Surface Hubs on wired 802.1x may experience intermittent connectivity issues
During regular usage of the Surface Hub on a network that has 802.1x enabled but not enforced, you may observe intermittent Skype for Business (SfB) issues (mid-call media failures, or 100% packet loss for media sessions followed by re-establishment of those sessions) or frequent DHCP requests originating from the Hub.
Depending on how the 802.1x network infrastructure is configured, it may request any capable device to authenticate on a regular basis (re-authentication interval). The Hub has been capable of 802.1x since the Windows 1703 update, but without further configuration, it will only try this authentication with the machine auth defaults (and no specific domain credentials/certificates) of the Wired AutoConfig service (dot3svc). When the authentication request fails, the Ethernet adapter then re-initializes and connects as an un-authenticated device. This reinit can take ~30 seconds, during which time the device will appear to have no connectivity and would interrupt any activities currently relying on that network connection.
As of the November 2017 Cumulative Update (KB4048954), the 802.1x behavior of a Surface Hub can be configured via MDM policy, by setting the OMA-URI ./Vendor/MSFT/SurfaceHub/Dot3/LanProfile to an XML file compliant with the LANProfile Schema. If configuring the Hub to authenticate as per the documentation is not desired, 802.1x can also be disabled via MDM policy, which would then set LanProfile equal to:
<?xml version="1.0" encoding="UTF-8"?>
<LANProfile xmlns="http://www.microsoft.com/networking/LAN/profile/v1">
<MSM>
<security>
<OneXEnforced>false</OneXEnforced>
<OneXEnabled>false</OneXEnabled>
</security>
</MSM>
</LANProfile>
The policy status in your MDM solution may report deployment errors, but the disabled 802.1x state can be verified from the Surface Hub's event logs and from observing the network behavior.
How To: Move your Azure Data Warehouse to a new region and/or subscription
This post applies to Azure SQL Data Warehouse
The functionality currently does not exist to move your Azure SQL Data Warehouse instance to a subscription other than the one it was created under. What you can do is move the server that contains the instance to another subscription or resource group. Moving a server will move all SQL databases owned by that server to the destination subscription. The only limitation is that both subscriptions must exist within the same Azure Active Directory tenant. The link below states how to determine which tenant a resource belongs to.
The process will be slightly different depending on whether you are simply moving an existing DW instance to a different subscription, creating a copy, or moving across regions; a hedged PowerShell sketch of the restore-and-move path follows the list below.
Process
- To move an existing server and Data Warehouse to a new subscription without changing region simply follow the instructions here
- To create a copy of an existing Data Warehouse, or to move the Data Warehouse to a new region, you will need to add a geo-restore of the instance:
- Restore the Data Warehouse to a new server in the same subscription in the desired region
- Move the new server owning the Data Warehouse to the desired subscription
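Under the assumption that the AzureRM PowerShell module is available and you are already signed in, the restore-and-move path might look roughly like the sketch below; the server, database, and resource group names are placeholders, and the parameter names should be verified against your installed module version:

# 1. Geo-restore the Data Warehouse to a new server in the desired region.
$geoBackup = Get-AzureRmSqlDatabaseGeoBackup -ResourceGroupName 'SourceRG' `
    -ServerName 'source-dw-server' -DatabaseName 'MyDataWarehouse'

Restore-AzureRmSqlDatabase -FromGeoBackup -ResourceGroupName 'TargetRG' `
    -ServerName 'target-dw-server' -TargetDatabaseName 'MyDataWarehouse' `
    -ResourceId $geoBackup.ResourceId -Edition 'DataWarehouse' `
    -ServiceObjectiveName 'DW400'

# 2. Move the new server (and every database it owns) to the destination subscription.
$server = Get-AzureRmSqlServer -ResourceGroupName 'TargetRG' -ServerName 'target-dw-server'

Move-AzureRmResource -ResourceId $server.ResourceId `
    -DestinationSubscriptionId '<destination-subscription-id>' `
    -DestinationResourceGroupName 'TargetRG'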
If you have any issues with this approach feel free to contact us @ support any time!
Cognitive Services LUIS & Azure Bot Service GA!
Cognitive Services Language Understanding Intelligent Service (LUIS), an API for building and querying natural language analysis models, is now generally available (GA). Azure Bot Service, which lets you build and deploy chatbots on Azure and connect them to multiple communication channels, has also been announced as generally available.
Microsoft Azure Blog > Announcing the General Availability of Azure Bot Service and Language Understanding, enabling developers to build better conversational bots
Azure Team Blog > Conversational Bots Deep Dive – What’s new with the General Availability of Azure Bot Service and Language Understanding
Cognitive Services LUIS : Dec 2017 Update
GA & Price Plan
- F0 (Free Tier)
  - up to 10,000 transactions/month
- S1 (Standard Tier: paid plan)
  - about 150 JPY per 1,000 transactions (pay-as-you-go)
  - text is limited to 500 characters (including double-byte characters) per transaction
You can create F0 and S1 tiers from the Azure Portal. There is no time limit on usage (even for the free F0 tier).
New Portal
LUIS has two roles: creating a LUIS App (an analysis model) tailored to your domain, and analyzing sentences via a web API. A GUI (the LUIS Portal) is provided to create, teach, and train a LUIS App, and it has been renewed for GA.
There is also the LUIS Programmatic API for configuring LUIS Apps. It lets you create a new LUIS App, teach it, and train it without using the LUIS Portal, which makes it possible to build custom management tools or to automate your own re-teaching and re-training workflows.
Cognitive Services LUIS Programmatic API Doc
Intent and Entity Limits Raised
The upper limits on Intents and Entities have been raised considerably compared with the preview.
- Intent : 80 → 500
- Entity : 30 → 100
New Regions
Seven new regions have been added to the original five in which a LUIS App can be deployed and accessed as an endpoint (web API URL).
- America
- West US, East US2, West Central US
- +South Central US, East US, West US 2, Brazil South
- Europe
- West Europe
- +North Europe
- Asia Pacific
- Southeast Asia
- +East Asia, Australia East
Technical Update: Multilingual Support
The features below, previously available only in English, now support additional languages.
- Prebuilt Entities
  - add presets of frequently used entities (datetime, geography, etc.)
  - newly supported languages: French, Spanish, Portuguese
- Prebuilt Domains
  - add presets of frequently used domains (sets of intents and entities)
  - newly supported language: Chinese
- Phrase List Suggestions
  - phrase lists of terms to be recognized as an entity can be attached; when creating or editing a list, likely additional terms are suggested
  - newly supported languages: Chinese, Spanish, Japanese, French, Portuguese, German, Italian
Azure Bot Service : Dec 2017 Update
GA & New Price Plan
With GA, an SLA has been established and Azure Support (*) is available.
- F0 (Free Tier)
  - DirectLine & WebChat: up to 10,000 messages/month
  - (no limit for other channels)
- S1 (Standard Tier: paid plan)
  - DirectLine & WebChat: about 50 JPY per 1,000 messages per month
  - Azure Bot Service is deployed on an Azure Web App or Azure Function, which is billed separately (plus Storage, Application Insights, and other services); free plans can be used for those.
As before, F0 and S1 can be created and managed from the Azure Portal (there is no time limit on usage).
(*) 99.9% SLA on DirectLine and WebChat; Azure Support incidents can be opened.
Deployed region
- America
- East US, West US, Brazil South
- Europe
- North Europe, West Europe
- Asia Pacific
- East Asia, Southeast Asia, Australia East, Australia Southeast
Enhancements/Changes as Bot Framework
- The Bot Framework State service is being retired.
  - It was the memory storage used to keep conversation state.
  - You can now add your own control and your own conversation state storage.
- Registration with the Bot Directory/Connector is now limited to Azure Bot Service.
  - This is probably the most impactful change for existing users.
- Because Cortana Skills are being integrated into the Knowledge Graph Exchange, registration with the Knowledge Graph Exchange is required to use the Cortana channel.
  - Details about the Knowledge Graph Exchange: https://aka.ms/CortanaSkillsDocs, https://aka.ms/CortanaSkillsBotConnectedAccount
Search failures for accounts hosted in the US region – 12/14 – Mitigating
Update: Friday, December 15th 2017 00:10 UTC
The indexing process that delivers search results for new accounts continues to work at a faster pace. At this moment, accounts that installed the Search extension on or after 17:00 UTC on 12/13 will still notice the error message. We expect the indexing operation to complete in the next few hours. The optimizations we put in place are helping expedite this process.
- Next Update: Before Friday, December 15th 2017 06:30 UTC
Sincerely,
Sri Harsha
Update: Thursday, December 14th 2017 21:14 UTC
After an initial set of mitigations was applied, we noticed that our search service was able to start indexing the accounts. The issue has been fixed for ~30 accounts, and search indexing is catching up for the remaining ~75 accounts. We are also taking additional steps to expedite mitigation and to make sure no new accounts run into the same issue.
- Next Update: Before Friday, December 15th 2017 00:30 UTC
Sincerely,
Sri Harsha
Update: Thursday, December 14th 2017 19:01 UTC
We continue to investigate this incident. Our telemetry shows ~100 accounts are currently impacted. This problem started on 12/12 at 11:00 UTC and is limited to accounts that installed or re-installed the Search extension after that time. Accounts that have been using the extension since before that time should not see any issues.
Users in the impacted accounts might notice this error: We are not able to show results because one or more projects in your account are still being indexed
Various mitigation avenues are being explored at the moment, and we will provide an update as soon as possible.
- Next Update: Before Thursday, December 14th 2017 21:15 UTC
Sincerely,
Sri Harsha
Initial Update: Thursday, December 14th 2017 18:18 UTC
We're investigating Search failures for accounts that are hosted in US region.
This impacts most VSTS accounts located in any US region that have installed the Search extension for the first time.
- Next Update: Before Thursday, December 14th 2017 18:55 UTC
Sincerely,
Manohar
Azure Update – November 2017
Azure Notification Hubs .NET SDK now compatible with .NET Standard 2.0
Clearer choice of CLI: Azure CLI 2.0 for Resource Manager
Public preview: Azure Automation watcher tasks
GUID migration: Security and Audit solution for Azure Government
Azure DevTest Labs: Post customized announcements to your lab
Manage payment methods in the Azure portal
General availability: Visual Studio App Center
VSTS available only in the Azure portal from November 30
Storage Service Encryption for Azure Backup data at rest
General availability: Bash in Azure Cloud Shell
Name changes: Azure Queue storage
Azure Service Health preview: New health alert creation and management
General availability: Azure Reserved VM Instances
Name changes: Azure Batch Rendering V-Ray
Name changes: Visual Studio Mobile Center
Static resource classes are now supported in Azure SQL Data Warehouse
Azure Advisor: New dashboard, downloadable reports, and configuration
Retiring Virtual Machines and Azure Cloud Services from the classic portal
Azure Media Services October 2017 updates
Disassemble PowerShell Cmdlets
Ever wondered how the Microsoft guys code their PowerShell cmdlets? Ever wanted to take a peek at the implementation of a PowerShell command? "Then take the red pill, stay in wonderland and I show you how deep the rabbit hole goes. Remember: all I'm offering is the truth. Nothing more."
Everything starts with the Get-Command cmdlet. Using Get-Command you can figure out a lot of properties of the command you want to look behind. The most important thing we need to know about a command is its type, because the way to get to the script code is different for each command type. There are a lot of command types, but most PowerShell commands are of type "Cmdlet" or "Function". Another command type is "Alias", but that is just a reference to a "Cmdlet" or "Function". So let's focus on these two for now.
For this walkthrough I picked the "GC" (Get-Content) command as an example of a cmdlet and "Get-WindowsUpdateLog" as an example of a function.
The Cmdlet Inside a .NET Binary
Let's start to x-ray "Get-Content" by opening a PowerShell window (it doesn't need to be elevated) and typing the following command:
Get-Command gc | fl *
Using the short form "GC" with Get-Command, like we just did, shows "Alias" as CommandType property. To figure out the full name of the command, take a look at the "ResolvedCommandName" property and repeat Get-Command with its value. Using the long form shows the real command type, in this case a CMDLet.
Get-Command Get-Content | fl *
The DLL property of the output indicates where the implementation of this cmdlet can be found. Since this is a binary (.DLL), we need a disassembler to continue. There are a lot of tools to disassemble a .NET binary; I use ILSpy. It's fast and can be run as a portable app (no installation necessary).
So let's open the file from the DLL property (Microsoft.PowerShell.Commands.Management.dll) in ILSpy. Most of the time the command implementation is found in the "Microsoft.PowerShell.Commands" namespace.
From here on a little knowledge of .NET programming is necessary, but as a rough indication we can search for methods like "ProcessRecord" or "BeginProcessing" to get a starting point.
The Function way
Things are different when we need to get the source code for a command implemented as a function. As mentioned at the beginning, we use "Get-WindowsUpdateLog" as the example here. So let's have a look at its members.
Get-Command Get-WindowsUpdateLog | fl *
The source code of the function itself can be found in the "Definition" property, so it can easily be displayed using the command:
Get-Command Get-WindowsUpdateLog | select -ExpandProperty Definition
However, in many cases, the function is using code parts or other functions from its parent module. To view the complete source code, we need to open the whole module in a text editor.
First, let's find the parent module:
Get-Command Get-WindowsUpdateLog | select -ExpandProperty ModuleName
It's part of the "WindowsUpdate" module. But where to find it? Well, that's easy:
Get-Module -Name WindowsUpdate | select -ExpandProperty Path
This displays the path to the module manifest or module definition file (*.psd1). Let's open the PSD1 file in a text editor.
Find the line "ModuleList" or "NestedModules". In both of them you can figure out the path to the module files (*.psm1). The module files normally reside in the same directory. One of those files contains the implementation of the function, found in the definition property of the Get-Command output. In our case it's easy because this psd1 only contains one module. So let's go and open the PSM1 file in our favorite text editor.
Here we have the complete implementation of our function.
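If you prefer to locate those entries and files from the console instead of a text editor, a quick sketch (reusing the module path retrieved above) is:

# Path to the manifest obtained earlier.
$psd1 = Get-Module -Name WindowsUpdate | Select-Object -ExpandProperty Path

# Show the ModuleList / NestedModules lines that point to the *.psm1 files.
Select-String -Path $psd1 -Pattern 'ModuleList|NestedModules'

# List the module files that live next to the manifest.
Get-ChildItem -Path (Split-Path -Path $psd1) -Filter *.psm1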
You may have noticed that I used terms like "normally" and "most of the time" quite often in this tutorial. This is because the implementation of cmdlets is not fully standardized, and finding your way back to the source of other commands could differ slightly from the steps mentioned above. Anyway, it's a good starting point.
Decode PowerShell Command from Running Processes
There are times when I've found a PowerShell process running that is taking up a bunch of resources. Sometimes it's even been one of my own scripts running in the context of a SQL Server Agent job with a PowerShell job step. Because I can have multiple PowerShell job steps running at a time, it's sometimes difficult to know which job is the offender. The following method can be used to decode a script block that a PowerShell process is currently running.
Create a Long Running Process
To demonstrate this capability, create the following Agent Job. It executes a PowerShell job step that outputs a number every minute for 10 minutes.
USE [msdb]
GO
DECLARE @jobId BINARY(16)
EXEC msdb.dbo.sp_add_job @job_name=N'PowerShell Job',
    @enabled=1,
    @notify_level_eventlog=0,
    @notify_level_email=2,
    @notify_level_page=2,
    @delete_level=0,
    @category_name=N'[Uncategorized (Local)]',
    @owner_login_name=N'sa'
GO
EXEC msdb.dbo.sp_add_jobserver @job_name=N'PowerShell Job', @server_name = @@SERVERNAME
GO
EXEC msdb.dbo.sp_add_jobstep @job_name=N'PowerShell Job',
    @step_name=N'PowerShell',
    @step_id=1,
    @cmdexec_success_code=0,
    @on_success_action=1,
    @on_fail_action=2,
    @retry_attempts=0,
    @retry_interval=0,
    @os_run_priority=0,
    @subsystem=N'PowerShell',
    @command=N'
powershell.exe -Command {
    $i = 1
    while ( $i -le 10 )
    {
        Write-Output -InputObject $i
        Start-Sleep -Seconds 60
        $i++
    }
}
',
    @database_name=N'master',
    @flags=0
GO
EXEC msdb.dbo.sp_update_job @job_name=N'PowerShell Job',
    @enabled=1,
    @start_step_id=1,
    @notify_level_eventlog=0,
    @notify_level_email=2,
    @notify_level_page=2,
    @delete_level=0,
    @description=N'',
    @category_name=N'[Uncategorized (Local)]',
    @owner_login_name=N'sa',
    @notify_email_operator_name=N'',
    @notify_page_operator_name=N''
GO
Execute the Agent Job that was just created.
USE msdb;
GO
EXEC dbo.sp_start_job N'PowerShell Job';
GO
View the Process
Using Task Manager
- Launch the Task Manager and select the Details tab.
- Scroll down the list to powershell.exe.
- Right-click the column headers in Task Manager and click Select Columns.
- Check Command Line in the Select columns dialogue and click OK.
- Now the parameters that were passed into powershell.exe are visible. However, the command is still obfuscated as an encoded command.
Using PowerShell
- Start PowerShell as Administrator. It is vital that PowerShell is running as administrator; otherwise, no results will be returned when querying the running processes.
- Execute the following command to obtain all of the PowerShell processes that have an encoded command:
$powerShellProcesses = Get-CimInstance -ClassName Win32_Process -Filter 'CommandLine LIKE "%EncodedCommand%"'
- The following command creates a custom PowerShell object that contains the process ID and the encoded command.
$commandDetails = $powerShellProcesses | Select-Object -Property ProcessId,
    @{
        name       = 'EncodedCommand'
        expression = {
            if ( $_.CommandLine -match 'encodedCommand (.*) -inputFormat' )
            {
                return $matches[1]
            }
        }
    }
- Now the encoded command can be decoded. The following snippet iterates over the command details object, decodes the encoded command, and adds the decoded command back to the object for further investigation.
$commandDetails | ForEach-Object -Process {
    $currentProcess = $_
    $decodedCommand = [System.Text.Encoding]::Unicode.GetString(
        [System.Convert]::FromBase64String($currentProcess.EncodedCommand))

    # Attach the decoded command to the entry for the current process.
    $commandDetails |
        Where-Object -FilterScript { $_.ProcessId -eq $currentProcess.ProcessId } |
        Add-Member -MemberType NoteProperty -Name DecodedCommand -Value $decodedCommand
}
- The decoded command can now be reviewed by selecting the decoded command property.
$commandDetails[0].DecodedCommand
There it is! Just another neat trick to add to the toolbox.
About: Office 365 Management Activity API
The Office 365 Management Activity API pulls information on Exchange and non-Exchange activity across Office 365. From MSDN: The Office 365 Management Activity API provides information about various user, admin, system, and policy actions and events from Office 365 and Azure Active Directory activity logs. Customers and partners can use this information to create new or enhance existing operations, security, and compliance-monitoring solutions for the enterprise.
Documentation:
Office 365 Management APIs Overview
https://msdn.microsoft.com/en-us/office-365/office-365-managment-apis-overview
Get started with Office 365 Management APIs
https://msdn.microsoft.com/en-us/office-365/get-started-with-office-365-management-apis
Office 365 Management Activity API reference
https://msdn.microsoft.com/en-us/office-365/office-365-management-activity-api-reference
Office 365 Management Activity API schema
https://msdn.microsoft.com/en-us/office-365/office-365-management-activity-api-schema
Set up your Office 365 development environment
https://msdn.microsoft.com/office/office365/howto/setup-development-environment
Samples:
Office365APIEditor
https://github.com/Microsoft/Office365APIEditor
O365-InvestigationTooling
https://github.com/OfficeDev/O365-InvestigationTooling
To pull the reports from the UI:
Search the audit log in the Office 365 Security & Compliance Center
https://support.office.com/en-us/article/Search-the-audit-log-in-the-Office-365-Security-Compliance-Center-0d4d0f35-390b-4518-800e-0c7ec95e946c?ui=en-US&rs=en-US&ad=US
Additional references:
Office 365 API reference
https://msdn.microsoft.com/office/office365/api/api-catalog
Application Insights – Advisory 12/14
As part of planned migration work, we are adding multiple new agents to the pool of VMs used for the Application Insights Availability Monitoring feature. Customers who whitelist IPs for availability tests will need to add the following new IPs to their firewall rules:
FR : Paris
52.143.140.242
52.143.140.246
52.143.140.247
52.143.140.249
CH : Zurich
52.136.140.221
52.136.140.222
52.136.140.223
52.136.140.226
RU : Moscow
51.140.79.229
51.140.84.172
51.140.87.211
51.140.105.74
SE : Stockholm
51.141.25.219
51.141.32.101
51.141.35.167
51.141.54.177
US : FL-Miami
52.165.130.58
52.173.142.229
52.173.147.190
52.173.17.41
52.173.204.247
52.173.244.190
52.173.36.222
52.176.1.226
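If the monitored endpoints happen to sit behind Windows Firewall, a sketch like the one below could whitelist the new agents (the rule name is arbitrary, only the first two regions' addresses are shown, and most environments will instead update rules on their network appliance or cloud firewall):

# Hypothetical example for a web server listening on TCP 443 behind Windows Firewall.
$newAgentIPs = @(
    '52.143.140.242','52.143.140.246','52.143.140.247','52.143.140.249',  # FR : Paris
    '52.136.140.221','52.136.140.222','52.136.140.223','52.136.140.226'   # CH : Zurich
    # ...add the remaining IPs listed above
)

New-NetFirewallRule -DisplayName 'Application Insights availability agents' `
    -Direction Inbound -Action Allow -Protocol TCP -LocalPort 443 `
    -RemoteAddress $newAgentIPs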
Please refer to the following documentation for any additional details/information regarding Application Insights public endpoints: https://docs.microsoft.com/en-us/azure/application-insights/app-insights-ip-addresses
-Deepesh
About: Exchange Reporting Services
Below is information on accessing the Office 365 Reporting web service. Note that you will need to do a GET against a properly formatted URL with an admin account that can pull the reports. If there are issues, you can check the report by running it in ECP and also by pulling the report with Excel. Be sure to read the documentation on the options for the report parameters. Code-wise, your code will need to handle reading the data in a chunked format. To get back an XML response, append "format=Atom" to the URL; for a JSON response, use "format=Json". Basic authentication can be used against Exchange Online for these reports. If you have any issues with the API, test whether the issue reproduces against Office 365 outside of your code – if it does, then the issue is not with your code. You can do a test run of these reports by using the EWS POST window in EWSEditor, setting the verb to GET, using Basic authentication, and setting the URL to what is needed to pull the report – examples are below. Please note that I've included most of the information in this blog post in a sample in EWSEditor's EWS POST window under the name "Office365ReportingServices.xml".
Exchange reports:
Office 365 Reporting web service
https://msdn.microsoft.com/en-us/library/office/jj984325.aspx
Exchange reports available in Office 365 Reporting web service
https://msdn.microsoft.com/en-us/library/office/jj984342.aspx
Office 365 Admin portal
https://portal.office.com/AdminPortal/Home#/homepage
Office 365 Reporting web service and Windows PowerShell cmdlets
https://msdn.microsoft.com/en-us/library/office/jj984326.aspx
Example report URLs:
List reports - see the possible reports to run
https://reports.office365.com/ecp/reportingwebservice/reporting.svc
General weekly report:
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/CsActiveUserWeekly
Monthly activity:
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/CsActiveUserMonthly
Mailbox usage:
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MailboxUsage
Mailbox daily activity report – top 20:
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MailboxActivityDaily?$select=Date,TotalNumberOfActiveMailboxes&$top=20&$orderby=Date%20desc&$format=Atom
Connections by client type:
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/ConnectionbyClientTypeDetailDaily
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/ConnectionbyClientTypeDetailWeekly
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/ConnectionbyClientTypeDetailYearly
Daily connection report
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/ConnectionbyClientTypeDetailDaily?$select=Date,ClientType,Count,Date,UserName,WindowsLiveID&$format=Atom
Daily connection report during a time range:
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/ConnectionbyClientTypeDetailDaily?$select=ClientType,Date,Count,UserName,WindowsLiveID&$filter=Date%20ge%20datetime'2017-01-01T00:00:00'%20and%20Date%20le%20datetime'2017-01-14T00:00:00'&$orderby=ClientType,Date&$format=Atom
Daily connection report for a specific mailbox during a time range:
https://reports.office365.com/ecp/reportingwebservice/reporting.svc/ConnectionbyClientTypeDetailDaily?$select=ClientType,Date,Count,UserName,WindowsLiveID&$filter=Date%20ge%20datetime'2017-01-01T00:00:00'%20and%20Date%20le%20datetime'2017-01-14T00:00:00'%20and%20WindowsLiveID %20eq%20'myuser@contoso.onmicrosoft.com'&$orderby=ClientType,Date&$format=Atom
Below is a partial sample for pulling a report with a GET. Note that the code for pulling the reports needs to handle a chunked response. For a full sample look at the HttpHelper.cs file in EwsEditor's code.
if (sVerb == "GET")
{
byte[] bData = new byte[1028];
string sData = string.Empty;
StringBuilder sbFullData = new StringBuilder();
oHttpWebResponse = (HttpWebResponse)oHttpWebRequest.GetResponse();
Stream oStream = oHttpWebResponse.GetResponseStream();
int bytesRead = 0;
//while ((bytesRead = await result.Result.ReadAsync(data, 0, data.Length)) > 0)
while ((bytesRead = oStream.Read(bData, 0, bData.Length)) > 0)
{
sData = System.Text.Encoding.UTF8.GetString(bData, 0, bytesRead);
sbFullData.Append(sData);
}
oStream.Close();
sResult = sbFullData.ToString();
}
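If you prefer to test from PowerShell rather than C#, a minimal sketch along the same lines (Basic authentication, GET, Atom format) could look like the following; the credentials and report URL are placeholders:

# Placeholder admin credentials and report URL.
$user      = 'admin@contoso.onmicrosoft.com'
$password  = 'REDACTED'
$reportUrl = 'https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MailboxUsage?$format=Atom'

# Build a Basic authentication header by hand.
$pair    = '{0}:{1}' -f $user, $password
$token   = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($pair))
$headers = @{ Authorization = "Basic $token" }

# Invoke-WebRequest handles the chunked transfer encoding for us.
$response = Invoke-WebRequest -Uri $reportUrl -Headers $headers -Method Get
$response.Content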
Bing Maps Launches new Fleet Management APIs
The Bing Maps team just announced three new fleet management APIs: Truck Routing, Isochrones, and Snap to Road. This is in addition to the Distance Matrix API they released in October. Read the full announcement on the Bing Maps blog here: https://blogs.bing.com/maps/2017-12/bing-maps-launches-three-new-fleet-management-apis
About Installers for Earlier Versions of Visual Studio 2017
Hello, this is the Visual Studio support team.
Visual Studio 2017 version 15.5 was released recently! (as of December 2017)
We recommend using the latest version, which includes bug fixes and security improvements, and product support is in principle provided for the latest version.
However, we also receive inquiries from customers who, for various reasons, want to obtain an earlier version. In response to such requests, the release immediately preceding the latest one is made available on the following page:
Installing an earlier release of Visual Studio 2017
https://www.visualstudio.com/en-us/productinfo/installing-an-earlier-release-of-vs2017
* Note that the published version changes whenever a new version is released.
If you want to keep the installation package for a specific version, some care is required!
The Visual Studio 2017 installer downloads the latest packages from the internet at install time. This means that even if you have saved an installer for an earlier version, running it will fetch the latest packages.
To prevent this, you need to create an offline package of the version in question rather than saving only the installer. If you want to keep a specific version of the Visual Studio 2017 installer, please create an offline installer in advance using the steps below.
Creating an offline installer for a specific version
The steps for creating an offline installer are also described on the following page:
Create a network installation of Visual Studio 2017
https://docs.microsoft.com/ja-jp/visualstudio/install/create-a-network-installation-of-visual-studio#how-to-create-a-layout-for-a-previous-visual-studio-2017-release
* See the section "How to create a layout for a previous Visual Studio 2017 release".
Steps
1) Obtain the setup file for the desired version (e.g. vs_professional.exe).
2) Create a folder to store the installation image and download the image with the following command.
Example: extracting Visual Studio Professional to c:\vs2017offline
vs_professional.exe --layout c:\vs2017offline
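If you only need particular workloads or languages in the layout, the bootstrapper also accepts --add and --lang switches as described in the linked documentation; the workload IDs below are just examples:

vs_professional.exe --layout c:\vs2017offline --lang ja-JP ^
    --add Microsoft.VisualStudio.Workload.ManagedDesktop ^
    --add Microsoft.VisualStudio.Workload.NetWeb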
Save the resulting installer, for example by appending the version name to the folder.
Preparation before installing
If the target environment is offline and cannot download the certificates required to install Visual Studio 2017, apply them beforehand using the following steps.
Special considerations for installing Visual Studio in an offline environment
https://docs.microsoft.com/ja-jp/visualstudio/install/install-visual-studio-in-offline-environment
* See the section "Install certificates required for Visual Studio offline installation".
There are three options for installing or updating the certificates; for individual customers, option 1 is the simplest.
Option 1 - Install the certificates manually from the layout folder
Option 2 - Distribute trusted root certificates in an enterprise environment
Option 3 - Install certificates as part of a scripted deployment of Visual Studio
We hope this post helps developers who need to keep the installer for a specific version of Visual Studio 2017.