
Servicing Update for ODBC Driver 13.1 for Linux and macOS Released


Hi all, we are delighted to share the servicing update of the Microsoft ODBC Driver 13.1 for Linux and macOS. The driver enables access to SQL Server, Azure SQL Database, and Azure SQL Data Warehouse from any C/C++ application on Linux or macOS.

Added

  • Ubuntu 14.04, 17.04, Debian Jessie, and SUSE Enterprise Linux 11 SP4 support
    • Use apt-get (Ubuntu/Debian) or zypper (SUSE) to install the driver
  • Azure Active Directory authentication support (username/password) for Linux and macOS
    • Get started with Azure AD by checking out the docs here
  • Configurable driver location on Linux
    • Simply copy the whole driver install directory to the location of your choice after normal installation (see the sketch after this list).
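A rough sketch of what that relocation could look like is below; the default install path, target path, and library file name are assumptions based on a typical 13.1 installation, so adjust them to your system.

# Copy the driver directory to a custom location (paths are assumptions)
sudo cp -r /opt/microsoft/msodbcsql /usr/local/msodbcsql

# Point the registered driver entry in /etc/odbcinst.ini at the new location, e.g.:
#   [ODBC Driver 13 for SQL Server]
#   Driver=/usr/local/msodbcsql/lib64/libmsodbcsql-13.1.so.<version>
sudo sed -i 's|/opt/microsoft/msodbcsql|/usr/local/msodbcsql|g' /etc/odbcinst.ini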

Fixed

  • Issue with floating-point numbers being formatted incorrectly in the French locale
  • Issue with SQLBindParameter input/output parameters truncating prematurely
  • Implicit cursor conversion due to comments
  • Buffer overrun upon UTF-8 character conversion
  • Symbolic link creation for the RPM package

Install the ODBC Driver for Linux on Debian Jessie 8

[snippet slug=odbc-driver-debian-jessie-servicing-release lang=bsh]

Install the ODBC Driver for Linux on Ubuntu 14.04

[snippet slug=odbc-driver-14-04-13-1-servicing-release lang=bsh]

Install the ODBC Driver for Linux on Ubuntu 16.04

[snippet slug=odbc-driver-16-04-13-1-servicing-release lang=bsh]
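As a representative example of what these snippets cover (not the exact snippet contents), installing on Ubuntu 16.04 boils down to registering the Microsoft package repository and installing the msodbcsql package; the repository and package names below are the publicly documented ones, but verify them against the official instructions.

# Register the Microsoft package repository, then install the driver
sudo su
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
exit
sudo apt-get update
# The EULA must be accepted explicitly for the driver package
sudo ACCEPT_EULA=Y apt-get install msodbcsql
# Optional: unixODBC development headers, needed to build C/C++ applications
sudo apt-get install unixodbc-dev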

Install the ODBC Driver for Linux on Ubuntu 16.10

[snippet slug=odbc-driver-16-10-13-1-servicing-release lang=bsh]

Install the ODBC Driver for Linux on Ubuntu 17.04

[snippet slug=odbc-driver-17-04-13-1-servicing-release lang=bsh]

Install the ODBC Driver for Linux on RedHat 6

[snippet slug=odbc-driver-rhel-6-13-1-servicing-update lang=bsh]

Install the ODBC Driver for Linux on RedHat 7

[snippet slug=odbc-driver-rhel-7-13-1-servicing-release lang=bsh]

Install the ODBC Driver for SLES 11

[snippet slug=odbc-driver-suse11-13-1-servicing-release lang=bsh]

Install the ODBC Driver for SLES 12

[snippet slug=odbc-driver-suse12-13-1-servicing-release lang=bsh]

Try our Sample

Once you have installed the driver on a supported Linux distro, you can use this C sample to connect to SQL Server, Azure SQL DB, or Azure SQL DW. To download the sample and get started, follow these steps:

[snippet slug=odbc-c-sample lang=bsh]
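If you just want to see what a connection looks like before grabbing the sample, here is a minimal, illustrative C sketch; the server, database, and credentials are placeholders, and the driver name assumes the default registration of ODBC Driver 13 (build with, for example, gcc test.c -o test -lodbc).

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env = SQL_NULL_HENV;
    SQLHDBC dbc = SQL_NULL_HDBC;
    SQLRETURN ret;

    /* Connection string: server, database, user, and password are placeholders. */
    SQLCHAR connStr[] = "Driver={ODBC Driver 13 for SQL Server};"
                        "Server=tcp:yourserver.database.windows.net,1433;"
                        "Database=yourdb;Uid=youruser;Pwd=yourpassword;";

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    ret = SQLDriverConnect(dbc, NULL, connStr, SQL_NTS, NULL, 0, NULL,
                           SQL_DRIVER_NOPROMPT);
    if (SQL_SUCCEEDED(ret))
    {
        printf("Connected.\n");
        SQLDisconnect(dbc);
    }
    else
    {
        printf("Connection failed.\n");
    }

    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}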

If you installed the driver using the manual instructions found here, you will have to manually uninstall the ODBC Driver and the unixODBC Driver Manager to use the deb/rpm packages. If you have any questions on how to manually uninstall, feel free to leave a comment below. 

Please file bugs/questions/issues on our Issues page. We welcome contributions/questions/issues of any kind. Happy programming!

Meet Bhagdev (meetb@microsoft.com)



Release Candidate for Microsoft Drivers v4.3.0 for PHP for SQL Server Released!


Hi all,

We are excited to announce the Release Candidate for SQLSRV and PDO_SQLSRV drivers. The drivers now support Debian Jessie. In addition, starting with this release, we support sql_variant type with limitations.

Notable items about 4.3.0-preview release:

Added

  • Transparent Network IP Resolution (TNIR) feature.

Fixed

  • Fixed a memory leak in closing connection resources.
  • Fixed load ordering issue in MacOS (issue #417)

Limitation

  • No support for input / output params when using sql_variant type

Known Issues

  • User defined data types
  • When pooling is enabled on Linux or macOS:
    • unixODBC <= 2.3.4 (Linux and macOS) might not return proper diagnostics information, such as error messages, warnings, and informative messages.
    • Due to this unixODBC bug, fetch large data (such as XML or binary) as streams as a workaround (see the sketch after this list, and the examples here).
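To illustrate the streaming workaround mentioned in the known issues above, a minimal SQLSRV sketch might look like this (server, database, credentials, table, and column names are placeholders):

<?php
// Connect; all connection values here are placeholders.
$conn = sqlsrv_connect("tcp:yourserver.database.windows.net,1433", array(
    "Database" => "yourdb",
    "UID"      => "youruser",
    "PWD"      => "yourpassword",
));
if ($conn === false) {
    die(print_r(sqlsrv_errors(), true));
}

// Fetch a large binary/XML column as a stream instead of one big value.
$stmt = sqlsrv_query($conn, "SELECT LargeColumn FROM dbo.MyTable WHERE Id = 1");
if ($stmt !== false && sqlsrv_fetch($stmt)) {
    $stream = sqlsrv_get_field($stmt, 0, SQLSRV_PHPTYPE_STREAM(SQLSRV_ENC_BINARY));
    while (!feof($stream)) {
        echo fread($stream, 8192);
    }
}

sqlsrv_free_stmt($stmt);
sqlsrv_close($conn);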

Survey

Let us know how we are doing and how you use our driver by taking our pulse survey: https://www.surveybuilder.com/s/neQnb

Get Started

Getting Drivers for PHP5 and older runtimes

You can download the Microsoft PHP Drivers for SQL Server for PHP 5.4, 5.5 and 5.6 from the download center: https://www.microsoft.com/en-us/download/details.aspx?id=20098. Version 3.0 supports PHP 5.4, version 3.1 supports PHP 5.4 and PHP 5.5 and version 3.2 supports PHP 5.4, 5.5 and 5.6.

PHP Driver Version   Supported PHP Versions
v3.2                 PHP 5.6, 5.5, 5.4
v3.1                 PHP 5.5, 5.4
v3.0                 PHP 5.4

Meet Bhagdev (meetb@microsoft.com)


Embedded Learning Library


There has been little activity on the blog over the last few days; I have been gathering the pieces to implement continuous integration with Arduino and still have a few problems to solve. However, yesterday we made an interesting announcement in the field of Artificial Intelligence that I wanted to share with you!

The Embedded Learning Library (ELL) is a novel technology that opens up a whole world of possibilities: it essentially lets you run artificial intelligence models on embedded platforms. The idea is to build machine-learning models that do not require connectivity and can run on small devices. Want to try it? The following tutorial shows how to run an image recognition model on a Raspberry Pi.

https://github.com/Microsoft/ELL/tree/master/tutorials/vision/gettingStarted

For this tutorial you need a Raspberry Pi, a screen to display the output (a monitor or TV will do), and a webcam. Once everything is configured, the application displays descriptions of whatever the webcam is pointed at, as in the following example:

I tried to work through the tutorial but am having a problem with a dependency; as soon as I resolve it, I will post another update on this interesting library.

Until next time!
–Rp

Azure Network Security Whitepaper/Article Released


One of the biggest challenges we have when learning about a new cloud service provider is trying to figure out what that provider has.

In fact, it’s hard even if you’re already using that cloud service provider!

For example, “what does Azure have that’s related to my network security concerns?”

The answers would come pretty slowly if you had to sift through dozens or hundreds of articles on your own – nobody has time for that!

That’s why we created the new Azure Network Security whitepaper/article.

In this article we discuss:

  • Basic network connectivity options and capabilities
  • Hybrid connectivity and security issues
  • Security controls to consider through Azure networking
  • Network security validation through logging and auditing

We hope you find the Azure Network Security whitepaper/article useful – please let us know if there are areas that need more coverage or if we need to include new topics. We’re living in the age of the agile cloud, so continuous improvements and updates define how we do docs now! Help us, help you!

Thanks!

Tom

Tom Shinder
Program Manager, Azure Security
@tshinder | Facebook | LinkedIn | Email | Web | Bing me! | GOOG me

Guidelines for Tier 1 Dev, Test, and Build environments deployed in Microsoft’s subscription


Customers who purchase a Dynamics 365 for Operations or Dynamics 365 for Finance and Operations, Enterprise Edition subscription can deploy a Tier 1 Dev/Test or Build environment in the LCS Implementation project which is provisioned for their tenant. Though the customer has admin access to the Tier 1 environments, these environments are deployed in Microsoft’s subscription because Dynamics 365 for Finance and Operations is a service that is managed by Microsoft. The following guidelines ensure that the environments remain secure.  

  • On all Tier 1 environments, Automatic Windows Update should be enabled by default and should always remain enabled. This ensures that anytime Microsoft pushes security or critical infrastructure updates to your environment, your environment receives the latest set of patches. 
  • All Tier 1 environments must be patched each month with the OS patches that Microsoft releases.  
  • Changing the admin passwords on these environments is NOT allowed. Environments that have admin passwords changed will be flagged by Microsoft. Microsoft reserves the right to reset the password.  

The security of your environments is our highest priority.  

What can I do in .NET Core that I can’t in the full .NET Framework?


This post is provided by Senior App Dev Managers, Keith Beller and John Abele who ask the question, “What can I do in .NET Core that I can’t do in the full .NET Framework?”


Microsoft’s mission to empower every individual and organization is manifested in .NET Core, the most transformative .NET framework ever. Rebuilt from the ground up, .NET Core is tailor-made for modern development workloads such as cloud apps, microservices and containers. The philosophy behind the framework is also to extend the reach of Microsoft development technologies beyond traditional boundaries and offer an unprecedented level of flexibility and choice to a new generation of developers. Let’s take a quick look at the new possibilities enabled by the .NET Core framework.

Use the tooling you know and love

Microsoft's desire to meet developers where they are means enabling them to use the tooling they know and love on their platform of choice. While Visual Studio is a full-featured IDE, it can take considerable time to download, requires significant resources to run, and isn't fully available to Linux users.

In addition to the release of Visual Studio Code, a lightweight, cross-platform, open-source editor, the .NET Core command line interface (CLI) enables developers to quickly build great .NET apps using any code editor across macOS and Linux platforms.
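For a feel of that CLI workflow, here is a minimal sketch; it assumes the .NET Core SDK of that era is installed and dotnet is on the PATH.

# Create and run a console app entirely from the CLI
mkdir hello && cd hello
dotnet new console   # scaffold a minimal console project
dotnet restore       # restore NuGet packages (implicit in newer SDKs)
dotnet run           # build and run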


Build and Run anywhere

The ability to decide which platform you'd like to run .NET Core on, such as Windows, macOS, or Linux, has many implications for the application development lifecycle.  Scenarios like developing a .NET Core web app on your MacBook using your favorite IDE and deploying to a Docker image from Red Hat's container registry are supported.  With .NET Core you are not locked into a specific operating system.

Side-by-side deployments

The pace of change continues to accelerate, which introduces interesting challenges for development teams managing framework versioning.  The ability to deploy multiple applications targeting different versions of .NET Core on the same machine allows you to make versioning decisions at the application level.

In addition to framework-dependent deployments (FDD), .NET Core also supports self-contained deployments (SCD), which remain completely isolated from other .NET Core applications.   Want to run apps targeting .NET Core versions 1.0 and 2.0 on the same machine?  Sure, you can do that.
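As a quick illustration of the two deployment flavors, the publish commands look roughly like this; the runtime identifier is only an example, and SCD assumes a matching <RuntimeIdentifiers> entry in the .csproj.

# Framework-dependent deployment (FDD): relies on a shared .NET Core runtime on the target
dotnet publish -c Release

# Self-contained deployment (SCD): bundles the runtime for one specific platform
dotnet publish -c Release -r ubuntu.16.04-x64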

Go faster with .NET Core

Among the .NET Core team's stated goals for the framework are high quality, reliability, and compelling performance.  Kestrel, a new managed web server introduced with .NET Core, is by far the fastest available .NET server; according to benchmarks, it is about six times faster than .NET 4.6 and three times faster than Node.js.


Microsoft’s dedication to .NET Core performance was also on display at TechEmpower’s 13th round of web framework benchmarking which recognized the ASP.NET team for the most dramatic performance improvement ever seen, making ASP.NET Core a top performer among web frameworks.

Become a part of the .NET story

On October 3, 2007, Scott Guthrie's team announced they were providing download and browse access to the source code for the .NET Framework libraries.  While the code was not open sourced, the development community was genuinely excited by the news; still, many hoped for more.   Fast forward seven years to November 12, 2014, when the .NET team invited developers to join the conversation by making .NET Core open source.  This event marked a dramatic change in Microsoft's approach to developing its next-generation, cloud-ready, platform-agnostic .NET framework.  Today, .NET Core has drawn a vibrant community of incredibly talented developers who have contributed impactful improvements to the framework.  In fact, 40% of the performance improvements made to .NET Core were provided by the community.

Get started by checking out the source code hosted at github.com, where you will find the project roadmap, contribution details, and documentation. Also, join the ASP.NET community standups, which are streamed live and archived for those unable to attend.  Scott Hanselman, Damian Edwards, Jon Galloway, and many others discuss everything from coding challenges to what's new and upcoming, while frequently spotlighting community involvement.

.NET Core enables developers to create fast, resource efficient apps with choice and flexibility and deploy them anywhere. Getting started with .NET Core takes only minutes, grab your preferred editor and the SDK for your OS from the .NET Core page.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Delighting students with Microsoft Hour of Code and Minecraft


This post is provided by Senior ADM, Cissy Ho, who spotlights an “Hour of Code” to inspire others through technology and shares information on how you can too.


As a part of working in Premier Developer and Microsoft Canada, I am honored to share my technology expertise with enterprise developers, and with even broader communities, including students and young people in school and college settings.

Last November I taught an “Hour of Code” in two schools and coded Minecraft tutorials in 3 classes.

My goal was to bring awareness that "coding can be a part of everything we do."  I related to the students how coding is all about solving problems, working with friends, and using your own creativity!

We used the Microsoft Minecraft tutorial (https://code.org/minecraft).  The tutorial contains block programming techniques to conduct adventures such as coding the Iron Golem to chase out and hit zombies!


At the beginning of the session, a student came and told me that Minecraft is his favorite game.  I told him that it is fun to play Minecraft, but it is even cooler to program your characters and 'code' your own adventures.  His face lit up and he was ready to learn.

I worked with a Grade 9 after-school boys program, a Grade 2 English class, and a Grade 2 French Immersion class.  Despite the age and language differences, the common experience was that students love to learn and collaborate 'organically' to solve challenges.

My hope was to show them that logic programming is not difficult, and to give them experience creating with technologies rather than just being end users of technologies.

For more information about the Hour of Code, please visit https://hourofcode.com. Anyone can participate and give back to the community through a self-organized event or activity. The tutorials are designed for ages 4 to 104.

For more information on Microsoft's participation in the Hour of Code event, see: https://news.microsoft.com/2016/11/15/microsoft-and-code-org-announce-free-minecraft-hour-of-code-tutorial-for-computer-science-education-week-dec-5-11/#SCXlyuIXG50oh1p2.97

On a related topic, here is a great resource for turning S.T.E.M. (Science, Technology, Engineering and Mathematics) passion into action for everyone:  http://makewhatsnext.com/careers/


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

SQL Updates Newsletter – June 2017


Recent Releases and Announcements

 

 

Issue Alert

  • Critical: Do NOT delete files from the Windows Installer folder. C:\Windows\Installer is not a temporary folder and files in it should not be deleted. If you do this on machines on which SQL Server is installed, you may have to rebuild the operating system and reinstall SQL Server.
  • Critical: Please be aware of a critical Microsoft Visual C++ 2013 runtime pre-requisite update that may be required on machines where SQL Server 2016 will be, or has been, installed.
    • https://blogs.msdn.microsoft.com/sqlcat/2016/07/28/installing-sql-server-2016-rtm-you-must-do-this/
    • If KB3164398 or KB3138367 are installed, then no further action is necessary. To check, run the following from a command prompt:
      • powershell get-hotfix KB3164398
      • powershell get-hotfix KB3138367
    • If the version of %SystemRoot%\system32\msvcr120.dll is 12.0.40649.5 or later, then no further action is necessary. To check, run the following from a command prompt:
      • powershell "get-item %systemroot%\system32\msvcr120.dll | select versioninfo | fl"
  • Important: If the Update Cache folder or some patches are removed from this folder, you can no longer uninstall an update to your SQL Server instance and then revert to an earlier update build.
    • In that situation, Add/Remove Programs entries point to non-existent binaries, so the uninstall process does not work. Microsoft therefore strongly encourages you to keep the folder and its contents intact.
    • https://support.microsoft.com/en-us/kb/3196535
  • Important: You must precede all Unicode strings with a prefix N when you deal with Unicode string constants in SQL Server
  • Important: Default auto statistics update threshold change for SQL Server 2016
  • Performance impact of memory grants on data loads into Columnstore tables
    • Problem: We found that only at the beginning of the run, there was contention on memory grants (RESOURCE_SEMAPHORE waits), for a short period of time. After that and later into the process, we could see some latch contention on regular data pages, which we didn’t expect as each thread was supposed to insert into its own row group.
    • Cause: For every bulk insert we first determine whether it can go into a compressed row group directly based on batch size. If it can, we request a memory grant with a timeout of 25 seconds. If we cannot acquire the memory grant in 25 seconds, that bulk insert reverts to the delta store instead of compressed row group.
    • Solution: We created and used a Resource Governor workload group that reduced the grant percent parameter to allow greater concurrency during the data load (a Resource Governor sketch follows this list)
    • https://blogs.msdn.microsoft.com/sqlcat/2017/06/02/performance-impact-of-memory-grants-on-data-loads-into-columnstore-tables/
  • You may see “out of user memory quota” message in errorlog when you use In-Memory OLTP feature
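As a rough illustration of the Resource Governor approach referenced in the memory-grant item above (the group name, login, and grant percentage are assumptions, not the values from the original investigation):

-- Workload group with a reduced per-query memory grant cap, so that more
-- concurrent bulk inserts into the columnstore table can obtain grants.
CREATE WORKLOAD GROUP ColumnstoreLoadGroup
WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 10)
USING "default";
GO

-- Classifier function (created in master) that routes the load login into the group.
CREATE FUNCTION dbo.rgClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE WHEN SUSER_SNAME() = N'bulk_load_login'
                THEN N'ColumnstoreLoadGroup'
                ELSE N'default' END;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO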

 

Recent Blog Posts and Articles

  • Azure Marketplace Test Drive
    • One feature in Azure Marketplace that is especially useful for learning about products is “Test Drive.”
    • Test Drives are ready to go environments that allow you to experience a product for free without needing an Azure subscription. An additional benefit with a Test Drive is that it is pre-provisioned – you don’t have to download, set up or configure the product and can instead spend your time on evaluating the user experience, key features, and benefits of the product.
    • https://azure.microsoft.com/en-us/blog/azure-marketplace-test-drive/
  •  Building an Azure Analysis Services Model on Top of Azure Blob Storage—Part 2 + 3
  •  Use WITH clause in OPENJSON to improve parsing performance
  • Smart Transaction log backup, monitoring and diagnostics with SQL Server 2017
    • The new DMF sys.dm_db_log_stats, released in SQL Server 2017 CTP 2.1, enables DBAs and the SQL Server community to build scripts and solutions that perform smart backups, monitoring, and diagnostics of the transaction log (a sample query follows this list).
    • The log_since_last_log_backup_mb column can be used in your backup script to trigger a transaction log backup when the log generated since the last backup exceeds [an activity] threshold value.
    • We have exposed the VLF monitoring columns total_vlf_count and active_vlfs, allowing you to monitor and alert if the total number of VLFs of the transaction log file exceeds a threshold value.
    • The new log_truncation_holdup_reason column helps you understand the cause of a log truncation holdup.
    • If log truncation doesn't happen and active_vlfs approaches total_vlf_count, it will lead to autogrow, causing total_vlf_count to increase.
    • The log_backup_time column in sys.dm_db_log_stats can be used to determine the last transaction log backup time, and can be used to alert a DBA and trigger a backup in response to the alert. The last log backup time can also be derived from the msdb database, but one advantage of using log_backup_time in sys.dm_db_log_stats is that it also accounts for transaction log backups completed on a secondary replica if the database is configured in an Availability Group.
    • For a long-running transaction in a killed\rollback state, a DBA can look at recovery_vlf_count and log_recovery_size_mb to understand the number of VLFs and the amount of log to recover if the database is restarted.
    • https://blogs.msdn.microsoft.com/sql_server_team/smart-transaction-log-backup-monitoring-and-diagnostics-with-sql-server-2017/
  • What is plan regression in SQL Server?
    • Plan regression happens when SQL Server starts using a sub-optimal plan, which increases CPU time and duration.
    • One way to mitigate this is to recompile the query with OPTION(RECOMPILE) if you find this problem. Do not clear the procedure cache on a production system, because that will affect all queries!
    • Another option would be to use automatic plan choice correction in SQL Server 2017 that will look at the history of plans and force SQL Server to use last known good plan if plan regression is detected.
    • https://blogs.msdn.microsoft.com/sqlserverstorageengine/2017/06/09/what-is-plan-regression-in-sql-server/
  • Columnstore Index: How do I find tables that can benefit from Clustered Columnstore Index
  • Azure SQL databases in logical servers, elastic pools, and managed instances
    • Logical servers enable you to perform administrative tasks across multiple databases – including specifying regions, login information, firewall rules, auditing, threat detection, and failover groups. Databases cannot share resources, and each database has guaranteed and predictable performance defined by its own service tier. Some server-level features such as cross-database querying, linked servers, SQL Agent, Service Broker, or CLR are not supported for Azure SQL databases placed in a logical server.
    • Elastic Pools: Databases that need to share resources (CPU, IO, memory) can be placed in elastic pools instead of a logical server. Databases within an elastic pool cannot have different service tiers because they share the resources assigned to the entire pool.
    • Managed instances (In private preview): In May 2017, the concept of a managed instance was announced. With a managed instance, features like SQL CLR, SQL Server Agent, and cross-database querying will be fully supported. Furthermore, a managed instance will have the current capabilities of managed databases, including automatic backups, built-in high-availability, and continuous improvement and release of features in the Microsoft cloud-first development model.
    • You may sign-up for the limited preview here: https://sqldatabase-migrationpreview.azurewebsites.net/
    • https://azure.microsoft.com/en-us/blog/new-options-to-modernize-your-application-with-azure-sql-database/
    • https://blogs.msdn.microsoft.com/sqlserverstorageengine/2017/06/13/azure-sql-databases-in-logical-servers-elastic-pools-and-managed-instances/
  • Indirect Checkpoint and tempdb – the good, the bad and the non-yielding scheduler
    • In SQL Server 2016, indirect checkpoint is ON by default with target_recovery_time set to 60 seconds for model database.
    • With indirect checkpoint, the engine maintains partitioned dirty page lists per database to track the number of dirty pages in the buffer pool for that database from each transaction.
    • DIRTY_PAGE_POLL is a new system waittype introduced to support timed wait of recovery writer thread. If you see it as one of the high wait events in sys.dm_os_wait_stats, you can safely ignore it.
    • One of the scenarios where skewed distribution of dirty pages in the DPList is common is tempdb.
    • If the recovery writer starts falling behind, resulting in long DPLists, the individual worker threads running on the scheduler start playing the role of the recovery writer.
    • In scenarios where the DPList has grown very long, the recovery writer may produce a non-yielding scheduler dump…If you are running SQL Server 2016, we recommend monitoring the output of the sys.dm_os_spinlock_stats DMV for the DP_LIST spinlock to establish a baseline and detect spinlock contention on the DPList.
    • For tempdb, …. indirect checkpoint is still important to smooth out the IO burst activity from automatic checkpoint and to ensure the dirty pages in tempdb do not continue to take away buffer pool pages from the user database workload. In this scenario, we recommend tuning target_recovery_time to a higher value (2-5 minutes) so the recovery writer is less aggressive, striking a balance between large IO bursts and DPList spinlock contention.
    • https://blogs.msdn.microsoft.com/sql_server_team/indirect-checkpoint-and-tempdb-the-good-the-bad-and-the-non-yielding-scheduler/
  • SQL Server: Large RAM and DB Checkpointing
    • The legacy implementation of database checkpointing, which we’ll call FlushCache, needs to scan the entirety of SQL Server’s Buffer Pool for the checkpoint of any given Database.
    • The new Indirect Checkpoint option relies […] on Dirty Page Managers (DPM), and […] doesn’t rely on the Buffer Pool scan anymore.
    • SQL Server 2016 will by default create new databases with Indirect Checkpoint enabled, and therefore uses the DPM approach [for checkpoints]. [However] there are other places in SQL Server 2016, such as Restore Log which still rely on the FlushCache implementation and will still hit the scan delay on large RAM servers.
    • To activate DPM logic in SQL 2014:  Ensure SQL is at the required SQL build level, activate Indirect Checkpoint for the relevant databases, and enable trace flag 3449.
    • https://blogs.msdn.microsoft.com/psssql/2017/06/29/sql-server-large-ram-and-db-checkpointing/
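As a sample of how sys.dm_db_log_stats can be queried for the smart-backup scenario described earlier in this list (the 100 MB threshold is an arbitrary example; verify column availability against your build):

-- Databases whose log has grown more than ~100 MB since the last log backup,
-- along with the VLF count and the current truncation holdup reason.
SELECT d.name,
       ls.log_backup_time,
       ls.log_since_last_log_backup_mb,
       ls.total_vlf_count,
       ls.log_truncation_holdup_reason
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_stats(d.database_id) AS ls
WHERE d.recovery_model_desc <> 'SIMPLE'
  AND ls.log_since_last_log_backup_mb > 100;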

 

Recent Training and Technical Guides

 

Monthly Script and Tool Tips

 

Fany Carolina Vargas | SQL Dedicated Premier Field Engineer | Microsoft Services


Are we functional (part deux)?


 It’s been over seven years since Windows 7 launched. Inside Microsoft, one of the more controversial elements of that launch was the change Steven Sinofsky made to the Windows organization at the start of that product’s planning and development. Coming over from Office, Steve switched Windows’ structure from having many product unit managers (PUMs running lots of small businesses) to having a functional org structure (separate disciplines reporting to a single cross-disciplinary business leader).

Windows 7 was a huge success, so many big groups at Microsoft were quick to switch to a functional org structure. (We called it, “getting Sinofskied.”) As I wrote back then in Are we functional?, “let’s not get too hasty. Knee-jerk nitwits who act before they think are doomed to repeat old failures in new ways.” Well, it’s been seven years. Was switching to a functional org structure the answer, or are we better off as a loosely coupled collection of individual small businesses? Neither.

Eric Aside

Historians might point out that shortly after Steven Sinofsky shipped Windows 8, he was seeking work outside of Microsoft, as were nearly all his leadership team members following the release of Windows 8.1.

Also, previously Sinofskied organizations (including Windows) are now a mix of functional and product organizations, instead of being purely functional below a single leader. PUMs haven’t returned, but there are combined engineering teams (almost no pure testers), as well as separate groups of program managers (PMs), data scientists, AI folks, and designers. These are interesting data points, but don’t directly speak to the efficacy of a functional org structure.

A blast from the past

Here’s my take on Windows from my column back in 2010: “At what level should an organization switch from a product structure to a functional structure? At the business owner. Windows is a single business—we don’t sell the product in pieces. It makes sense to be functional below Sinofsky.” I rationalized this choice as follows: “Functional organizations share one business plan, so large functional organizations can’t radically change their business plans overnight. That’s prudent for a large product like Windows, but may not make sense for rapidly changing areas.”

Life in the software industry shifts quickly. Later that same year (2010), I published There’s no place like production, describing how services can release software multiple times a day, using exposure control to test and experiment in production. Today, Windows uses exposure control with preview releases of the operating system. Windows also now ships on Xbox, phone, IoT, server, Hololens, desktop, and tablet, voiding my assertions that Windows is a single business that can’t change plans overnight. Instead, Windows has become an agile business that broadly releases previews every week or two on multiple platforms. It’s a phenomenal change that bodes well for the company, but it also means we must rethink the org structure.

Windows is still a large collection of groups that must plan together and align to do big things. However, each team must stay small to be agile and responsive to the market. At a high level (VPs and directors), we’re coordinated and aligned, and at a low level (feature teams), we’re agile and responsive. The bridge between the two levels is made up of group managers, who run their small businesses aligned to the larger Windows business. It all sounds good in theory, but how are we doing in practice? Meh.

Eric Aside

Even when Steve originally Sinofskied Windows, he had general managers in charge of hardware and Internet Explorer, which were separate businesses from Windows.

What are your overheads?

As I uncover in Span sanity—ideal feature teams, the least expensive and most productive feature team is six people: one PM, one engineering lead, and four engineers. Even if you think feature teams should be a bit larger or smaller, there’s no doubt that they are the focal point of productivity and responsiveness in our new world, where even Windows is a service. They need to make quick decisions daily about the priorities and direction of their shared component within the framework of divisionwide plans and priorities. Self-directed feature teams are the smallest cross-functional unit.

However, even self-directed feature teams need higher-level direction from time to time. People issues, plan changes, and cross-team collaboration often escalate to group managers. How many group managers are needed per feature team to handle these escalations? One engineering manager (EM) should do.

There’s an argument that adding a group program manager (GPM) might be better. The GPM would support and grow the feature team PMs, provide a counterpoint for the EM, and halve the EM’s management responsibilities. Since many EMs have little or no experience growing PMs, and might mishandle or ignore PMs in their charge, the extra GPMs might be worth the $450K per year they each cost. Personally, I’d first try training and upgrading EMs in order to invest the half-million elsewhere.

Unfortunately, most feature teams default to having two or even three group managers supporting them. That’s as much as a million dollars a year of excess overhead per group of feature teams. What’s even worse is the substantial make-work that excessive group managers inevitably cause. We can and should do better.

Eric Aside

For more on group manager escalation, read Escalation acceleration and A manager’s manager.

Get in line

If you decrease the number of group managers, you also reduce the need for even pricier directors and VPs. Naturally, this can be overdone. We need a diverse collection of experienced people to plan, design, and drive our major initiatives and scenarios. They just don’t all need to directly manage feature teams.

The fewer group managers we have, the easier it becomes to gain alignment (fewer cooks, fewer captains). It’s true that unruly group managers might cause the same kinds of trouble that unruly PUMs did years ago, but there are two differences.

  • PUMs were often tasked with running a somewhat independent business, even involving business managers and business development. Group managers are tasked with running an aligned business within the direction of their division and Microsoft.
  • PUMs added a layer of management between directors and group managers. Reducing group managers drops that layer of indirection, decreasing the game of telephone up and down the org.

There’s a longstanding myth that adding more people increases output. When it comes to feature team size and overhead, the opposite is often true. If you want more results, and better aligned results, it’s frequently best to reduce the number of people involved. Instead, spend that money and those talents empowering additional people and organizations on the planet to achieve more. We certainly have more than enough problems to solve and customers to serve.

Eric Aside

Where do data scientists, AI folks, designers, and specialized testers fit? Just like marketing, business development, artists, content publishers, and other specialized disciplines, not every feature team needs these specialists. Instead, they can report to a group manager, director, or VP, depending on that org’s scale of need.

World of tomorrow

Seven years after the release of Windows 7 and the Microsoft era of being Sinofskied, divisions and groups are beginning to find more nuanced ways of organizing. Many groups have combined development, test, and operations into DevOps teams. Agile development is now a given, instead of a threat. Designers and design are now associated with success. Software is more componentized; open source is more welcome; and data, data scientists, experimentation, and AI are more essential to running our business.

It’s an exciting time to be a software engineer. Streamlining our org structure to reduce levels of hierarchy, drop excess overhead, and right-size agile feature teams is more important than ever to respond quickly to changing markets and technology. Yes, this means moving folks around, but that can expand our reach and enable our customers to achieve more.

With so much change, some longtime leaders and experienced engineers will resist altering how they work and how they are organized. It’s natural to feel uneasy, but holding back can lead to org and execution constipation—nobody wants that. Take a laxative, welcome all these wonderful advances, and embrace the change. Let’s make ourselves better prepared to face a fruitful future.

Configuring Reporting Services 2016 with ARR


Recently, I was working on a deployment where we needed to configure SQL Server Reporting Services 2016 with ARR so that Reporting Services is exposed over the internet without exposing the machine itself. If you are looking for such an implementation, this guide is for you. Please note that I've tested this against Reporting Services 2016, but it should be much the same for prior versions and for Reporting Services 2017.

  1. Make sure that you've installed the ARR module on the machine that is hosting IIS.
  2. Open INETMGR and click on the server name.
  3. In the right-hand pane, you should see Application Request Routing. Double-click to open it.
  4. Under the Actions pane on the right-hand side, click on "Server Proxy Settings…"
  5. Click on Enable proxy and close INETMGR.
  6. Open the "applicationHost.config" file located under "C:\Windows\System32\inetsrv\config".
  7. Make sure you keep a backup of the file.
  8. Search for the <rewrite> element within the file. It should be located underneath the "<proxy enabled="true" />" element.
  9. Replace the <rewrite> </rewrite> element with the following:
    <rewrite>
      <globalRules>
        <rule name="ARR_server_proxy_SSL" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
          <match url="*" />
          <conditions>
            <add input="{HTTPS}" pattern="on" />
          </conditions>
          <action type="Rewrite" url="https://xxxxxxxxxxxx/{R:0}" />
          <serverVariables>
            <set name="HTTP_X_ORIGINAL_ACCEPT_ENCODING" value="{HTTP_ACCEPT_ENCODING}" />
            <set name="HTTP_ACCEPT_ENCODING" value="" />
          </serverVariables>
        </rule>
        <rule name="ARR_server_proxy" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
          <match url="*" />
          <action type="Rewrite" url="http://xxxxxxxxxxxxx/{R:0}" />
          <serverVariables>
            <set name="HTTP_X_ORIGINAL_ACCEPT_ENCODING" value="{HTTP_ACCEPT_ENCODING}" />
            <set name="HTTP_ACCEPT_ENCODING" value="" />
          </serverVariables>
        </rule>
      </globalRules>
      <outboundRules>
        <clear />
        <rule name="handleCompression" preCondition="Encoding">
          <match serverVariable="HTTP_ACCEPT_ENCODING" pattern="^(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="true" />
          <action type="Rewrite" value="{HTTP_X_ORIGINAL_ACCEPT_ENCODING}" />
        </rule>
        <rule name="Outbound" preCondition="json" stopProcessing="true">
          <match filterByTags="None" pattern="(.*)//xxxxxxxxxxxxx/(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="true">
          </conditions>
          <action type="Rewrite" value="{R:1}//xxxxxxxxxxxxx/{R:2}" />
        </rule>
        <preConditions>
          <preCondition name="json">
            <add input="{RESPONSE_CONTENT_TYPE}" pattern="application/json.*" />
          </preCondition>
          <preCondition name="Encoding">
            <add input="{HTTP_X_ORIGINAL_ACCEPT_ENCODING}" pattern=".*" />
            <add input="{RESPONSE_CONTENT_TYPE}" pattern="application/json.*" />
          </preCondition>
        </preConditions>
      </outboundRules>
      <allowedServerVariables>
        <add name="HTTP_ACCEPT_ENCODING" />
        <add name="HTTP_X_ORIGINAL_ACCEPT_ENCODING" />
      </allowedServerVariables>
    </rewrite>

  10. Do ensure that the host name values under the global rules are changed to the appropriate server URL that hosts Reporting Services, as shown below:

    <rule name="ARR_server_proxy_SSL" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
      <match url="*" />
      <conditions>
        <add input="{HTTPS}" pattern="on" />
      </conditions>
      <action type="Rewrite" url="https://SSRS Server name from URL/{R:0}" />

    <rule name="ARR_server_proxy" enabled="true" patternSyntax="Wildcard" stopProcessing="true">
      <match url="*" />
      <action type="Rewrite" url="http://SSRS Server name from URL/{R:0}" />

  11. Under the Outbound rules, replace the pattern with the Report Server URL name and the rewrite action with the ARR server URL name, as shown below:

    pattern="(.*)//SSRS Server name from URL/(.*)"

    <action type="Rewrite" value="{R:1}//ARR Server URL name/{R:2}" />

  12. Restart IIS using the command IISRESET.

Do note that this rule is written for both HTTP and HTTPS, so whichever one you use, it should continue to work. The authentication we enabled for the ARR server was Anonymous, and for Reporting Services we used Basic authentication (RSWindowsBasic).

 

Hope this helps!

Selva.

[All posts are AS-IS with no warranty and support]

Write your own REST Web Server using C++ using CPP REST SDK (casablanca)


To start, download the C++ REST SDK from GitHub and build the library. Just open the .sln file in the root folder, restore the NuGet packages, and build. That's all.

The compilation results in a 4.9 MB file. Now you can build your own web server. You just have to include some headers.

  • #include “cpprest/json.h”
  • #include “cpprest/http_listener.h”
  • #include “cpprest/uri.h”
  • #include “cpprest/asyncrt_utils.h”

The namespaces to use are:

  • using namespace web;
  • using namespace http;
  • using namespace utility;
  • using namespace http::experimental::listener;

The first thing to declare is a uri_builder object. In my case, this is "http://localhost:34568/MyServer/Action/".  Now we can create a MyServer class that exposes endpoints.

The class contains an http_listener element.

Here is the implementation of the server. First, in the constructor, we expose all the endpoints. In this implementation, the endpoints simply return OK, but you can also return JSON data; that is supported by the C++ REST SDK. The missing part is the wmain that instantiates the MyServer class; a sketch of the whole thing follows below.
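Since the post shows the actual implementation only as screenshots, here is a minimal sketch along the same lines; the class name and URI follow the post, while the handler bodies and everything else are assumptions.

#include <iostream>
#include <string>

#include "cpprest/http_listener.h"
#include "cpprest/json.h"
#include "cpprest/uri.h"

using namespace web;
using namespace http;
using namespace utility;
using namespace http::experimental::listener;

// Minimal server sketch: one handler per verb, each simply replying 200 OK.
class MyServer
{
public:
    explicit MyServer(const string_t& url) : m_listener(url)
    {
        m_listener.support(methods::GET,  [](http_request request) { request.reply(status_codes::OK); });
        m_listener.support(methods::POST, [](http_request request) { request.reply(status_codes::OK); });
        m_listener.support(methods::PUT,  [](http_request request) { request.reply(status_codes::OK); });
        m_listener.support(methods::DEL,  [](http_request request) { request.reply(status_codes::OK); });
    }

    pplx::task<void> open()  { return m_listener.open();  }
    pplx::task<void> close() { return m_listener.close(); }

private:
    http_listener m_listener;
};

int wmain()
{
    uri_builder uri(U("http://localhost:34568/MyServer/Action/"));
    MyServer server(uri.to_uri().to_string());
    server.open().wait();

    std::wcout << L"Listening, press ENTER to exit." << std::endl;
    std::wstring line;
    std::getline(std::wcin, line);

    server.close().wait();
    return 0;
}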

As you can see, it is very simple: one handler for each verb and you are online.

The source code is available here.

URL access with rc:Toolbar command fails with rsRenderingExtensionNotFound


Recently I was working on a scenario where Reporting Services 2016 is configured in SharePoint integrated mode (applicable to both SharePoint 2013 and 2016). When you use URL access with the rc:Toolbar command, the reports fail with the following error:

You have attempted to use a rendering extension that is either not registered for this report server or it is not supported in this edition of Reporting Services. (rsRenderingExtensionNotFound)

Upon further investigation, we found that when you use the rc:Toolbar command, Reporting Services attempts to render the report explicitly in HTML5 format as opposed to RPL format. The HTML5 rendering extension should have been configured by default; since it was not, we have to register it manually as shown below. I suspect the reason is that either the integration is with SharePoint 2013, or SharePoint 2013 has been upgraded to SharePoint 2016.

  1. Open the SharePoint management shell as an Administrator.
  2. Run the command: $app = Get-SPRSServiceApplication -Name "xxxxxxxxxxxx"
  3. Note that the name to use in the above step can be taken from SharePoint Central Administration -> Manage service applications (under Application Management); look for "SQL Server Reporting Services Service Application" as shown below:
  4. After running the command in step 2, run the following command:
    New-SPRSExtension -identity $app -ExtensionType "Render" -name "HTML5" -TypeName "Microsoft.ReportingServices.Rendering.HtmlRenderer.Html5RenderingExtension,Microsoft.ReportingServices.HtmlRendering" -ExtensionConfiguration "<DeviceInfo><DataVisualizationFitSizing>Approximate</DataVisualizationFitSizing></DeviceInfo>"
  5. This should give you an output as shown below:
  6. Do an IISRESET from a command prompt in admin mode.
  7. Now render the Report with the command rc:Toolbar. The URL should resemble the following:
    http://xxxxx/_vti_bin/reportserver?http://xxxxx/Shared Documents/HelloWorld.rdl&rs:Command=Render&rc:ToolBar=False

That should ensure that Reporting Services 2016 in SharePoint mode is now capable of honoring the rc:Toolbar command. Do note that this issue does not surface when you use the URL against Reporting Services 2016 in Native mode.

If you would like to remove this extension or any other extension in the future, use the following command preceded by #2:

Remove-SPRSExtension -identity $app -ExtensionType "Render" -name "HTML5"

In order to get the correct Extension name to use in the cmdlets, use the following PowerShell script to list all the Rendering extensions registered with Reporting Services instance.

$apps = Get-SPRSServiceApplication
foreach ($app in $apps)
{
Write-host -ForegroundColor "yellow" Service App Name $app.Name
Get-SPRSExtension -identity $app -ExtensionType "Render" | select name,extensiontype | Format-Table -AutoSize
}

Similarly, you can use the ExtensionType parameter value as “Data” instead of “Render” in the above script to list all the registered Data extensions.

Ref: PowerShell cmdlets for Reporting Services SharePoint Mode

Hope this helps!

Selva.

[All posts are AS-IS with no warranty and support]

 

Today We Kick Off Our First Annual Award Cycle. Congratulations to all Renewing MVPs!


The day has finally arrived! We’re excited to recognize nearly 1400 renewing and 36 new Microsoft Most Valuable Professionals for their contributions to the Microsoft community.

These technical experts have received the MVP Award for their deep commitment to technology, and for eagerly sharing their knowledge of Microsoft solutions with the world. Through speaking engagements, providing expert feedback, contributing content, organizing events and more, MVPs play a huge role in pushing the global tech community forward.

Since February 2017, the MVP Program has awarded and announced new MVPs every month – it's what we call 'near real time.' However, today marks the first time we're celebrating all renewing MVPs with just one annual award cycle! This change aims to unify the program and provide a more consistent experience for our MVPs.

MVPs are nominated by Microsoft team members, other community members, and sometimes themselves. They are evaluated based on technical expertise, leadership, and their generous community contributions. Check out some MVP Twitter buzz below!

Want to become an MVP, or know someone who'd be a good fit? Learn what it takes to become one here, and then head to our nomination page. If you haven't been renewed as an MVP this time around, don't forget to check out MVP Reconnect, Microsoft's program to bring together former MVPs.

Preparing for Node.js Chatbot Development on macOS (Setting Up the Prerequisite Environment)


Introduction (and an excuse)

A while back I started writing a series on building a cogbot with Node.js, but between work and various other things, four months have passed and I still haven't finished it…

In the meantime, while working on a hands-on lab, I found that annotating a translation of good existing content was more efficient than writing everything myself, so I'm posting this strictly as supporting content (in short, that's my excuse).

 

Prerequisite environment (verified environment)

I have not been able to verify any other environments, so I can't say whether the steps will work elsewhere, but they should still be a useful reference for the procedure.

 

Setting up the prerequisite environment

Now let's set up the prerequisite environment. Things go more smoothly if you are logged in to macOS with administrator privileges.

Installing Node.js

  1. Go to https://nodejs.org/en/download/.
  2. Download the latest Node.js and npm bundle.
    Select LTS, then click the Apple (macOS) icon and the download starts automatically.
  3. Launch the installer and follow the wizard to complete the installation.
     

     



     

     



    That completes the Node.js installation.

Installing Visual Studio Code

  1. Go to https://code.visualstudio.com/download.
  2. Download the latest Visual Studio Code.
  3. Unzip the downloaded file and copy it to a location of your choice (such as Applications). Launch Visual Studio Code.app (no special installation steps are required). The following warning is shown only on the first launch.

    The rest is a matter of preference, but if you want to launch it from Terminal, it is handy to set up a PATH entry or an alias.

Installing the Bot Framework Emulator

  1. Go to https://github.com/Microsoft/BotFramework-Emulator/releases/tag/v3.5.29.
  2. Download botframework-emulator-3.5.29-mac.zip.
  3. Unzip the downloaded file and copy it to a location of your choice (such as Applications). Launch botframework-emulator.app (no special installation steps are required). The following warning is shown only on the first launch.
    The rest is a matter of preference, but if you want to launch it from Terminal, it is handy to set up a PATH entry or an alias.
    In the second half of the hands-on exercises, when you need to run multiple instances, you may also launch it from Terminal with the open command (see the sketch below).
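A quick sanity check of the setup from Terminal might look like this; the paths assume the apps were copied to /Applications.

# Verify the Node.js and npm installs (versions will vary)
node -v
npm -v

# Launch the tools from Terminal
open -a "Visual Studio Code"
open /Applications/botframework-emulator.app
open -n /Applications/botframework-emulator.app   # -n starts an additional instance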

References

 

The information in this post (including attachments and linked content) is current as of the date of writing and is subject to change without notice.

An overview of SSDT


This post is provided by Senior App Dev Manager, Fadi Andari who provides some history around Data Tools and a practical walk-through of SSDT.


I am the first to admit that the SQL Server/Visual Studio relationship has been very confusing for the past few years, especially when it comes to data tools and source control. In this post, I will focus on the tools, leaving source control, continuous integration, and deployment to future posts.

The first emergence of SQL Server Data Tools was during the release of Visual Studio 2008 and SQL Server 2008. The tool was called “Business Intelligence Developer Studio” (BIDS), which was a Visual Studio IDE shell that only served Business Intelligence projects such as Analysis Services, Integration Services, and Reporting Services. It also provided the ability to integrate with source control as well as providing tools that allowed Schema and Data comparison, unit testing, and test data generation.

The same functionality was added to the full version of Visual Studio 2008 under “SQL Server Development Tools”, also known as “Data Dude.”

Visual Studio 2010, on the other hand, did not provide Business Intelligence tools; as a result, developers still needed Visual Studio 2008 to develop any BI projects, such as Reporting Services. After a period of uncertainty regarding Visual Studio 2010 support for BI development, the new "SQL Server Data Tools" (SSDT) was announced as the replacement for "Data Dude."

SQL Server 2012 included SSDT as an optional component which, if selected, installed a Visual Studio 2010 shell. On the other hand, the full version of Visual Studio 2010 provided SSDT as a downloadable feature.

Visual Studio 2012 still supported both the standalone integrated shell and the full VS versions through updates.

Visual Studio 2013 has SQL Server tooling built in, shipped as part of the core product. The SQL Server tooling is also built into VS Express for Web and Express for Windows Desktop.

Visual Studio 2015 and 2017 – Since SQL Server tooling is included in Visual Studio, the updates will be pushed through Visual Studio Update and users will be prompted to do so when Visual Studio is open. If you’d like to check for updates manually, open Visual Studio 2015 and choose the Tools > Extensions and Updates menu. SQL Server tooling updates will appear in the Updates list.

Other Data Tools in Visual Studio

  • Redgate Data Tools – now included in Visual Studio Enterprise 2017 and available to developers as a part of Visual Studio 2017 installation at no additional cost.
  • Redgate’s SQL Search is now available across all Visual Studio 2017 editions, including Visual Studio 2017 Community and Professional.
  • Redgate’s ReadyRoll Core and SQL Prompt Core are available for Visual Studio 2017 Enterprise subscribers.

Difference between Redgate’s ReadyRoll and SSDT:

SSDT is a state-based approach. For every version, the definition of each object in the database is stored in source control. At deployment time, the target database is compared to the state in source control (via a DacPac) and a deployment script is generated to update the target database to that specific version.

ReadyRoll Core is a migrations-based approach. ReadyRoll Core generates a migration script for each change at development time. It can be edited so that developers have complete control over what will happen at deployment time. The migration scripts are stored in source control. At deployment time, the migration scripts are stitched together to generate the deployment script. Each migration script is only run against a target environment once.

Using SSDT in Visual Studio to manage a SQL Database project.

In this scenario, both the source ("SSDT Test") and destination ("SSDT Test1") databases are located on an instance of SQL Server installed on the local computer. Create the source database and run the following script against it to create the sample schema:

CREATE TABLE [dbo].[Orders](
    [OrderID] [int] NOT NULL,
    [OrderDate] [date] NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[OrderDetails](
    [OrderDetailID] [int] NOT NULL,
    [OrderID] [int] NULL,
    [ProductID] [int] NULL,
    [Quantity] [int] NULL,
    [UnitPrice] [money] NULL
) ON [PRIMARY]
GO

CREATE VIEW [dbo].[View_AllOrders]
AS
SELECT dbo.Orders.OrderID, dbo.Orders.OrderDate, dbo.OrderDetails.OrderDetailID, dbo.OrderDetails.ProductID, dbo.OrderDetails.Quantity, dbo.OrderDetails.UnitPrice,
       dbo.OrderDetails.Quantity * dbo.OrderDetails.UnitPrice AS Total
FROM dbo.Orders INNER JOIN
     dbo.OrderDetails ON dbo.Orders.OrderID = dbo.OrderDetails.OrderID
GO

CREATE PROCEDURE [dbo].[Proc_ListOrders]
AS
BEGIN
    SET NOCOUNT ON;
    SELECT * FROM View_AllOrders
END
GO

Add data to the Orders and OrderDetails tables:

INSERT INTO Orders (OrderID, OrderDate)
VALUES (1, '1/1/2017'),
       (2, '1/3/2017'),
       (3, '1/5/2017'),
       (4, '1/11/2017'),
       (5, '1/19/2017')
GO

INSERT INTO OrderDetails (OrderDetailID, OrderID, ProductID, Quantity, UnitPrice)
VALUES (1,1,1,2,13.10),
       (1,1,11,2,4.3),
       (2,1,15,1,7.10),
       (3,1,12,3,9.80),
       (4,2,1,2,6.16),
       (5,2,3,12,2.3),
       (6,2,5,4,9.10),
       (7,2,7,2,16.23),
       (8,3,19,1,6.47),
       (9,3,10,1,4.99),
       (10,3,11,8,13.78),
       (11,4,8,2,7.23),
       (12,4,21,3,3.43),
       (13,4,25,4,23.0),
       (14,4,18,1,43.0),
       (15,5,9,1,2.0),
       (16,5,15,2,31.0)
GO

In Visual Studio:

Create a new Database Project and name it "Database1".

ssdt1

ssdt2

Right click the project name then select Import –> Database

ssdt3

Create a connection to SQL Server and select the “SSDT Test” database.

Click on “Select Connection”

ssdt4

Click on “Browse”

Type “localhost” for Server Name

Select “SSDT Test” for Database Name

ssdt5

Click on Connect

ssdt6

Click on Start

ssdt7

Once the import is complete Click on Finish

ssdt8

You can create a snapshot for establishing a baseline at this point.

ssdt9

ssdt10

You can also add the solution to source control for continuous deployment and integration, which will be discussed in another blog post.

Introducing the first change and publishing the solution to our destination database:

In Solution Explorer click on “Proc_ListOrders” to open it.

ssdt11

Replace the content of the stored procedure with the following:

CREATE PROCEDURE Proc_ListOrders
    @OrderID INT = NULL
AS
BEGIN
    SET NOCOUNT ON;
    IF @OrderID IS NULL
        SELECT * FROM View_AllOrders ORDER BY OrderID
    ELSE
        SELECT * FROM View_AllOrders WHERE OrderID = @OrderID ORDER BY OrderID
END

Create deployment profiles for both the source and destination databases:

Right click on the project name in Solution Explorer and select “Build”

ssdt12

Right click on the project name in Solution Explorer and select “Publish”

ssdt13

Click Edit

ssdt14

Select “Browse”

Type the server name

Click “OK”

ssdt15

Type “SSDT Test” for the database name

Type “Database-Source.sql” for the Publish Script Name

Click “Save Profile As”

ssdt16

Type "Database-Source.publish.xml" for the File name and click "Save"

ssdt17

Create a profile for the destination Database

Right click on the project name in Solution Explorer and select “Publish”

ssdt18

Click Edit

ssdt19

Select “Browse”

Type the server name

Click “OK”

ssdt20

Type “SSDT Test1” for the database name

Type “Database-Destination.sql” for the Publish Script Name

Click “Save Profile As”

ssdt21

Type “Database-Destination.publish.xml” for the File name and click “Save”

ssdt22

At this point you should have two publishing profiles

ssdt23

Double click "Database-Source.publish.xml" to update SSDT Test with the changes to the stored procedure

ssdt24

Click on “Publish”

Double click "Database-Destination.publish.xml" to update SSDT Test1 with all changes from the source database

ssdt25

From this point forward you can publish any changes to both the source and destination databases by publishing the databases as above or by using command line tools such as “sqlpackage.exe” as below:

sqlpackage.exe /Action:Publish /SourceFile:"C:\Tmp\Database1\Database1\bin\Debug\Database1.dacpac" /Profile:"C:\Tmp\Database1\Database1\Database-Source.publish.xml"

sqlpackage.exe /Action:Publish /SourceFile:"C:\Tmp\Database1\Database1\bin\Debug\Database1.dacpac" /Profile:"C:\Tmp\Database1\Database1\Database-Destination.publish.xml"


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.


Migrating to SAP HANA on Azure


The S/4HANA platform on the SAP HANA DBMS provides many functional improvements over SAP Business Suite 7, and additions to business processes that should provide customers with a compelling reason to migrate to SAP S/4HANA with SAP HANA as the underlying DBMS. Another reason to migrate to S/4 HANA is that support for all SAP Business Suite 7 applications based on the ABAP stack will cease at the end of 2025, as detailed in SAP Note #1648480 – “Maintenance for SAP Business Suite 7 Software”. This SAP Note details support for all SAP Business Suite 7 applications and maintenance dates for the SAP Java Stack versions.

Note: some SAP references linked in this article may require an SAP account.

HANA migration strategy

SAP HANA is sold by SAP as a high-performance in-memory database. You can run most of the existing SAP NetWeaver-based applications (for example, SAP ERP, or SAP BW), on SAP HANA. Functionally this is hardly different from running those NetWeaver applications on top of any other SAP supported DBMS (for example, SQL Server, Oracle, DB2). Please refer to the SAP Product Availability Matrix for details.

The next generation of SAP platforms, S/4HANA and BW/4HANA, are built specifically on HANA and take full advantage of the SAP HANA DBMS. For customers who want to migrate to S/4HANA, there are two main strategies:

  • Migrate directly to S/4HANA, combining the move to the SAP HANA DBMS with the functional move to the new application suite in one project.
  • First perform a technical migration of the existing SAP Business Suite applications onto the SAP HANA DBMS, and move to S/4HANA later as a second step.

In discussions about migrating to HANA, it is important to determine which strategy to follow. The impact and benefit of each option are quite different from the perspective of SAP, the customer, and Azure. The initial step of the second option is only a technical migration with very limited benefit from a business process point of view, whereas the migration to S/4HANA (either directly or as a second step) involves a functional migration. A functional migration has more impact on the business and business processes, and as such takes more effort. SAP S/4HANA usually comes with significant changes to the mapping of business processes. Therefore, most S/4HANA projects we are pursuing with our global system integrators require rethinking the complete mapping of business processes into different SAP and LOB products, and the usage of SaaS services.

HANA + cloud

Besides initiating a rearchitecting of the business process mapping and integration based on S/4HANA, looking at S/4HANA and/or the SAP HANA DBMS prompts discussions about moving SAP workloads into public clouds, like Microsoft Azure. Leveraging Azure usually minimizes migration cost and increases the flexibility of the SAP environment. The fact that SAP HANA needs to keep most data in memory usually increases costs for the server infrastructure compared to the server hardware customers have been using.

Azure is an ideal public cloud to host SAP HANA-based workloads with a great TCO. Azure provides the flexibility to engage and disengage resources which reduces costs. For example, in a multitier environment like SAP, you could increase and decrease the number of SAP application instances in a SAP system based on workload. And with the latest announcements, Microsoft Azure offers the largest server SKUs available in the public cloud tailored to SAP HANA.

Current Azure SAP HANA capabilities

The diagram below shows the Azure certifications that run SAP HANA.

HANA large instances provide a bare metal solution to run large HANA workloads. A HANA environment can currently be scaled up to 32 TB using multiple units of HANA large instances, with the potential to move up to 60 TB as soon as the newly announced SKUs are available. HANA large instances can be purchased with a 1-year or 3-year commitment, depending on large instance size. With a 3-year commitment, customers get a significant discount, providing high performance at a very competitive price. Because HANA large instances are a bare metal solution, the ordering process differs from ordering/deploying an Azure Virtual Machine (VM). You can just create a VM in the Azure Portal and have it available in minutes. Once you order a HANA large instance unit, it can take up to several days before you can use it. To learn about HANA large instances, please check out the SAP HANA (large instances) overview and architecture on Azure documentation.

To order HANA large instances, fill out the SAP HANA on Azure information request.

The above diagram shows that not all Azure SKUs are certified to run all SAP workloads. Only larger VM SKUs are certified to run HANA workloads. For dev/test workloads you can use smaller VM sizes such as DS13v2 and DS14v2. For the highest memory demands, customers seeking to migrate their existing SAP landscape to HANA on Azure will need to use HANA large instances.

The new Azure VM sizes were announced in Jason Zander’s blog post. Certification for these VM sizes, as well as for some existing ones, is on the Azure roadmap. These new VM sizes will allow more flexibility for customers moving their SAP HANA, S/4HANA and BW/4HANA instances to Azure. You can check for the latest certification information on the SAP HANA on Azure page.

Combining multiple databases on one large instance

Azure is a very good platform for running SAP and SAP HANA systems. Using Azure, customers can save costs compared to an on-premises or hosted solution, while having more flexibility and robust disaster recovery. We’ve already discussed the benefits for large HANA databases, but what if you have smaller HANA databases?

Smaller HANA databases, common to small and midsize customers or departmental systems, can be combined on a single instance, taking advantage of the power and cost reductions that large instances provide. SAP HANA provides two options:

  • MCOS – Multiple components in one system
  • MDC – Multitenant database containers

The differences are detailed in the Multitenancy article on the SAP website. Please refer to SAP notes #1681092, #1826100, and #2096000 for more details on these multitenant options.

MCOS could be used for a single customer, while SAP hosting partners could use MDC to share HANA large instances between multiple customers.

Customers that want to run SAP Business Suite (OLTP) on SAP HANA can host the SAP HANA part on HANA large instances. The SAP application layer would be hosted on native Azure VMs and benefit from the flexibility they provide. Once M-series VMs are available, the SAP HANA part can be hosted in a VM for even more flexibility.

Momentum of SAP workload moving to Azure

Azure is enjoying great popularity with customers from various industries using it to run their SAP workloads. Although Azure is an ideal platform for SAP HANA, the majority of customers will still start by moving their SAP NetWeaver systems to Azure. This isn’t restricted to lift & shift scenarios running Oracle, SQL Server, DB2, or SAP ASE. Some customers move from proprietary on-premises UNIX-based systems to Windows/SQL Server, Windows/Oracle, or Linux/DB2-driven SAP systems hosted in Azure.

Many system integrators we are working with observe that the number of these customers is increasing. The strategy of most customers is to skip the migration of SAP Business Suite 7 applications to SAP HANA, and instead fully focus on the long term move to S/4HANA. This strategy can be summarized in the following steps:

  1. Short term: focus on cost savings by moving the SAP landscape to industry standard OS platforms on Azure.
  2. Short to mid-term: test and develop S/4HANA implementations in Azure, leveraging the flexibility of Azure to create (and destroy) proof of concept and development environments quickly without hardware procurement.
  3. Mid to long-term: deploy production S/4HANA based SAP applications in Azure.

Automate Preservation/Retention for OneDrive for Business sites using Office 365 Compliance Center PowerShell


I worked on a small project a few months back where we had to automate retention/preservation for OneDrive for Business sites in Office 365. Here is what the high level requirements were for the automation:

  1. An attribute of the user object in the on-premises Active Directory will determine whether the user’s OneDrive site should be placed on an indefinite hold. The attribute being used in this case was “extensionAttribute13”.
  2. The “extensionAttribute13” for each user in the on-premises Active Directory can either be empty/null or have the value “DONOTPURGE”. A value of “DONOTPURGE” indicates that the user’s OneDrive for Business site has to be put on legal hold. An empty/null value for the attribute means that the user’s OneDrive for Business site should NOT be put on hold, and the hold should be released if the site was previously on hold.

I decided to use Office 365 Compliance Center PowerShell to implement the automation. However, there were two major challenges in this implementation:

  1. We had to use a location-specific retention policy instead of an org-wide policy. As documented here, a location-specific policy can only contain 100 SharePoint/OneDrive sites. The number of users with a retention hold could be several hundred, so a single policy wouldn’t be enough to hold all the sites. The script had to be intelligent enough to dynamically create new policies if no empty “slots” were found in the existing policies. I mention “slots” here because a policy could have been full at some point (which would have triggered the creation of new policies), but some users in that policy later had their hold released, leaving empty “slots” that should be reused.
  2. We had to dynamically determine the user’s OneDrive for Business site URL in Office 365 based on the user object in Active Directory. One approach that we could have used is documented here. However, this approach wouldn’t work if, say, the script ran after a departed user’s profile had been marked for deletion but their OneDrive site had not yet been deleted (there is a default 30-day retention period), and an indefinite retention hold still had to be placed on that site. See this article to learn more about the default preservation/retention for OneDrive for Business sites. To address this challenge, I decided to use the user’s UPN, which is also stored in the on-premises Active Directory, to determine their OneDrive site URL dynamically (see the short sketch after this list). This approach is not entirely reliable, but it was the most workable option available.
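As a simplified sketch of that UPN-to-URL mapping (the user and tenant names below are made up; the full script further down performs the same substitutions):

$upn = "jane.doe@contoso.com"                                  # hypothetical user principal name
$odbSiteRoot = "https://contoso-my.sharepoint.com/personal"    # hypothetical OneDrive root for the tenant
# The personal site URL is derived by replacing '@', '.' and '-' in the UPN with underscores
$odbSite = $odbSiteRoot + "/" + $upn.Replace("@", "_").Replace(".", "_").Replace("-", "_")
# $odbSite now contains https://contoso-my.sharepoint.com/personal/jane_doe_contoso_com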

The first part of the script initializes all the configuration variables, which need to be adjusted for each environment. The script also assumes that the initial retention policy (“test” in this example) has already been created. Here is the initialization section of the script:

############################## Begin Configuration #####################################################
#The logFileLocation is a folder where logs will be created and it must already exist
$logFileLocation = "C:RetentionLogs"
#The Url of the SPO Service for this tenant
$spoTenant = "https://therazas-admin.sharepoint.com"
#OneDrive site root
$odbSiteRoot = "https://therazas-my.sharepoint.com/personal"
#The UPN of the tenant admin
$tenantAdmin = "admin@therazas.onmicrosoft.com"
#The encrypted password will be stored here. If we don't find the file, we will simply prompt for password and save it here
$passwordFile = [System.Environment]::GetFolderPath('ApplicationData') + "\AdminPassword.txt"
#The Compliance Center connection Uri. This won't need to be changed in most cases and it's global
$complainceCenterUri = "https://ps.compliance.protection.outlook.com/powershell-liveid/"
#Name of the preservation policy
$policyName = "test"
#Maximum sites per policy
$MaxSitesPerPolicy = 100
#The AD search root for users
$searchRoot = "LDAP://dc=tmr,dc=com"
#Attribute Name that we will query AD for
$attributeName = "extensionAttribute13"
#Value of attributeName that will put the ODB Site on hold
$litigationHoldValue = "DONOTPURGE"
############################## End Configuration #####################################################

Here are some of the functions in the script:

  1. Write-LogFile: A function that simply logs messages in a log file for troubleshooting and understanding of what the script is doing.
  2. Retrieve-Credentials: Saves/retrieves the password for the $tenantAdmin account to/from an encrypted file on the local computer. The file is stored at the $passwordFile path.
  3. Ensure-Policies: This function is responsible for creating additional retention policies if needed to ensure that one policy does not contain more than 100 sites. The new policies will be based on the initial policy name defined in the $policyName variable. The new policies will have a number attached to them (1, 2, 3… and so on).
  4. Get-SitesInPolicy: Returns all sites that are added to one of the preservation policies. We have to retrieve this information to determine which users should have their retention hold released if the attribute has been cleared.
  5. Add-SiteToPolicy: Finds an open “slot” in the policies that exist and adds the site to the first available slot.
  6. Remove-SiteFromPolicy: Removes the specified site from the policy. There could be multiple policies to check so we loop through each policy until we find the site.

The complete script is provided below. Don’t forget to update the configuration section and install the latest version of SharePoint Online management shell before running the script.

Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking

############################## Begin Configuration #####################################################
#The logFileLocation is a folder where logs will be created and it must already exist
$logFileLocation = "C:RetentionLogs"
#The Url of the SPO Service for this tenant
$spoTenant = "https://therazas-admin.sharepoint.com"
#OneDrive site root
$odbSiteRoot = "https://therazas-my.sharepoint.com/personal"
#The UPN of the tenant admin
$tenantAdmin = "admin@therazas.onmicrosoft.com"
#The encrypted password will be stored here. If we don't find the file, we will simply prompt for password and save it here
$passwordFile = [System.Environment]::GetFolderPath('ApplicationData') + "\AdminPassword.txt"
#The Compliance Center connection Uri. This won't need to be changed in most cases and it's global
$complainceCenterUri = "https://ps.compliance.protection.outlook.com/powershell-liveid/"
#Name of the preservation policy
$policyName = "test"
#Maximum sites per policy
$MaxSitesPerPolicy = 100
#The AD search root for users
$searchRoot = "LDAP://dc=tmr,dc=com"
#Attribute Name that we will query AD for
$attributeName = "extensionAttribute13"
#Value of attributeName that will put the ODB Site on hold
$litigationHoldValue = "DONOTPURGE"
############################## End Configuration #####################################################
$Session = $null
#Logs the desired message to the log file location
Function Write-LogFile ([String]$Message)
{
	$Message = "[" + [DateTime]::Now.ToString() + "] " + $Message
	if (Test-Path $logFileLocation)
	{
		$fileName = $logFileLocation + "\" +  [DateTime]::Now.ToShortDateString().Replace("/", "_") + ".txt"
		$Message | Out-File $fileName -Append -Force
	}
	else
	{
		$errorM = "Log File Location " + $logFileLocation + " not found..."
		Write-Output $errorM
		Write-Output $Message
	}
}

Function Retrieve-Credentials ([Boolean]$SPO=$false)
{
	   if (Test-Path $passwordFile)
      {
            $securePassword = Get-Content -Path $passwordFile | ConvertTo-SecureString
      }
      else
      {
            $securePassword = Read-Host -Prompt "Enter password" -AsSecureString
            $securePassword | ConvertFrom-SecureString | Out-File -FilePath $passwordFile -Force
      }
	  if ($SPO -eq $true)
	  {
	  	$creds = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($tenantAdmin, $securePassword)
	  }
	  else
	  {
	  	$creds = New-Object System.Management.Automation.PSCredential($tenantAdmin, $securePassword)
	  }
	  return $creds
}

#This function accepts $TotalSites as a parameter which is the total number of sites in the environment that have a preservation hold for ODB sites
Function Ensure-Policies ([Int]$TotalSites)
{
	$currentPolicyName = $policyName
	[Int]$reqNumber = $TotalSites/$MaxSitesPerPolicy
	if ($TotalSites%$MaxSitesPerPolicy -eq 0)
	{
		$reqNumber = $reqNumber - 1
	}
	#Let's make sure the "base" policy exists to avoid unintentional policy creation. This will throw an exception if policy does not exist
	$policy = $null
	try
	{
		$policy = Get-RetentionCompliancePolicy -Identity $policyName
	}
	catch
	{
		$policy = $null
	}
	if ($policy -eq $null)
	{
		throw [System.InvalidOperationException] "Compliance policy $policyName does not exist. Please check the name and try again."
	}
	#We need to ensure that we have $reqNumber of policies created with the naming format $policyName1, $policyName2 and so on...
	for($i = 1; $i -le $reqNumber; $i++)
	{
		$currentPolicyName = $policyName + $i
		$policy = $null
		try
		{
			$policy = Get-RetentionCompliancePolicy -Identity $currentPolicyName
		}
		catch
		{
			$policy = $null
		}
		if ($policy -eq $null)
		{
			#This means the policy does not exist. We need to create it
			$message = "Creating preservation policy $currentPolicyName with indefinite hold..."
			Write-LogFile $message
			$policy = New-RetentionCompliancePolicy -Name $currentPolicyName
			$rule = New-RetentionComplianceRule -Name "Indefinite Hold for $currentPolicyName" -Policy $currentPolicyName -PreservationDuration Unlimited
		}
	}
}

#Returns an ArrayList of all sites in the preservation policies
Function Get-SitesInPolicy
{
	$sitesList = New-Object System.Collections.ArrayList
	$currentPolicyName = $policyName
	$i = 0
	while ($true)
	{
		$policy = $null
		try
		{
			$policy = Get-RetentionCompliancePolicy -Identity $currentPolicyName
		}
		catch
		{
			$policy = $null
		}
		if ($policy -eq $null)
		{
			break
		}
		foreach ($location in $policy.SharePointLocation)
		{
			$x = $sitesList.Add($location.Name)
		}
		$i = $i + 1
		$currentPolicyName = $policyName + $i
	}
	return ,$sitesList
}

#Adds the specified site to an available "slot" in one of the policies.
Function Add-SiteToPolicy ([String]$SiteUrl)
{
	$currentPolicyName = $policyName
	$i = 0
	while ($true)
	{
		$policy = Get-RetentionCompliancePolicy -Identity $currentPolicyName
		if ($policy.SharePointLocation.Count -lt $MaxSitesPerPolicy)
		{
			$message = "Adding site " + $SiteUrl + " to the preservation policy " + $currentPolicyName
			Write-LogFile $message
			Set-RetentionCompliancePolicy -Identity $currentPolicyName -AddSharePointLocation $SiteUrl
			break
		}
		$i = $i + 1
		$currentPolicyName = $policyName + $i
	}
}

#Removes the specified site from the list of policies
Function Remove-SiteFromPolicy ([String]$SiteUrl)
{
	$currentPolicyName = $policyName
	$i = 0
	while ($true)
	{
		$policy = Get-RetentionCompliancePolicy -Identity $currentPolicyName
		foreach ($location in $policy.SharePointLocation)
		{
			if ($location.Name.Equals($SiteUrl))
			{
				$message = "Removing site " + $SiteUrl + " from the policy " + $currentPolicyName
				Write-LogFile $message
				Set-RetentionCompliancePolicy -Identity $currentPolicyName -RemoveSharePointLocation $SiteUrl
				return
			}
		}
		$i = $i + 1
		$currentPolicyName = $policyName + $i
	}
}
#Query the AD
try
{
	$old_ErrorActionPreference = $ErrorActionPreference
	$ErrorActionPreference = 'Stop'
	Write-LogFile "******************************Starting Script Execution******************************"
	$message = "Querying Active Directory for accounts on legal hold at path " + $searchRoot
	Write-LogFile $message
	$objOU = New-Object System.DirectoryServices.DirectoryEntry($searchRoot)
	$strFilter = "(&(objectCategory=User)("+$attributeName+"=" + $litigationHoldValue + "))"
	$objSearcher = New-Object System.DirectoryServices.DirectorySearcher
	$objSearcher.SearchRoot = $objOU
	$objSearcher.PageSize = 1000
	$objSearcher.Filter = $strFilter
	$objSearcher.SearchScope = "Subtree"
	$colProplist = "userPrincipalName"
	foreach ($i in $colPropList){$x = $objSearcher.PropertiesToLoad.Add($i)}
	$colResults = $objSearcher.FindAll()
	$message = "Found " + $colResults.Count + " result(s) from the AD search query"
	Write-LogFile $message
	$message = "Connecting to SPO service at url " + $spoTenant + " for validating ODB Sites"
	Write-LogFile $message
	$creds = Retrieve-Credentials
	Connect-SPOService -Url $spoTenant -Credential $creds
	$userTable = New-Object System.Collections.Hashtable($colResults.Count)
	foreach ($objResult in $colResults)
	{
		#Updated 4/15 to handle the null UPN scenario
		$upnValue = $objResult.Properties["userPrincipalName"][0]
		if ($upnValue -eq $null)
		{
			#User does not have UPN Set. Continue
			Write-LogFile "Found user without UPN set. Skipping..."
			continue
		}
		$upn = $upnValue.ToString()
		#Calculate the ODB Site Url
		$odbSite = $upn.Replace("@", "_")
		$odbSite = $odbSite.Replace(".", "_")
		$odbSite = $odbSite.Replace("-", "_")
		$odbSite = $odbSiteRoot + "/" + $odbSite
		$message = "Checking if ODB site for user " + $upn + " exists at " + $odbSite
		Write-LogFile $message
		try
		{
			$site = Get-SPOSite $odbSite
		}
		catch
		{
			$site = $null
		}
		if ($site -eq $null)
		{
			Write-LogFile "Site Not Found, Skipping this user..."
			continue
		}
		Write-LogFile "Site Found!"
		$userTable.Add($upn, $odbSite)
	}
	#$userTable now contains all users with valid ODB Sites.
	#Let's make sure we have the required number of policies created (1 Policy = 100 sites)
	#Connect to the compliance center
	$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri $complainceCenterUri -Credential $creds -Authentication Basic -AllowRedirection -WarningAction Ignore
	$x = Import-PSSession $Session
	Ensure-Policies -TotalSites $userTable.Count
	#Next, retrieve all ODB sites that are already in the preservation policies
	$sitesList = Get-SitesInPolicy
	#Now let's determine which sites we need to add to a policy
	$allUsers = $userTable.Keys.GetEnumerator()
	$addCount = 0
	while ($allUsers.MoveNext())
	{
		$currentUser = $allusers.Current
		$currentSite = $userTable[$currentUser]
		#Check if this site is already in the policy
		if ($sitesList.Contains($currentSite))
		{
			$message = "ODB Site for user " + $currentUser + " is already in the preservation policy. Skipping"
			Write-LogFile $message
			#Remove this site from the list so we can release the hold on the remaining sites
			$sitesList.Remove($currentSite)
		}
		else
		{
			try
			{
				Add-SiteToPolicy -SiteUrl $currentSite
				#Set-RetentionCompliancePolicy -Identity $policyName -AddSharePointLocation $currentSite
				$addCount = $addCount + 1
			}
			catch
			{
				Write-LogFile "Error Ocurred while adding site to policy."
				Write-LogFile $_.Exception.Message
			}
		}
	}
	$message = "Sucessfully Added " + $addCount + " site(s) to the preservation policy"
	Write-LogFile $message
	#Now let's remove the sites from the hold. Everything left in $sitesList needs to be removed. The user either does not exist in AD or does not have $litigationHoldValue set.
	$removeCount = 0
	foreach ($siteUrl in $sitesList)
	{
		try
		{
			#Set-RetentionCompliancePolicy -Identity $policyName -RemoveSharePointLocation $siteUrl
			Remove-SiteFromPolicy -SiteUrl $siteUrl
			$removeCount = $removeCount + 1
		}
		catch
		{
			Write-LogFile "Error Ocurred while removing site from policy."
			Write-LogFile $_.Exception.Message
		}
	}
	$message = "Sucessfully Removed " + $removeCount + " site(s) from the preservation policy"
	Write-LogFile $message
	Write-LogFile "******************************Ending Script Execution******************************"
}
catch
{
	Write-LogFile "Error Ocurred."
	Write-LogFile $_.Exception.Message
	#Updated 4/15 to handle the null UPN scenario
	Write-LogFile $_.ScriptStackTrace

}
finally
{
	#Clear the compliance center session
	if ($Session -ne $null)
	{
		Remove-PSSession $Session
	}
	$ErrorActionPreference = $old_ErrorActionPreference
}

Happy SharePointing!

Returning JSON data from a REST Web Server


To return data from a REST web server, you provide it in JSON format. To illustrate the sample (see my previous post), I have decided to return a simple structure containing:

  • std::vector<People> peoples
  • utility::string_t job

The People structure contains just:

  • utility::string_t name
  • double age

I have made a small modification to my PUT handler to return data:

The last piece of code is the part that produces the AsJSON() call.

For each structure, there is an AsJSON and a FromJSON method.
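The original post showed the conversion code as screenshots, which are not reproduced here. The following is a minimal sketch of what the People conversion methods might look like with the C++ REST SDK (cpprestsdk); the member names mirror the structure described above, but the exact implementation in the downloadable sample may differ:

#include <cpprest/json.h>

struct People
{
    utility::string_t name;
    double age;

    // Build a JSON object { "name": ..., "age": ... } from this structure
    web::json::value AsJSON() const
    {
        web::json::value result = web::json::value::object();
        result[U("name")] = web::json::value::string(name);
        result[U("age")] = web::json::value::number(age);
        return result;
    }

    // Rebuild a People instance from the JSON produced by AsJSON()
    static People FromJSON(const web::json::value& object)
    {
        People person;
        person.name = object.at(U("name")).as_string();
        person.age = object.at(U("age")).as_double();
        return person;
    }
};

The outer structure holding std::vector<People> and the job string would follow the same pattern, using web::json::value::array for the vector.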

You can see the result in the console output of the server. On the client side, call the method named ‘leave’.

You can download the C++ client and server.

How to Develop and Host a Proof-of-Concept Prototype on Azure App Services for Web Apps



Guest blog from Microsoft Student Partner Fangfang Hu from Imperial College London.

Fangfang Hu is an Electrical and Electronic Engineering student at Imperial College London

clip_image002

Introduction

I will be starting my second year in fall. I am from Singapore, where I have worked with a few research institutions and corporations in areas relating to hardware and software. Working with different types of technologies on various platforms has always been of great interest to me, and over the years I have been involved in using technology for purposes ranging from commercial to research and social entrepreneurship.

LinkedIn: https://www.linkedin.com/in/fangfanghu/

GitHub: https://github.com/ffhu1

In this tutorial, you will be introduced to creating a proof-of-concept prototype for a web app, specifically a static HTML site, and subsequently hosting it on Azure App Services.

We will be using Visual Studio Community 2017 as our editor.

Setting Up Your Project

Before you begin, ensure that you are signed in to your Azure account so that you can host your web app on Azure.

Start by creating a new project in Visual Studio.

clip_image004

Under Visual C#, select Web, then ASP.NET Web Application. Name your project and proceed by clicking OK.

clip_image006

For our purposes, we will be creating a static HTML page that utilises basic HTML, CSS and JavaScript through Bootstrap, hence an Empty template would suffice.

Bootstrap

Bootstrap is an open-source front-end web framework for developing responsive web sites, utilising HTML, CSS and JavaScript. It is ideal for designing and creating websites and web applications that can serve as proof-of-concept prototypes for presenting your idea.

In this tutorial, we will be creating our web site using Bootstrap in Visual Studio.

In the Solution Explorer, select your project (which is named MyWebApp in this tutorial).

From the Project menu, select Manage NuGet Packages.

clip_image008

In the NuGet tab, go under Browse and search bootstrap. Select bootstrap and click Install.

clip_image010
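If you prefer the Package Manager Console (Tools > NuGet Package Manager > Package Manager Console), the equivalent command should simply be:

Install-Package bootstrap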

After installation, you will now see files added for Bootstrap under Content. You will be referencing these files as you create the pages for your web site.

Creating a Page

clip_image012

Right-click your project and choose Add, then New Item. Select HTML Page and click OK.

Under the <head> element, we need to include these tags.

<meta charset="utf-8"/>

<meta name="viewport" content="width=device-width, initial-scale=1"/>

<link rel="stylesheet" href="Content/bootstrap.min.css"/>

<script src="Scripts/bootstrap.min.js"></script>

<link rel="stylesheet" href="Content/bootstrap-theme.min.css"/>


The first link tag loads the Bootstrap CSS file, while the script tag loads the Bootstrap JavaScript.

The last link tag retrieves the Bootstrap default theme. This can be changed depending on the design you have in mind for your page.

You can also set the text shown in the browser tab with a <title> element.

clip_image014

This is how your <head> element should look; it will allow you to refer to the Bootstrap files when creating your page.

Designing Your Page

Bootstrap allows for responsive web design, which means that your page will change depending on the size of the browser used to view the page.

This is done by dividing the page into 12 columns where text or other elements can appear. For more details about the various components you can use in your HTML page through Bootstrap, visit the link below.

Bootstrap components: http://getbootstrap.com/components/
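As a quick illustration of the grid system, a two-column layout using the Bootstrap 3 grid classes might look like the following (the 8/4 split is only an example):

<div class="container">
  <div class="row">
    <div class="col-md-8">Main content (8 of the 12 columns on medium screens and up)</div>
    <div class="col-md-4">Sidebar (the remaining 4 columns)</div>
  </div>
</div>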


Example

Here is an example of a static HTML page created using Bootstrap.

clip_image016

clip_image018

clip_image020

Source code: https://github.com/ffhu1/DONACO

Publishing Your Web App on Azure App Service (Visual Studio)

First, ensure that you are signed in to your Azure account.

Under your project name, go to Connected Services, then Publish.

clip_image022

Select Microsoft Azure App Service and Create New, then click Publish.

clip_image024

Here, you can deploy your web app on Azure App Services.

clip_image026

Additional resources offered through Azure are listed under the Services tab. For our purposes in this tutorial, where we are deploying a simple static HTML page, these are not required.

Click Create once you are finished.

The URL of your web app deployed through Azure App Services will be:

http://[name of your app].azurewebsites.net/

Deploying Your Web App (Azure Portal – GitHub)

The Azure portal also allows you to deploy your Web App in a number of ways, including from your GitHub repository.

For this tutorial, we will now look at deploying an ASP.NET app in a GitHub repository on Azure App Services (previously, we looked at direct deployment through Visual Studio).

Under App Services, select Add. Choose Web App.

clip_image028

clip_image030

Click Create.

clip_image032

Enter the name of your web app, and click Create.

clip_image034

You will then be able to access an overview of your App Service on the Azure portal. Under Deployment, go to Quickstart. Here, you will be able to choose which development stack to use to deploy your app.

For our tutorial, we select ASP.NET.

clip_image036

Here, we can automatically deploy our app (which in this case is an ASP.NET app) from GitHub (or any other such providers) by selecting Cloud Based Source Control.

clip_image038

As can be seen, we can select GitHub under Deployment Option.

clip_image040

Set up authorization for your GitHub account, then you will be able to choose the repository which you would like to deploy your web app from.

Conclusion

In this tutorial, we looked at using Bootstrap to create a simple proof-of-concept prototype for a web app, as well as how to host that web app on Azure App Services, either through Visual Studio or through the Azure portal. The variety of Azure services available for web apps and other applications allows for the creation of simple yet comprehensive solutions. Coupled with the comparatively low cost of these services, this makes it easy to deploy your own solutions in the cloud.

References

Learn more about Azure Web Apps:

https://docs.microsoft.com/en-us/azure/app-service-web/

Get your FREE Student Azure Account https://imagine.microsoft.com/en-us/Catalog/Product/99 

Develop in the cloud at no cost with Azure App Services, Notification Hubs, SQL database, and more.

Activate your Visual Studio Dev Essentials Account https://www.visualstudio.com/dev-essentials/

Free access to cloud services such as compute and storage, backend services for your mobile or web apps, services for IoT, machine learning, and analytics.
  • Azure credit ($300/year)*
  • Visual Studio Team Services account with five users*
  • App Service free tier
  • HockeyApp free tier
  • Application Insights free tier

Troubleshooting SCCM Database Replication Service (DRS)


Problem: SCCM Database Replication Service is not working

Solution: Service Broker had been disabled on MSDB, causing SCCM DRS to fail

I was recently called in to assist with a large System Center Configuration Manager (SCCM) environment where the Database Replication Service (DRS) was not processing messages. As a SQL Server engineer, I’ve supported customer environments running SCCM for several years now and have helped to troubleshoot DRS issues in the past, but it is certainly not something I do on a regular basis. We were able to resolve it pretty quickly, but this was a new issue (to me) and one that wasn’t easily discoverable through internet searching by the customer.

Step 1 – Run RLA (Replication Link Analyzer) to find any errors

Interestingly, RLA reported “Issues Detected”, but all of the steps succeeded and reported success.

Replication Link Analyzer

Step 2 – Run “spDiagDRS” to get a thorough description of DRS activity/components

When we ran spDiagDRS, I noticed that no messages had been processed in the past hour and that there were outgoing messages in the queue.

SPDiagDRS

Step 3 – Given these results, I decided to run Profiler on two of the endpoints to capture the Service Broker events.

SQL Server Profiler

Profiler immediately highlighted the issue: "The broker is disabled in msdb".

Step 4 – Check if Service Broker is enabled in MSDB

Sure enough, Service Broker had been disabled in MSDB. In this case, it had been disabled when the customer restored MSDB from a backup the previous evening. Since Service Broker is disabled upon restore by default, this was the root cause of the DRS issue.
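If you want to verify and correct this yourself, a check along the following lines should work. Note that re-enabling the broker requires exclusive access to msdb, so the WITH ROLLBACK IMMEDIATE option below is an assumption you may want to adjust for your environment:

-- is_broker_enabled = 1 means Service Broker is enabled, 0 means it is disabled
SELECT name, is_broker_enabled FROM sys.databases WHERE name = N'msdb';

-- Re-enable Service Broker in msdb (requires exclusive access to the database)
ALTER DATABASE msdb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;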

MSDB Properties

Step 5 – Run spDiagDRS again to verify success

After enabling Service Broker in MSDB, we ran spDiagDRS again and saw that the outgoing messages had been processed and everything was functioning as expected again.

SPDiagDRS SCCM

This one turned out to be a pretty easy investigation, but one that I hadn’t experienced before. If you end up stuck on DRS issues, the combo of spDiagDRS and Profiler capturing Service Broker events is a great start.

Sam Lester (MSFT)
