Category Archives: Tech

SQL Server’s Binary Collations

I was reading up on the database engine's Always Encrypted feature while preparing for the 70-473 exam, and bumped into these binary collations, which somehow I had never heard of before.

While configuring Always Encrypted, one of the choices you have to make is whether to use Deterministic or Randomized encryption (and you have to know the differences very well for the exam). One of the caveats of Deterministic encryption is that it has to use a column collation with a binary2 sort order for character columns. More specifically, the documentation states that Deterministic encryption requires a column to have one of the binary2 collations. If you use the SSMS Encrypt Columns wizard, it will convert your column collation into a binary2 case-sensitive collation.

These statements prompted me to investigate the topic of binary collations a little.

First of all, you may have different collation settings at the SQL Server instance level (i.e. on its system databases), at the database level, and at the level of specific columns and expressions.

To list all the collations available on your instance of SQL Server you can issue the following SQL statement:
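A minimal version of such a query (my sketch, which may differ from the exact statement in the original post) uses the built-in sys.fn_helpcollations() function:

-- List every collation supported by this instance / database
SELECT name, description
FROM sys.fn_helpcollations();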

On Azure SQL Database you will get back 3955 rows of possible collations. Understanding collation requires you to understand a set of related terms, such as collation, locale, code page and sort order. You should also know that there are three major sets of collations available to you:

  • Windows collations
  • Binary collations
  • SQL Server collations

These collation groups sort data differently. In the past my standard answer/explanation about Windows vs SQL collations was that the Windows ones are more frequently updated, more compatible and hence preferable over the SQL ones. Technically speaking it is more about how sorting works, but as per the MSFT documentation: “SQL Server supports a limited number (<80) of collations called SQL Server collations which were developed before SQL Server supported Windows collations. SQL Server collations are still supported for backward compatibility, but should not be used for new development work.” So what I had been saying/writing seems to be correct.

I won’t be covering all the details and differences related to these sets of collations, as I only want to focus here on binary collations, which are a requirement for Always Encrypted Deterministic encryption.

Binary collations sort data based on the sequence of coded values that are defined by the locale and data type. They are case sensitive. A binary collation in SQL Server defines the locale and the ANSI code page that is used. This enforces a binary sort order. Because they are relatively simple, binary collations help improve application performance. For non-Unicode data types, data comparisons are based on the code points that are defined in the ANSI code page. For Unicode data types, data comparisons are based on the Unicode code points. For binary collations on Unicode data types, the locale is not considered in data sorts. For example, Latin_1_General_BIN and Japanese_BIN yield identical sorting results when they are used on Unicode data.

There are two types of binary collations in SQL Server; the older BIN collations and the newer BIN2 collations. In a BIN2 collation all characters are sorted according to their code points. In a BIN collation only the first character is sorted according to the code point, and remaining characters are sorted according to their byte values. (Because the Intel platform is a little endian architecture, Unicode code characters are always stored byte-swapped.)

So for Always Encrypted Deterministic encryption, any of the collations returned by the query below will do:
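One way to get that list is to filter sys.fn_helpcollations() down to the BIN2 collations; a sketch of such a query:

-- List only the BIN2 collations (required for Always Encrypted deterministic encryption)
SELECT name, description
FROM sys.fn_helpcollations()
WHERE name LIKE '%[_]BIN2';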

This leaves you with 133 collations to choose from 🙂 Generally speaking, BIN and BIN2 collations use different sorting algorithms, and BIN2 is preferable in general, not only for Always Encrypted Deterministic encryption columns. Another interesting question is why we have BIN/BIN2 collations for different languages, like Arabic_BIN2, French_BIN2 and so on. The reason is that each of them uses a different code page for encoding characters stored in the varchar type, so the language part of the collation name matters for varchar data, which is encoded and sorted based on the selected code page; for nvarchar data it makes no difference.
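You can see those code pages directly with COLLATIONPROPERTY; a quick illustrative check (not from the original post):

-- The language part of a binary collation only changes the ANSI code page used for varchar data
SELECT COLLATIONPROPERTY('Latin1_General_BIN2', 'CodePage') AS Latin1GeneralCodePage, -- 1252
       COLLATIONPROPERTY('Arabic_BIN2', 'CodePage')         AS ArabicCodePage;        -- 1256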

All collations which are not binary collations are linguistic collations. For example, Latin1_General_CI_AS is a linguistic collation, and it uses a sorting algorithm compatible with English and many Western European languages. Don’t be confused by the name Latin1_General: it can actually sort all characters defined in the Unicode 3.2 character set, and it can also sort many other languages correctly (as long as the language has no sorting conflicts with the Latin1_General sorting rules).

Binary collations have better performance than linguistic collations, and that is the main advantage of using them. A binary collation is always case sensitive and accent sensitive. BIN2 collations are generally preferable to BIN collations.


How to enable TDE (SQL 2017/Azure SQL Database)

TDE is a SQL Server feature which encrypts your data at rest, i.e. your database files. When TDE is enabled, encryption of the database files is performed at the page level: the pages in an encrypted database are encrypted before they are written to disk and decrypted when read into memory. TDE does not increase the size of the encrypted database. Here is the TDE architecture diagram from the MSFT documentation:

Transparent Data Encryption Architecture

This blog post explains how to enable Transparent Data Encryption (TDE) for SQL Database (on-premise/Azure).

Scenario 1. On-premises SQL Server 2017 (this will also work for SQL Server in an Azure VM). You can use the following SQL script to enable TDE:
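A sketch of such a script is shown below; the certificate name MyTDECert and the password placeholder are illustrative and not taken from the original post:

USE master;
GO
-- Create a database master key in master, unless one already exists
IF NOT EXISTS (SELECT * FROM sys.symmetric_keys WHERE name = '##MS_DatabaseMasterKey##')
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>';
GO
-- Create a server certificate to protect the database encryption key (name is illustrative)
IF NOT EXISTS (SELECT * FROM sys.certificates WHERE name = 'MyTDECert')
    CREATE CERTIFICATE MyTDECert WITH SUBJECT = 'TDE DEK protection certificate';
GO
USE K2;  -- replace K2 with your target database name
GO
-- Create the database encryption key protected by the server certificate
IF NOT EXISTS (SELECT * FROM sys.dm_database_encryption_keys WHERE database_id = DB_ID('K2'))
    CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE MyTDECert;
GO
-- Finally, turn TDE on for the database
ALTER DATABASE K2 SET ENCRYPTION ON;
GO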

Be sure to replace ‘K2’ with your target database name and adjust the password value. The script uses IF clauses to avoid creating objects which already exist (these checks are missing from the sample script in the MSFT documentation). Once TDE is enabled you can confirm this in the database properties using the SSMS GUI:

Scenario 2. Azure SQL Database. The script mentioned above won’t work here. The easiest/default approach to enable TDE for an Azure SQL Database is to do it from the Azure Portal:

This approach is called service-managed transparent data encryption, and by default the database encryption key is protected by a built-in server certificate. All newly created SQL databases are encrypted by default using service-managed transparent data encryption.

The other approach is called Bring Your Own Key and requires the use of Azure Key Vault.

TDE can also be managed with PowerShell, Transact-SQL and the REST API. PowerShell has a number of cmdlets for that:

 

And using T-SQL you can use the ALTER DATABASE (Azure SQL Database) SET ENCRYPTION ON/OFF command (which encrypts or decrypts a database) and two dynamic management views:

  • sys.dm_database_encryption_keys, which returns information about the encryption state of a database and its associated database encryption keys
  • sys.dm_pdw_nodes_database_encryption_keys, which returns information about the encryption state of each data warehouse node and its associated database encryption keys

Once TDE has been enabled, there are also options to check whether it is enabled using T-SQL:
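For example, joining sys.databases to the encryption keys DMV shows the state of every database (encryption_state 3 means encrypted):

SELECT db.name,
       db.is_encrypted,
       dek.encryption_state,
       dek.key_algorithm,
       dek.key_length
FROM sys.databases AS db
LEFT JOIN sys.dm_database_encryption_keys AS dek
       ON db.database_id = dek.database_id;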

For further information refer to official MSFT documentation:

Transparent Data Encryption (TDE)

The SQL Server Security Blog on TDE with FAQ


How to: Connect to Azure SQL from Visual Studio

This specific day has already been described by some as GDPRmageddon, based on the amount of email people are receiving from every company they have ever dealt with about policy updates and so on. I’m not going to talk about that today; instead I decided to write this tiny post on how to connect to Azure SQL from Visual Studio.

Actually, the short answer is just to fire up Visual Studio and select View > SQL Server Object Explorer; the rest is a “follow the wizard” thing, and it is all documented by MSFT of course. But I’m quite aware of the widespread allergy to official documentation (no matter how good it is), and I also wanted to try this myself for the very first time recently, which is how this post came about.

QUESTION: How to connect to Azure SQL from Visual Studio?

ANSWER:

1) First things first. Assuming you don’t have Visual Studio installed on your machine, you should download and install it. There is a free version named Visual Studio Community which can be downloaded here: https://www.visualstudio.com/vs/community

2) Once the tiny web installer has downloaded, run it and click Continue to kick off the installation process:

The installer will start fetching data from the internet before installing components on your machine:

3) Once the data has downloaded, you can opt for the default installation, which requires 597 MB of disk space:

4) Once the installation is complete, Visual Studio will suggest that you Sign Up or Sign In to use additional services, but you can skip that by clicking the “Not now, maybe later” link:

5) Of course you must select the Dark theme, otherwise nothing will work, and then click the Start Visual Studio button 🙂

Of course other themes work too, but you would look suspicious in a developer crowd 🙂

6) Once Visual Studio has opened, you are supposed to go to View > SQL Server Object Explorer, and… if you followed the previous steps exactly and selected the default Visual Studio installation configuration, SQL Server Object Explorer won’t be available in the View menu:

7) To address this, re-run the Visual Studio installer, open the Individual components tab and select the SQL Server Data Tools component – it will automatically select a bunch of dependencies, upping the disk space requirement from 597 MB to 2.52 GB:

8) Once the installation is complete, re-run Visual Studio and click SQL Server Object Explorer in the View menu:

9) Once it has opened, right-click the SQL Server node in the SQL Server Object Explorer tree and select the “Add SQL Server…” option:

10) This starts the Connect wizard, where you can specify your Azure SQL server name and credentials (yes, even for Azure SQL Database you connect through a “SQL server”, which is a logical entity you must have in order to connect to your Azure SQL database(s), and which should not be confused with SQL Server in an Azure VM 🙂 ):

 

11) Both SSMS and Visual Studio are smart enough to detect that your server name and credentials are good but you don’t have a firewall rule created to allow the connection; if you provide your Azure credentials, these applications can create the required firewall rule for you:

To provide credentials, click “Add an account…” under Azure account as shown above. This opens the standard Microsoft credentials prompt:

After you have provided valid credentials, you should be able to click OK in the previous window to create the firewall rule:

If you navigate to the Azure portal, you will see your public IP added to the Azure SQL server firewall rules there.

12) With valid credentials and the firewall rule in place, you will be able to connect, browse through your databases and objects, and execute SQL queries more or less in the same way as in SSMS:

I hope it is now clear how to connect to Azure SQL from Visual Studio 🙂 But in case you still have any questions, just leave them in the comments.


Exam Prep Resources for Microsoft Azure 70-473 Design and Implement Cloud Data Platform Solutions

I’m currently preparing for the 70-473 Design and Implement Cloud Data Platform Solutions exam, so I’ve decided to compile a list of resources which may be useful in preparing for it. I’m going to extend it with additional materials as I keep working on my preparation, and I hope it may be useful to other test takers.

As with any MSFT exam, your starting point has to be the MSFT exam description page, which contains a rundown of all exam topics as well as links to additional resources, so here it is – Exam 70-473 Designing and Implementing Cloud Data Platform Solutions. Keep in mind that although this exam was released in December 2015, it is updated quarterly, so once in a while you need to check the exam page to see whether any new topics have been added. At the moment the last update to this exam was made in June 2017, and the changes are explained in the exam 70-473 change document.

Paid resources:

70-473 Cloud Data Platform Solutions course by SoftwareArchitect.ca – this is an affordable ($25) online course which I bought and used during my preparation – a good overview of all concepts at a fair price, and when I searched it was the only 70-473-specific course from online training vendors that I was able to find. The author goes through all the “skills measured” topics as stated in the exam description. What I dislike about this course is the amount of typos and little issues such as mismatches between the numbering and naming of videos in the course navigation pane and inside the videos themselves. One video is even inserted/listed twice. So I would describe it as a lack of QA/editing. My other complaint would be the lack of hands-on demos; there are some in the course, but I wanted more. 🙂 Only after completing the course did I find that it is also available on Udemy, where it was priced at $9.99 with a discount when I checked – so check both locations and compare prices if you want to try it.

Free resources and video recordings:

Certification Exam Overview: 70-473: Designing and Implementing Cloud Data Platform Solutions MVA course

Cert Exam Prep: Exam 70-473: Cloud Data Platform Solutions – exam overview video by MCT James Herring

The second link is a YouTube video. Both links cover more or less the same material delivered by the same person, yet the YouTube session seems to have newer slides and they are not absolutely identical – so watch both of them.

Channel 9 – Keeping Sensitive Data Secure with Always Encrypted

YouTube – Secure your data in Azure SQL Database and SQL Data Warehouse

MSFT documentation:

Resolving Transact-SQL differences during migration to SQL Database

This article covers things which work in SQL queries run against on-prem SQL Server but won’t work when run against Azure SQL DB. For example, one thing you will probably discover very quickly is that the USE statement is not supported.

Azure SQL Database – Controlling and granting database access

This article explains unrestricted administrative accounts, server-level administrative roles, non-administrator users and “access paths”.

Sizes for Windows virtual machines in Azure

General purpose virtual machine sizes

High performance compute VM sizes

You may expect questions around VM sizing based on given requirements, so you need to remember which series support premium storage and which do not, along with some other things you can learn from the articles above.

Securing your SQL Database

Always Encrypted (Database Engine)

Always Encrypted Wizard

This article explains two very important things you should be aware of: key storage options and Always Encrypted terms.

SQL Database dynamic data masking

Azure Blog – Microsoft Azure SQL Database provides unparalleled data security in the cloud with Always Encrypted

Azure SQL has loads of security features and you are supposed to know them all 🙂 At least when to use each, along with their requirements and limitations.

Azure Cosmos DB: SQL API getting started tutorial

Get started with Azure Table storage and the Azure Cosmos DB Table API using .NET

ADO.NET Overview


Simple walkthrough: Using K2 Database Consolidation Tool

The purpose of this blog post is to outline the K2 database consolidation process using the K2 Database Consolidation Tool.

When might you need it? For older K2 deployments, where the initial installer used to create 14 separate databases instead of the single “K2” database we expect to see with current K2 versions. Even after upgrades to newer versions, such environments carry on with these 14 databases; only starting from K2 4.7 is database consolidation enforced, so you cannot upgrade until you consolidate your databases into one. This means you can still see non-consolidated K2 databases in environments running any version of K2 up to and including 4.6.11.

To consolidate these 14 K2 databases into one, you need to obtain the appropriate version of the K2 Database Consolidation Tool from K2 support. Below are the basic steps for performing K2 database consolidation using this tool.

1) First we need to check the collation of your existing K2 databases. This is necessary because the consolidation tool won’t handle conversions from one locale to another, and consolidation will fail. You can run a script like the one below to see the collation of your non-consolidated K2 DBs:
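A query along these lines will do (the filter below simply skips the system databases; narrow it down to your actual K2 database names):

SELECT name, collation_name
FROM sys.databases
WHERE database_id > 4
ORDER BY name;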

As you can see in the screenshot below, the output of this script shows that my non-consolidated databases have the Ukrainian_100_CI_AS collation:

2) Make sure that your target SQL Server instance has the same collation as your databases, either via the GUI:

or script:
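For example:

-- Returns the collation of the SQL Server instance itself
SELECT SERVERPROPERTY('Collation') AS ServerCollation;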

Then copy your non-consolidated databases to the target server which will host the consolidated database (unless it is the same server that was hosting them initially).

3) Obtain the K2 Database Consolidation Tool from K2 support, extract it on the SQL server which hosts your K2 databases and launch SourceCode.Database.Consolidator.exe. Once you start it you will be presented with the following UI:

4) It will detect your non-consolidated K2 DBs (<No Instance> in the Instance drop-down means that you are connecting to the default, not a named, SQL Server instance), and here you need to select your target DB – just select <New Database>, specify the “Create Database Name” (I’m using the default name used by the K2 installer, which is K2) and click Create:

5) Once you click Create, the K2 database will be created with the same collation as your SQL Server instance (the target DB will contain all the required tables and structure but no data), and the Start button becomes available so that you can start the consolidation process:

6) Before clicking Start, make sure your K2 service is stopped. Even though we have just created our target “K2” database, we still get a warning that all data in the target DB will be truncated, and we have to click Yes to start the consolidation process:

Once you have clicked Yes, you will have to wait a while until consolidation completes (at the bottom of the tool window, in its status line, you will see the operations currently being performed during the consolidation process). The time necessary to complete this process depends heavily on your server performance and the volume of data in your source databases.

In some scenarios (e.g. the source and destination collations have different locale IDs, or you moved the source databases to another SQL server without re-creating their master key) the consolidation process may fail, leaving your non-consolidated databases in a read-only state:

In such a scenario you need to review the consolidation log to identify and address the errors. Once done, switch your source databases back to RW mode (as explained here), delete your target database and start again from step (3). Note that even when consolidation completes successfully, the source non-consolidated databases also stay in read-only mode.

If consolidation completes without errors, you will get a message confirming this and also informing you that the ReconfigureServer.ps1 script has been created:

You can also click the Log Directory link, which opens the consolidation log file location – as usual, open the log and make sure that neither ‘Logged Warning’ nor ‘Logged Error’ can be found anywhere in it beyond the Legend section at the beginning.

7) From the directory which contains the K2 Database Consolidation Tool, take the ReconfigureServer.ps1 script and copy it over to your K2 server. This script fires off the K2 blackpearl Setup Manager, instructing it to connect to your new consolidated DB:

Here is the script code which you can copy/paste:

Once you run this script on the K2 server, it starts the K2 Setup Manager, where you need to go through all pages of the “Configure K2 blackpearl” process:

On the database configuration step of the wizard you will see that, thanks to the PS script, we are already targeting our new consolidated DB:

Once the reconfiguration process completes (without errors or warnings), you can start testing how your K2 environment behaves after the consolidation.


When code and operations collide :)

I’ve just seen a CBT Nuggets video on YouTube entitled “How to Transition to DevOps”, and though I cancelled my subscription quite some time ago, it sparked my interest and made it very tempting to subscribe again (if not for my financial and time budget constraints).

I really like expressive quotes and explanations which use analogy, and one from this video that I really liked can be found below. Along with some basic theory on what DevOps is and how to approach it, Shawn Powers shows a little demo of how to use a Chef recipe for configuration management, followed by this conclusion:

“…configuration automation is awesome example of how DevOps is kind of taking two different worlds the world of installing packages and uploading files and code which allows us to programmatically solve problems and put them together kind of like peanut butter and chocolate goes together to make a Reese’s Cup and it’s you know awesome it’s better than the sum of its parts…”

Nice. And I also need to try these Reese’s Peanut Butter Cups now, even if that slightly violates a healthy diet 🙂 I think they go well with coffee and IT training videos (if consumed in limited amounts).

I just looked at the DevOps courses available at CBT Nuggets, and though there seems to be no general DevOps overview course so far, they already have courses on specific tools (Puppet, Chef, Docker, Ansible).

 


SQL script to attach detached non-consolidated K2 DBs

I keep playing with SQL and non-consolidated K2 DBs. In a previous post I covered bringing “these 14” back online; now I’ve realized that another case where SSMS requires way too many clicks is attaching “these 14” back (say, after you rebuild your SQL instance’s system DBs to change the instance collation).

A quick Google search turned up a relevant question on dba.stackexchange.com, from which I took a script that generates CREATE DATABASE … FOR ATTACH statements for all existing user databases. Next, with my 14 non-consolidated K2 DBs, I generated the following script to attach them back in bulk:
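A generator along these lines (a sketch based on sys.master_files, not the exact dba.stackexchange.com answer) produces one CREATE DATABASE … FOR ATTACH statement per user database, each listing that database’s data and log files:

-- Build a CREATE DATABASE ... FOR ATTACH statement for every user database
SELECT 'CREATE DATABASE [' + d.name + '] ON ' +
       STUFF((SELECT ', (FILENAME = ''' + mf.physical_name + ''')'
              FROM sys.master_files AS mf
              WHERE mf.database_id = d.database_id
              ORDER BY mf.file_id
              FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, '')
       + ' FOR ATTACH;' AS attach_statement
FROM sys.databases AS d
WHERE d.database_id > 4;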

You can run this generator script while your K2 databases are still attached; if they have already been detached, just replace the paths in the generated statements and execute them to attach the databases quickly.


SQL Script to switch all currently RO databases to RW mode

I was doing some testing of the K2 database consolidation process, which required me to re-run it more than once. Unfortunately, the K2 Database Consolidation Tool leaves all databases in read-only mode if something fails during consolidation. If you remember, K2 used to have 14 separate databases before the consolidated DB was introduced (see the picture below).

Typing 14 statements manually to bring all these databases back to read-write mode is a bit time consuming, so I came up with the following script:
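A sketch of that script – it simply generates one ALTER DATABASE statement per read-only database:

-- Output an ALTER DATABASE ... SET READ_WRITE statement for every read-only database
SELECT 'ALTER DATABASE [' + name + '] SET READ_WRITE;' AS rw_statement
FROM sys.databases
WHERE is_read_only = 1;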

Essentially it selects all databases currently in the RO state and outputs a bunch of statements to bring them all back to the RW state:

Just copy-paste this script output into new query window of SSMS and press F5 🙂

It may be useful once in a while (if not for this specific use case, then as an example of generating repetitive statements which embed SELECT results).


K2 5.0 unable to read CORS settings from SmO when using the Workflow REST API in JavaScript

If you are trying to use the K2 Workflow REST API from JavaScript (as described in the product documentation), you may see the issue described below (and you may want to upgrade to 5.1 to resolve it 😉 ).

You have CORS settings configured correctly for the domain which hosts your JavaScript, i.e. you have settings similar to these:

Workflow REST API Settings

The screenshot above assumes that your JS resides within the js.denallix.com domain; upon attempting to execute the JS code you will get errors.

In Chrome you will get the following error:

Failed to load https://k2.denallix.com/Api/Workflow/preview/workflows/?_=1523444398270: Response to preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘https://js.denallix.com’ is therefore not allowed access. The response had HTTP status code 400.

IE will also give you an error, but a less clear one:

SCRIPT7002: XMLHttpRequest: Network Error 0x80070005, Access is denied.

Here is a screenshot of the error message from the Chrome browser:

And here is what you can see in Fiddler:

In case you want to reproduce this, you may use the sample code from the K2 documentation which returns a list of workflows either owned by or startable for the supplied user credentials.

So you would expect the CORS settings configured for the Workflow REST API to ensure that this works fine, but it does not. What’s wrong here?

If you enable Workflow API logging, you can see the following in the log:

w3wp.exe Warning: 0 : Failed to retrieve CORS settings.
System.InvalidOperationException: Failed to determine user principal name
at SourceCode.Forms.AppFramework.ConnectionClass.HandleIdentityImpersonation(Boolean asAppPool, Action action)
at SourceCode.Forms.AppFramework.ConnectionClass.TryCredentialToken(BaseAPIConnection connection, String credentialToken, Boolean asAppPool)
at SourceCode.Forms.AppFramework.ConnectionClass.GetPoolConnection(Boolean asAppPool, Boolean& tokenApplied, String& credentialToken)
at SourceCode.Forms.AppFramework.ConnectionClass.Connect(BaseAPI baseAPI, Boolean asAppPool)
at SourceCode.Web.Api.Common.DataContexts.K2DataContext.EnsureConnectionIsOpen[T](T api)
at SourceCode.Web.Api.Common.DataContexts.K2DataContext.GetCorsSettings()

This means that when you send the request, the Workflow API tries to retrieve the CORS settings from the SmartObject. When it does that, it makes a connection to the host server. For some reason the connection is failing with the error “Failed to determine user principal name”.

Because of this exception the CORS settings are not retrieved, the list of allowed origins is empty on the web API side, and this leads to the error mentioned above (The response had HTTP status code 400). In K2 5.0 something in the stack is not correctly parsing/decoding the authentication credentials coming from the AJAX call, so the identity isn’t recognized, causing the connection to fail.

If you are still on K2 Five (knowing how many people run older versions, I’m not very comfortable with this wording 🙂 ), your workaround for this issue is to remove the authorization header from the AJAX call and let the browser prompt you for the username and password. Here is sample HTML code for that (essentially we just remove var username and var password in $.ajaxSetup):

But in case you are running the new and shiny 5.1, or are ready to upgrade your 5.0 environment to this version, it will work just fine there without the need for any workarounds.

And here is the link to the official KB covering the same issue: “Failed to load….No Access Control Allow Origin.”


Reading list: K2 Authentication and Authorization

This is a list of links to K2 documentation covering K2 Authentication and Authorization topics, in case you have some time to read something for fun 🙂

Authentication

Authentication and Authorization in K2

Claims-based Authentication in K2

Outbound Authorization and OAuth in K2

About K2Trust

Troubleshooting Claims-based Authentication Issues

Identity and Data Security in K2 Cloud for SharePoint

SharePoint Hybrid, Multiple Identity Providers & K2

AAD Multi-Factor Authentication Considerations

Enabling AAD Multi-Factor Authentication Requires Changes in K2 4.7

Authentication Modes

Authentication (in Management)

Integrating with Salesforce

Azure Active Directory Management (Read/Write to AAD)

Claims and OAuth Configuration for SharePoint 2013

Standard SmartForms Authentication

Multi-Authentication Providers

Consolidation to Multi-Auth

IIS Authentication

Authorization

Authorization Framework Overview

Outbound Authorization and OAuth in K2

REST Broker

Resources for Working with the REST Service Broker

REST Swagger File Reference Format

REST Broker and Swagger Descriptor Overview (video)

Endpoints REST Service Type

OData Broker

Using the OData Service Broker (including Serialization and Deserialization)

Endpoints OData Service Type

Workflow and SmartObject APIs

APIs (in Management)

Configuring the Workflow REST API

Configuring the SmartObject OData API

How to Use the K2 Workflow REST Feed with Microsoft Flow to Redirect a K2 Task

How to Use the K2 Workflow REST Feed with Microsoft Flow to Start a Workflow

How to: Use the K2 OData Feed with Microsoft Excel

How to: Use the K2 OData Feed with Microsoft Power BI

Facebooktwittergoogle_plusredditpinterestlinkedinmail