Latest K2 versions and IE8 support

For customers using K2 smartforms it is no revelation that IE8 is not fully supported. In fact, IE8 support has been limited to end-user runtime execution only – K2 smartforms runtime, K2 workspace (Home Page, Reports Runtime, Worklist and Single Sign-On) and K2 web parts – since K2 smartforms 1.0. And IE8 support was dropped entirely starting from K2 smartforms 4.6.9. It seems it was high time to do so, as investment in maintaining compatibility with a piece of software originally released back in March 2009 does not seem justified.

IE8 About

Those shops where, for one reason or another, IE8 is still being used will be interested in the freshly published K2 KB article entitled “Known issues when running Forms or the Forms Viewer web part in Internet Explorer 8 or IE8 Compatibility mode”, which gives a detailed overview of the issues you may encounter when using the latest versions of K2 smartforms (4.6.9, 4.6.10) with IE8.

K2 4.6.10 released

So yesterday (24/06/2015) the K2 4.6.10 release was revealed to the general public, and it is now available to all K2 clients in the K2 blackpearl downloads section at portal.k2.com. You may familiarize yourself with the product release notes (KB001700), which contain consolidated information about all parts of the K2 product suite, in contrast with the former practice of having separate release notes for individual components (blackpearl, smartforms, control pack etc.). The K2 blackpearl 4.6 Compatibility Matrix has been updated too.

This is the second release which uses unified versioning for all components, and I still think this change has a tremendous impact in simplifying life both for clients and for K2 support. The former versioning system was a bit messy and not only less consumer friendly, but also a source of frequent confusion and of cases where incompatible or poorly compatible components were mixed in one environment. Now it’s plain and simple: you have to have version 4.6.10 for all of your components. Period. I really like this – nice and simple, as it should be. :)

K2 4.6.10 Prerequisites


Signed first edition of Seveneves

Last Friday I received my signed first-edition copy of Neal Stephenson’s latest book, Seveneves, delivered from Barnes & Noble. I was almost in the middle of reading it on Kindle, as I obviously got the ebook faster (both were preordered before release). The pleasure was somewhat spoiled by the fact that the nice paper book was damaged in transit – it looks like the package was exposed to water, and the book I got is not in perfect condition, to say the least :(

Package

Some pictures can be found below.

Front cover:

Seveneves 01 Front cover

Neal’s signature:
Seveneves 02 Neal's Signature

Picture of Izzy on the back of front cover:
Seveneves 03 Picture behind the front cover

SIDE NOTE/IRRELEVANT DETAILS: By the way, once I reached the picture above I spent some time pondering which word describes this part of the book, and ended up referring to it as “pictures on the back of the book cover”, posting a question on english.stackexchange.com – “Which word can I use to refer to pictures on the backside of the book covers?” – in parallel. The question was answered, and it seems that the proper word here is “endpaper” :) Though this term doesn’t imply that this part of the book has an illustration on it, so it is necessary to use something like “front endpaper illustration” and “back endpaper illustration”.

Picture of something else on the back of the back cover:
Seveneves 04 Picture behind the back cover

Book spine:
Seveneves 05 book spine

I will refrain from any comments on the book itself until I am done with my reading, but I will definitely write something afterwards.

K2 blackpearl best practices

Using K2 management APIs within process solutions

I’m currently reading a splendid piece of writing entitled “K2 blackpearl Best Practices“, which, despite being published a very long time ago, still has not lost its relevance. I just wanted to note one of the points contained in this document:

Refrain from using K2 management APIs within process solutions as the use of these requires that the identity of the user executing the code has administration rights. In particular this includes the management APIs contained within the SourceCode.Workflow.Management, SourceCode.ManagementAPI, SourceCode.SmartObjects.Services.Management, and SourceCode.SmartObjects.Services.SmartBox.Management assemblies, but any assembly with ‘Management’ in the name typically requires permissions on the server that a typical user will not have. Occasionally use of these APIs is required but it should be kept to a minimum.

I’m highlighting this as an important point because I have seen cases where K2 clients tried to use these APIs and expected them to work without administrative rights. Just jotting this down along with the primary source which mentions this caveat.

Installing SQL Server instance for K2 blackpearl

One of the prerequisites for K2 is a SQL Server instance, and in this blog post I am going to walk you through the process of setting up this important part of your K2 deployment.

As with any other product, before rushing into installation you should take your time and do some planning. A good starting point is to familiarize yourself with the prerequisites and check the K2 blackpearl Compatibility Matrix; in case installation of K2 smartforms is also on your list, do not fail to check the K2 smartforms Compatibility Matrix as well. As we are talking about the SQL Server part, we have a special interest in which SQL Server versions are compatible with K2 – you can find this in the Microsoft SQL Server section of the K2 blackpearl 4.6 Compatibility Matrix.

If you have the luxury of choosing your SQL Server version (as a Software Assurance subscriber, maybe), I would always recommend going for the latest version of SQL Server officially supported by K2, which, starting from 4.6.8, is SQL Server 2014 RTM. It is strange to see Azure SQL Server mentioned in the K2 compatibility matrix without a single check-mark in the respective column. Another thing of note is R2 editions: whenever you see K2 supporting both the R2 and non-R2 versions of a Microsoft product, you should realize that while both are supported, most of the testing is done on the R2 versions, so those are preferable to use.

Why use the latest supported version of SQL Server? Well, this is just common sense: you get access to the newest features and avoid the pain of a forced migration due to the end of the support cycle for an older SQL Server version, whether that comes from the Microsoft side, from K2, or is dictated by your corporate standards.

Irrespective of the environment type you are going to build, I assume that you are going for the dedicated SQL Server machine option. Even for tests you want an environment which is close to the real world, and SQL Server, as a resource-intensive application, is always placed on a separate machine.

With SQL Server, especially if we are talking about a non-clustered single-server installation, you can get away with a “spousal” installation (one where you always say YES/NEXT/OK), but there are some caveats related to K2. K2 requires a very specific collation, and you want to make sure that you select it for your SQL Server instance during the setup process.

Basically, if you check the Technology Requirements section of the K2 blackpearl Installation and Configuration Guide, you will find the Microsoft SQL Server & Reporting Services requirements, where it is stated that Latin1_General_CI_AS is the required collation. Once again, quoting the documentation:

Case sensitive databases are NOT supported.
The following collation setting is required: Latin1_General_CI_AS

Do not fail to interpret the word required correctly: using any other collation for your K2 database effectively means running an unsupported configuration. This page does not mention that you have to have this collation at the SQL Server instance level, but this is a requirement too, and the K2 documentation is expected to be updated to reflect this.
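You can verify both collations from the command line before (and after) running K2 Setup Manager; a quick check with sqlcmd (the instance and database names below are just examples for a named instance called MSSQLK2):

```shell
:: Server-level collation of the instance - must return Latin1_General_CI_AS
sqlcmd -S .\MSSQLK2 -Q "SELECT SERVERPROPERTY('Collation') AS ServerCollation"

:: Collation of an existing database (e.g. the K2 database, if already created)
sqlcmd -S .\MSSQLK2 -Q "SELECT DATABASEPROPERTYEX('K2', 'Collation') AS DbCollation"
```

Both queries should return Latin1_General_CI_AS; if the first one doesn’t, fix the instance before letting K2 create its database.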

There is a way to change the SQL Server collation without reinstalling SQL Server, but all it does is rebuild the system databases with the new collation. And if you run K2 Setup Manager and it creates your K2 database with the wrong collation (whichever was configured on your SQL Server instance at the time of installation), there is no easy way to change this. You will have to re-create your database with the new collation and move all your data into it – this is not a supported operation, and also not something you can do easily in a few clicks.

So the most important point is to have your K2 SQL Server instance configured with the required collation before you start the K2 installation, as K2 Setup Manager will automatically create your K2 database using the collation configured for your SQL Server instance. Note that there is no way to tell K2 Setup Manager to use a collation different from the one set on the SQL Server instance (this is for a reason – K2 extensively uses the tempdb database, which inherits its collation from the SQL Server instance, and the two have to match), and even if your selected SQL Server instance has an unsupported collation, Setup Manager won’t warn you about it.

I am not going to cover each and every step of the SQL Server installation here, only those which are important. First, I would recommend that you create a named instance for K2. Yes, it means that you will have to specify the instance name in addition to the SQL Server name, whereas with the default instance you can get away with the server name only. But having a meaningful instance name is more convenient, especially when you have multiple instances under your management, and given the fact that server names in enterprise environments normally follow some weird naming conventions which are not always obvious and don’t tell you much about what a box supports at the application level. So you may want to go for a named instance:

SQL Server Creating Named Instance

After that comes the important collation settings step, which you don’t want to miss or leave unverified. Remember, for K2 we have to use only one collation – Latin1_General_CI_AS. See the respective SQL Server installation wizard screens below.

Once you reach the Server Configuration stage, where you need to specify service accounts, don’t forget to switch to the Collation tab and configure it to use the K2-required collation. When you switch to the Collation tab, you will see that by default some collation is already selected, and this selection is based on your base OS language settings. We need to change this by clicking the Customize button.

SQL Server Collation 01
You should also note that the K2-required collation does not have the “SQL_” prefix. This means that K2 requires the use of a Windows collation, so once you have clicked the Customize button on the previous screen, you have to make the following selections:

SQL Server Collation 02
Once everything is selected as in the picture above, click OK, and you have set your SQL Server instance to use the Latin1_General_CI_AS collation:

SQL Server Collation 03
Why a Windows collation instead of a SQL collation? “SQL_” collations use SQL Server’s own proprietary code pages, whereas Windows collations are based on the Windows OS code pages. Windows keeps its collations up to date more often, and compatibility with client applications is better. Hence the best practice is to use Windows collations, and K2 follows this best practice.

After this you may proceed towards completion of your SQL Server installation.
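For what it’s worth, the same key choices (named instance plus the required collation) can be baked into an unattended, command-line installation of SQL Server 2014. A hedged sketch, run from the installation media root in an elevated prompt; the instance name, accounts and password are placeholders you must substitute:

```shell
:: MSSQLK2, DOMAIN\sqlsvc and DOMAIN\K2Admins are placeholders - use your own values.
:: /SQLCOLLATION pins the instance collation so K2 Setup Manager creates the K2
:: database with the supported collation later on.
setup.exe /Q /ACTION=Install /FEATURES=SQLEngine ^
  /INSTANCENAME=MSSQLK2 ^
  /SQLCOLLATION=Latin1_General_CI_AS ^
  /SQLSVCACCOUNT="DOMAIN\sqlsvc" /SQLSVCPASSWORD="<password>" ^
  /SQLSYSADMINACCOUNTS="DOMAIN\K2Admins" ^
  /IACCEPTSQLSERVERLICENSETERMS
```

The advantage of scripting this is that the collation can never be “left unverified” – it is stated explicitly in the command.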

Assuming you installed your SQL Server instance on a separate box, you have to configure Windows Firewall rules to allow external connectivity, so that the K2 server can access this SQL Server instance over the network. This involves configuring three rules in Windows Firewall.

First of all, you have to take note of the random TCP port which was assigned to your SQL Server instance during the installation phase. To do this, go to SQL Server Configuration Manager > SQL Server Network Configuration > Protocols for your SQL Server instance (MSSQLK2 in this example) > TCP/IP and select Properties:

SQL Check Random Instance Port

Take note of the TCP Dynamic Ports value – you will need it to create one of the required Windows Firewall rules.
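The same dynamic port value can also be read straight from the registry without opening Configuration Manager; for example, for a SQL Server 2014 named instance MSSQLK2 (adjust the MSSQL12.* key to your instance name):

```shell
:: TcpDynamicPorts under the IPAll key holds the port the instance is currently listening on
reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.MSSQLK2\MSSQLServer\SuperSocketNetLib\Tcp\IPAll" /v TcpDynamicPorts
```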

First you have to create a rule for your instance executable:

SQL Firewall Rules 1 Instance Executable

In this inbound rule you allow a program, specifying the path to your SQL Server instance executable – sqlservr.exe. For SQL Server 2014 the default location is: “%ProgramFiles%\Microsoft SQL Server\MSSQL12.YOUR_INSTANCE_NAME\MSSQL\Binn”.

The second rule allows inbound access to your instance’s TCP port, which you noted earlier:

SQL Firewall Rules 2 Random TCP Instance Port

And the third rule you need to create allows inbound access to the SQL Server Browser service, which by default uses UDP port 1434:

SQL Firewall Rules 3 SQL Server Browser UDP 1434
This is it for installing a SQL Server instance for K2. Additionally, I would recommend creating a SQL Server alias on the K2 server, which will enable you to connect using a short, nice and meaningful alias. And in case you have to change the SQL Server machine or instance name, it will be absolutely transparent to your K2 installation – you will only need to reconfigure your alias properties, nothing more. Apart from this, you may want to adjust/verify the SQL Server memory allocation settings, and you are ready to go.
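The three firewall rules above can also be scripted with netsh instead of clicking through the Windows Firewall UI; a sketch, assuming instance MSSQLK2 and dynamic port 49172 – substitute the executable path and the port you noted earlier:

```shell
:: Rule 1 - allow the SQL Server instance executable
netsh advfirewall firewall add rule name="SQL Server (MSSQLK2)" dir=in action=allow program="C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLK2\MSSQL\Binn\sqlservr.exe" enable=yes

:: Rule 2 - allow the instance TCP port noted earlier
netsh advfirewall firewall add rule name="SQL Server TCP 49172" dir=in action=allow protocol=TCP localport=49172

:: Rule 3 - allow SQL Server Browser, which listens on UDP 1434 by default
netsh advfirewall firewall add rule name="SQL Server Browser UDP 1434" dir=in action=allow protocol=UDP localport=1434
```

Scripting the rules is handy when you are building several environments and want the firewall configuration to be repeatable.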

Once I get some time I will write up a guide on installing a SQL Server cluster for K2, but as you can see, some of the aforementioned recommendations are universal for SQL Server instance installation in general, and applicable, for example, when you are installing a SQL Server instance for SharePoint (don’t forget that it has its own SQL Server version compatibility and collation requirements, though) or any other application.

If you want to know more about SQL Server 2014 installation in general, including such an interesting option as installing it on Server Core, refer to the Install SQL Server 2014 documentation on TechNet. I guess this is a desirable option for lab environments as it saves resources, and in the case of a production deployment it also means fewer resources and a smaller attack surface for your SQL Server box.

Microsoft TechDay 2015 – Windows Server vNext

Some notes from Microsoft TechDay 2015 dedicated to Windows Server vNext, which took place yesterday at the Nagatino-Club co-working space. Note these are just quick and crude notes accompanied by some pictures from the event, nothing more.
UPDATE (25.06.2015): Video recordings of this event have been published and can be found on Channel 9 – Channel 9 – Microsoft TechDay 2015.
01 - vNext Event Banner
Empty hall before beginning:
02 - Hall before beginning
Just a picture taken out of the window of the Nagatino-Club coworking space. If you look carefully enough, you may see the towers of the Moscow City commercial district under the red arrow. :) I used to work there for a while, and IMHO it has a striking similarity to Canary Wharf in London, though you won’t notice this looking at the picture below.
03

Session 1 – What’s new in Hyper-V (Mikhail Komarov, Hyper-V MVP)

Virtual TPM, which allows the use of BitLocker in VMs. Though it’s a bit unclear how useful it is if we can steal the VM files with the vTPM included inside. Just an extra password/key?
2012 R2 allowed capping of IOPS per disk (setting a max allowed number) for a VM; 2016 allows doing this per VM. We can also do this via policy at the failover cluster level. MS calls this Storage Quality of Service (think of network QoS – now you have similar functionality for the storage subsystem).
A cluster which uses 2012 R2 can be upgraded to 2016 CTP without downtime, though this may not be the case for RTM. This is something interesting to try in a lab environment.
Better handling of storage subsystem failures – VMs are not moved immediately to another cluster node, but rather wait to see whether the delay was actually down to network latency (temporary NIC or switch overload, for example). So the overall tendency is to maximize the use of commodity hardware, building redundant and resilient systems out of it with a set of Windows Server technologies. Potentially you can save on expensive servers and SAN devices, and pay for commodity hardware and Microsoft licenses – in certain scenarios it will be significantly cheaper. There was certain skepticism from some community members as to whether such a solution provides data safety and recoverability of the same grade as expensive hardware storage systems (e.g. devices from EMC or IBM).
VHDX can be resized interactively.
Memory can be added without VM downtime, and the same is possible for vNICs (MS implemented this based on high demand from Hyper-V technical community).
Snapshots. Though they are not recommended for most production scenarios, people use them anyway. MS improved the snapshot process in response to this with the “production checkpoint” option, which uses VSS (and some other techniques for Linux guests) to make snapshots more appropriate for production VMs.
Ability to connect with PowerShell into guest OS running inside of VM without need for network connectivity.
Hyper-V backup evolution. New architecture; change tracking is now included in the platform and does not require the file-system-level filters which were used before.
Hyper-V platform enhancements. New Hypervisor power management modes.
RemoteFX improvements. OpenGL 4.4 and OpenCL 1.1 API support. Larger size of video RAM which can be allocated and configured.
SLAT is a requirement for new Hyper-V both for client and server versions.
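The “connect with PowerShell into the guest OS without network connectivity” point above is the feature Microsoft later shipped as PowerShell Direct; a minimal sketch, run from the Hyper-V host itself (the VM name is a placeholder):

```powershell
# Run on the Hyper-V host; communication goes over the VM bus,
# so no guest networking or firewall configuration is required.
Enter-PSSession -VMName "TestVM" -Credential (Get-Credential)

# Or run a single command in the guest non-interactively:
Invoke-Command -VMName "TestVM" -Credential (Get-Credential) -ScriptBlock { Get-Service }
```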
Session 2 – Nano Server:  future of Windows Server starts now (Georgiy Gadzhiev)
Session 2 Gadzhiev 1
Problems: numerous reboots, large server OS images, high demand for resources. The answer – Nano Server. Sounds like something we already heard with the introduction of Server Core? Nano Server takes this idea one step further, leaving only the core and what is necessary for a particular role/workload, i.e. not only is the GUI removed, but all unnecessary roles and features are removed too, giving you a sort of container for one particular role only.
So 2008 introduced the Core/Full Server idea, 2012 added the Minimal Server Interface, and 2016 brings Nano Server.
Because of the small size of Nano Server VMs, they move faster between cluster nodes.
Azure/Cloud Platform System
Nano Server allowed the removal of previously tightly-coupled components. It includes ASP.NET Core. Now we have containers for running apps which are isolated from the base OS in a way somewhat similar to what we have with App-V containers.
We can configure Nano Server with PowerShell Desired State Configuration. Remote management with Core PowerShell and WMI. It can be integrated into DevOps tool set.
PowerShell replaces local management tools for Nano Server.
Support for cloud apps: Win32 subset; CoreCLR, PaaS & ASP.NET 5
Development. Windows SDK and Visual Studio 2015 oriented towards Nano Server. Toolset can be loaded from VS collection.
Reverse Forwarders / Backward Compatibility. Nano Server can forward calls to old/removed DLLs to their available substitutes (reverse forwarders), and a call can be processed successfully if it lies within the Nano Server SDK.
An even smaller number of updates, combined with the ability to avoid a Nano Server restart by restarting individual components.
Nano Server in Windows Server vNext. The distribution contains a NanoServer folder; there is no option for it in setup. Drivers have to be added using DISM. We can import the Server Core drivers package into Nano Server.
Important trend: the move to DevOps affects infrastructure specialists more and more; they now have to serve developers by allowing rapid provisioning of containers and providing them with self-service, and inevitably they have to learn how to code to some extent.
Session 3 – Containers in Windows Server (Mikhail Voitko)
The Docker project is the open-source world’s interpretation of the containers idea.
Data center evolution: hardware servers, with a separate server for each workload; next, virtualization arrived, allowing consolidation of workloads within one hardware server, and SAN and network virtualization entered the scene; on top of this, the cloud idea came to fruition, with publicly available services (IaaS, PaaS, SaaS) we can use without knowledge of what is actually happening on the back end.
One of use cases for containers – DevOps.
The Docker project has existed since around 2008. The execution environment for containers can be Windows Server or Linux.
Container images allow for code injection into an image, which can then be saved into an image repository.
Container execution environment:
Old model:
Physical node
OS
Virtualization layer
Guest OSs
Apps inside of VMs
New one (containers):
Physical node
OS
Containers – apps within container
Virtualization layer
Guest OSs
Apps inside of VMs
Apps for containers can be commercially distributed.
Container provisioning is very quick and it makes it useful for development scenarios.
Cloud integration enables storing repository and/or containers in the cloud.
So the main idea: a container is a sort of middle ground between a full OS and a VM – it has a VM’s isolation, but better portability and speed of provisioning. For the IT pro it is just an extra option in addition to virtualization, with faster provisioning times. Some software has so many separate parts that allocating dedicated VMs for all of them is not justified, and containers can be good for this use case.
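To make the container model above concrete, here is what working with containers looks like through Docker’s CLI today (a generic Linux example; the image name is arbitrary):

```shell
# Pull an image from the repository and start an isolated container from it in seconds
docker pull nginx
docker run -d --name web -p 8080:80 nginx

# The container runs isolated from the host, but shares the host kernel
docker ps

# Tear-down is just as fast - a key difference from provisioning a full VM
docker stop web && docker rm web
```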
Containers use cases: application testing, resource management and isolation on container level, rapid provisioning, tiering with containers as additional layer with final target to improve stability; distributed processing, scalability, web apps.
Container OS environments:
Server Core / traditional apps
Nano Server / cloud
Container Windows Server
Container Hyper-V
Container management: Docker, PowerShell, Other
Microservices structure.
PID contains part which identifies container.
Session 4 – Storage Replica in Windows Server vNext (Mikhail Komarov, Hyper-V MVP)
Volume level replication
Block level volume replication
Synchronous replication/Asynchronous replication
Transport SMB 3.1.1
Geo-stretched cluster
Server-Server
DFSR replicates at the file level and is unable to replicate files locked by apps; block-level replication ignores app locks.
Synchronous replication uses a journal on the source and destination servers, for which you can allocate an SSD drive. The network can be a bottleneck.
Asynchronous can work over slow network.
SMB 3.1.1
Scalability and performance
SMB multichannel – allows use of RDMA adapters which cannot be teamed
SMB Direct (RDMA)
Requirements:
Datacenter edition
AD
Drives: GPT, not MBR. SAS JBOD arrays are supported. Windows allows the use of storage tiering, placing frequently accessed data on SSD drives within the JBOD array.
Performance factors for synchronous replication:
Packet switching latency
Journal (flash drives – SSD, NVRAM)
Storage Replica is not:
a solution for “shared nothing” cluster
a solution for back up
substitute for DFSR
PowerShell module for Storage Replica.
Azure Site Recovery is supported for Storage Replica.
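For reference, the server-to-server scenario from the notes above is driven by the Storage Replica PowerShell module mentioned in the session; a hedged sketch (server names, volumes and replication group names are placeholders):

```powershell
# Validate the proposed topology first (network, journal and volume checks)
Test-SRTopology -SourceComputerName "SR-SRV01" -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
    -DestinationComputerName "SR-SRV02" -DestinationVolumeName "D:" -DestinationLogVolumeName "E:" `
    -DurationInMinutes 10 -ResultPath "C:\Temp"

# Create the block-level replication partnership between the two servers;
# E: is the journal (log) volume - an SSD is recommended for it, as noted above
New-SRPartnership -SourceComputerName "SR-SRV01" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
    -DestinationComputerName "SR-SRV02" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "E:"
```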
Session 5 – Storage System based on servers with local disks (Sergey Gruzdov)
Session 5 Gruzdov 2
Storage Spaces Direct
Works only in an AD DS environment
Simple scalability by adding new nodes
Minimum 4, maximum 12 nodes
Demo steps – creating a cluster with Storage Spaces Direct:
test-cluster cmdlet
Minimum requirement – 2 available disks apart from system disk
new-cluster cmdlet: specify name, nodes and static IP
get-storageenclosure
(get-cluster clustername).DASModeEnabled=1
(get-cluster clustername).DASModeOptimization=1
Get-StorageSubsystem -name clustername | get-physicaldisk |? CanPool -eq $true
New-StoragePool -StorageSubSystemName clustername -friendlyname poolname -writecachesizedefault 0
New-Volume
ReFS – speed and efficiency
Effective checkpoints and backups for VMs
Quick fixed disk creation (VHD/VHDX)
In ReFS we operate with metadata, and this speeds up certain operations (snapshot deletion and fixed disk creation become almost instant operations).
Session 6 – RDP enhancements (Dmitry Razbornov)
Session 6 Razbornov 1
GPU virtualization:
Software GPU WARP
vGPU RemoteFX
RemoteFX vGPU evolution started from technology acquisition from Calista Technologies.
vNext:
OpenGL and OpenCL API
1 GB of dedicated video memory independently of monitor and resolution
4K support
Compliance with industry standards and recommendations
Codec investments – H.264/AVC
QHD (2560×1440), 4K (3840×2160)
Hardware offloading for H.264/AVC available on different devices
The problem here is that this codec is built for video, and static picture/text quality can suffer. This has been improved.
An updated RDP client (v10) on Windows 8 is not enough; you need Windows 10 to use the vGPU RemoteFX improvements.

70-689 exam – another attempt

So last weekend I failed my second attempt at the 70-689 exam. I knew that I was slightly missing the mark with my preparation, but tried to make it before the 31st of May to participate in that “Step up to Windows 10” challenge. I failed, and it means that the 697 exam won’t be free of charge for me. I guess something like one more week of preparation would have been enough for me to pass. How can you know in advance that you are not ready for an exam? Let’s say you have some preparation questions/tests (I used one from MeasureUp) and your results in exam simulation mode look something like this:

70-689 MeasureUP 01

This is a clear sign that you are not ready – you want to have 100% here to be on the safe side. :)

So my results from the first and second attempts look as follows:

70-689

Exam attempt 2-2

So I improved my results (from 781 to 836 in the Configure domain, and from 630 to 665 in the Support domain), and I am pleased with that, but alas, I again slightly missed the passing threshold of 700 for the Support domain. :( Anyhow, I will make it eventually, I know :)