You know, sometimes the need to create 10 groups for a quick test is enough of a reason to fire up Windows PowerShell ISE and compose a PS script instead of clicking through ADUC… Below you can find a little script to create any number of AD DS groups you want; thanks to its compactness, it may also serve you as an example of implementing a WHILE loop in PowerShell, so I'll just leave it here.
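A minimal sketch of such a script is below; the group name prefix and the target OU are my assumptions, so adjust them to your environment before running it:

```powershell
# Create a given number of test AD DS groups in a WHILE loop.
# Requires the ActiveDirectory module (Install-WindowsFeature RSAT-AD-PowerShell).
Import-Module ActiveDirectory

$GroupsToCreate = 10                              # how many groups you need
$TargetOU = "OU=TestGroups,DC=contoso,DC=com"     # assumed OU - change to yours
$i = 1

while ($i -le $GroupsToCreate) {
    New-ADGroup -Name "TestGroup$i" `
                -GroupScope Global `
                -GroupCategory Security `
                -Path $TargetOU
    $i++
}
```

Change `$GroupsToCreate` to get as many groups as you need; the WHILE loop is all there is to it.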
In certain scenarios (for example, when you have changed your K2 administrative accounts) you may see the following error when trying to add or remove an Environment Field in the Environment Library:
This may happen even for a user who has been assigned the K2 Administrator role in Setup Manager, when custom security was configured on the Environment Library and it didn't include this specific account.
To resolve this (provided you have an account with administrative rights), just look into the Security settings available under the list of variables when you navigate to Environment Library > %Environment Library Name%:
Just add the required user, assigning them Modify rights, to resolve this issue.
It used to be somewhat confusing with two mobile apps (K2 Workspace and K2 Mobile) for two platforms (iOS and Android), but the recently updated K2 Mobile Applications help landing page makes things clear right off the bat, making it easy for you to navigate to the right information:
Really good job by the K2 documentation team 🙂 I can really see that the product documentation keeps getting better and easier to use.
You may observe the following error on Windows Server 2016 immediately after OS startup:
This “has stopped working” part tells us that some unhandled exception occurred, so we can switch over to Event Viewer to find some more details about it:
Exception details are the following:
Faulting application name: svchost.exe_CDPUserSvc_65df7, version: 10.0.14393.0, time stamp: 0x57899b1c
Faulting module name: cdp.dll, version: 10.0.14393.1715, time stamp: 0x59b0d38c
Exception code: 0xc0000005
Fault offset: 0x0000000000193cf5
Faulting process id: 0x1b14
After some quick research I found out that this error was introduced by certain Microsoft updates, and to resolve it on Windows Server 2016 build 14393.1884 you just need to apply another update 🙂 More specifically, you need to install KB4053579, which can be downloaded from the Microsoft Update Catalog. Applying this update resolves the error.
Recently I was doing an installation of K2 5.2 on Azure VMs, with a SQL Server named instance hosted on a separate Azure VM. I created a SQL Server alias on the K2 VM but then ran into an issue – neither K2 Setup Manager nor SSMS was able to connect to SQL through the alias. I next tried a direct connection via server\instance name, which also failed. SSMS showed me the following error:
I first focused on network connectivity between the VMs:
- Confirmed that I could ping the SQL Server VM from the K2 Server VM
- Confirmed that no firewall was enabled on the VMs, and that both Azure VMs were on the same network with nothing blocking connectivity between them
- Tried to use telnet to test port 1433 – it failed
This is what kept me focused on the network connectivity layer a bit longer than necessary. But after confirming with netstat -na | find "1433" that SQL Server was not listening on port 1433, it became quite clear that the focus should be on SQL Server configuration. First of all, by default a named instance listens on a dynamic port, and you actually need the SQL Server Browser service running to be able to connect to a named instance without specifying a port while dynamic ports are in use. But in my case it was not that, as the SQL Server configuration explicitly specified a custom port (SQL Server Configuration Manager > Protocols for %INSTANCE_NAME% > TCP/IP Properties > TCP Dynamic Ports – if you have anything other than 0 in the IPAll section for this setting, you are not using dynamic ports). When your problem is dynamic ports combined with a disabled SQL Server Browser service, the error message from SSMS looks as follows:
As you can see, the error message explicitly tells you "Error Locating Server/Instance Specified". To fix this, either set TCP Dynamic Ports to 0 and enable the SQL Server Browser service, or specify some port number there. You are sort of choosing your dependency here – either the Browser service (which may fail to start) or a custom port (which may be hijacked by another service). The Browser service seems to be the better approach.
So in my case I was confused by expecting the named instance to listen on the default port, which was, to put it simply, a wrong expectation. Here is how you can check which port your instance is listening on:
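Assuming you can connect to the instance locally (e.g. via SSMS on the SQL Server box itself), a couple of T-SQL options for finding the listening port are sketched below:

```sql
-- Option 1: ask the instance about its current TCP connections
-- (requires at least one existing TCP connection to the instance)
SELECT DISTINCT local_tcp_port
FROM sys.dm_exec_connections
WHERE local_tcp_port IS NOT NULL;

-- Option 2: search the current SQL Server error log for the startup
-- messages that state which port(s) the instance is listening on
EXEC sys.xp_readerrorlog 0, 1, N'Server is listening on';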
But obviously, having access to the SQL Server, you can also get this data from SQL Server Configuration Manager: SQL Server Configuration Manager > Protocols for %INSTANCE_NAME% > TCP/IP Properties. Just keep in mind that you need to check the TCP Dynamic Ports value both for the specific address and for the IPAll section. But like I said, in my case the problem was not about ports. Once I found the instance port, I noticed that I still could not connect to it using telnet, simply because the IP address was not enabled in SQL Server Configuration Manager > Protocols for %INSTANCE_NAME% > TCP/IP Properties (meaning it had Enabled=0). I corrected that, and the telnet connectivity test succeeded.
Still, when I got back to SSMS I was getting the same error – "Could not open a connection to SQL Server. Microsoft SQL Server, Error: 53". The reason? With SQL Server 2016 and the latest versions of SQL Server, I keep forgetting that even the latest and greatest version of SSMS still reads alias settings from the x86 registry hive (meaning you need to configure the SQL alias using cliconfg.exe from C:\Windows\SysWOW64) – I have a hard time getting used to it. Interestingly, a completely missing x86 alias triggers the error message "Could not open a connection to SQL Server. Microsoft SQL Server, Error: 53", while one configured with a non-existent server or instance name will give you "SQL Network Interfaces, error: 26 – Error Locating Server/Instance Specified".
Anyhow your key takeaways from this post should be:
- Know your instance port
- Make sure that IP address is enabled
- We still need to configure the alias twice (x86/x64) to avoid unpleasant surprises from apps reading settings from the non-configured location
I hope this post saves someone some troubleshooting time.
I guess I'm a bit late for writing posts of the "looking back at 2018" and "new year resolutions for 2019" type, as through the relevant time period I was busy migrating my blog from a premium shared hosting provider to cloud hosting. The reason for the move was the former provider's inflexibility with payment options (I was OK with the high price tag but not with their desire to receive it all upfront). The migration process involved some silly mistakes and forced me to learn WordPress internals, but I finally managed to resolve all the issues and get my blog up and running (now with HTTPS 🙂).
I also keep writing blog posts for the StarWind Blog; the most recent one was about SharePoint 2019 installation. But something which may qualify as the bigger of my NY resolutions for 2019 is a new blog about K2 which I'm going to do completely in Spanish. I don't plan to put a huge amount of content there very fast, and I will probably also be translating some of my old K2-related posts into Spanish. You can already bookmark the new site address – k2bpm.es – and stay tuned for new posts, which will arrive as soon as I write them 🙂
Recently I bumped into a problem which was super obvious in retrospect, yet took me some time to untangle. A K2 environment was upgraded from 4.6.11 to 4.7, and the K2 installation path was changed in the process (drive letter). After the upgrade completed without warnings or errors, we did some more testing and found that one of the forms using an Oracle Service Instance based SmartObject started to throw an error similar to this one:
Essentially, it was very clear from the error message that the Oracle service instance kept looking for the related assembly in the old installation location (wrong drive letter). We switched to the SmartObjects Services Tool, only to see that we were unable to edit or create a new service instance of this service type there. At this point I looked at old cases mentioning a similar error message, and a surprisingly large number of them proposed workarounds and things not quite related to the root cause. We spent some time addressing a missing prerequisite for this service type – 64-bit Oracle Data Access Components (ODAC) of the required version, as mentioned in the 4.7 user guide (_) – checking some related settings, and so on.
But next I paid attention to the fact that the environment had 2 service types for Oracle: one of them was working, while the other was not. I then dropped the assembly mentioned in the error message into the old installation location and restarted the K2 service – this fixed the first Oracle service instance but broke the other one: it started to say that the assembly SourceCode.SmartObjects.Services.Oracle.dll had already been loaded from another location. This brought my focus back to the real problem – somehow one of the Oracle service types had not been updated by K2 Setup Manager to use the new installation path. Probably it was somehow "custom" and was skipped by the installer because of that. Anyhow, my next step was finding where this path is defined. As soon as I confirmed that I cannot see/edit the Service Type definition XML from the SmartObjects Services Tool, I switched to the K2 database to check it there.
A necessary word of warning: back up your K2 database before attempting any direct manipulations in it, and make sure you understand what you are doing before you start 🙂
Service type definitions live in the [SmartBroker].[ServiceType] table, so I located the "problematic" service type to check its XML, which is stored in the ServiceTypeXML column. Here is a sample query to quickly search for a service type definition based on its Display Name:
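A sketch of such a query is below; since the display name lives inside the definition XML, I search the XML itself rather than guessing at other column names ('Oracle' is a placeholder for your service type's display name):

```sql
-- Find a service type whose definition XML contains the given display name
SELECT *
FROM [SmartBroker].[ServiceType]
WHERE CAST([ServiceTypeXML] AS nvarchar(max)) LIKE N'%Oracle%';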
That will return you the XML column value, which you can click to view as formatted XML; here is an example of how it looks:
As you can easily see, the service type definition contains an assembly path parameter in its XML. So now it is only a question of updating it with the correct value. Here is a sample script to do that:
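A sketch of such an update is below; both paths are placeholders from my scenario (old D: drive, new C: drive), so substitute the exact old and new assembly paths from your own definition XML:

```sql
-- Replace the old assembly path with the new one inside the service type
-- definition XML (the paths below are placeholders - use your own values)
UPDATE [SmartBroker].[ServiceType]
SET [ServiceTypeXML] = CAST(
        REPLACE(
            CAST([ServiceTypeXML] AS nvarchar(max)),
            N'D:\Program Files (x86)\K2 blackpearl\ServiceBroker\SourceCode.SmartObjects.Services.Oracle.dll',
            N'C:\Program Files (x86)\K2 blackpearl\ServiceBroker\SourceCode.SmartObjects.Services.Oracle.dll'
        ) AS xml)
WHERE CAST([ServiceTypeXML] AS nvarchar(max))
      LIKE N'%D:\Program Files (x86)\K2 blackpearl\ServiceBroker\SourceCode.SmartObjects.Services.Oracle.dll%';
```

Restart the K2 service after the update so that the corrected path is picked up.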
That will iron out the problem with the misbehaving service type. I don't think this can be a very frequent problem, as normally the installer updates all assembly path definitions with the new path. But, especially if you have some custom service type, you may want to scan your service type definitions for any vestiges of the old installation path. Here is a sample script which will display all service type definitions containing an old drive letter reference (my example uses "D:\%" as the search criteria):
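A sketch of such a scan query is below, with "D:\" standing in for whatever old drive letter or path fragment you are hunting for:

```sql
-- List all service type definitions still referencing the old drive letter
SELECT *
FROM [SmartBroker].[ServiceType]
WHERE CAST([ServiceTypeXML] AS nvarchar(max)) LIKE N'%D:\%';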
I hope that this blog post helps someone who bumps into a similar error in K2, and if not, then maybe you can make use of the SQL script samples, which demonstrate filtering based on values within XML columns.
P.S. Note that all scripts mentioned above are for K2 4.7. In K2 Five (5.x) structure of the [SmartBroker].[ServiceType] table has been changed – it no longer has XML column named [ServiceTypeXML] and assembly path is stored in dedicated text column [AssemblyLocation] instead.
Sometimes, while looking at somebody else's AD DS environment, you may want to know some basics about it – things such as the total number of users, or in which OU a specific server is hiding. What surprises me a lot is how frequently people tell you that they don't have the right consoles on this server (while they are just one PoSh line away from all they need), or that they are not sure they have permissions (which they usually do have). If you are lucky, you just spend some time waiting for the person to switch over to some other machine or directly to a DC (yes, to a DC, just because the ADUC console lives there 🙂), or in other cases you will be dragged through multiple redirects/additions of people to the call, only to end up explaining to the final person in that chain the exact steps to perform to get your questions answered (which you were perfectly able to do yourself in the first place, without switching servers and involving other people).
Unless you already got it, it is preferable and faster to do yourself a favor: comfortably stay on the server where you are working, issue Install-WindowsFeature RSAT-AD-PowerShell to solve the missing tools problem in 20 seconds, and then use PoSh to get your questions answered. Here is a sample PS function, which I named similarly to CHKDSK (a thing of which I have very fond memories ever since I used it to help a classmate repair his HDD back in the days of 1–2 GB hard drives and Windows 95) – ADDSCHK:
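A minimal sketch of what such a helper could look like is below (my own take on it – the parameter name and output format are assumptions):

```powershell
# ADDSCHK - a quick AD DS "check" helper: basic object counts plus an
# optional lookup of where a specific computer lives in the directory.
# Requires the ActiveDirectory module (Install-WindowsFeature RSAT-AD-PowerShell).
Import-Module ActiveDirectory

function ADDSCHK {
    param(
        # Optional: name of a computer whose OU/DN you want to find
        [string]$ComputerName
    )

    # Basic environment stats
    $users     = (Get-ADUser -Filter *).Count
    $computers = (Get-ADComputer -Filter *).Count
    $groups    = (Get-ADGroup -Filter *).Count
    Write-Host "Users: $users | Computers: $computers | Groups: $groups"

    # In which OU is this specific server hiding?
    if ($ComputerName) {
        Get-ADComputer -Identity $ComputerName |
            Select-Object Name, DistinguishedName
    }
}
```

Usage: `ADDSCHK` for the counts alone, or `ADDSCHK -ComputerName "SERVER01"` to also see the server's distinguished name.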
In a world where an increasing number of people do not hone their "I can do this in N ways" skills (and sometimes even "I understand how it works"), you are frequently better off speaking PoSh with the infrastructure directly than with those entrusted to keep it up and running 🙂
Normally, I write the “technical how to” type of articles, but this one will be more of a product review/introduction (though, I think even with this format we can go into technical details 😊). Relatively recently, StarWind released a free tool which allows you to measure latency and bandwidth of RDMA connections (pay attention to conjunction “and” here) and to do this in heterogeneous environments (meaning that you can measure Windows – Linux RDMA connection bandwidth and latency). This utility is called rPerf and can be downloaded for free from StarWind website. To download it, you will need to fill in a little form with some of your data, but that’s not much to pay for a good tool, right?
I will allow myself to write a little bit about what RDMA is, so that we are clear on what we are going to measure with this utility 😊 (though this technology is a huge topic in its own right, which calls for a lot of reading to fully understand). Next, we will touch a little on what rPerf can do for you, and even more briefly on how to use it (just because it is straightforward and easy).
What is RDMA? RDMA, or Remote Direct Memory Access, is a technology which enables direct access from the memory of one computer to that of another, bypassing the OS data buffers of both computers (meaning it all happens at the hardware level through device drivers). That type of access gives you high-throughput, low-latency networking, which is something you really need for massively parallel computing clusters. RDMA-enabled data transfers do not add extra load on CPUs, caches or context switches, allowing your data transfers to continue in parallel with other system tasks. An example of a practical use case may be Hyper-V live migration; there is a YouTube video from Mellanox demonstrating a comparison of live migration performance with RDMA vs. TCP (and it shows an impressive 29 seconds vs. 2 hours result).
RDMA read and write requests are delivered directly to the network, allowing for fast message transfer and reduced latency, but this also introduces certain problems of one-sided communications, where the target node is not notified about the completion of a request (you may want to read up more on this to really understand the technology).
How can you get it? RDMA implementations require both hardware (NIC) and software support (API and driver support), and currently different varieties of RDMA implementations exist: Virtual Interface Architecture, RoCE (RDMA over Converged Ethernet), InfiniBand, Omni-Path and iWARP.
All in all, you will most likely find RDMA capability in high-end servers (you need to make sure you have a NIC supporting RDMA – something from Broadcom, Cavium or Mellanox Technologies) and in HPC types of Microsoft Azure VMs (H16r, H16mr, A8 and A9, as well as some N-series sizes with "r" in their name).
What can you do with rPerf? You can measure RDMA link performance between RDMA-enabled hosts. The rPerf tool is a CLI utility which has to be run on both machines: one of them running as a server and the other as a client. On the machine which you run as a client, you specify the number of read iterations, buffer size and queue depth to start testing, and once the test completes you get throughput in MiB/s and kIOPS, along with latency information in microseconds (minimum/maximum/average).
I've already mentioned that one of the strong points of this tool is its ability to work cross-platform. OS-wise, it supports Windows 7/Server 2012 or newer, CentOS 7 and Ubuntu. Windows-based OSes must have Network Direct Provider v1 and lossless RDMA configured. Keep in mind that the latest drivers from the NIC manufacturer are recommended, as the standard Windows drivers don't have ND API support. In the case of a Linux-based OS, you will need the latest network drivers with RDMA and RoCE support.
All the command switches you need are well documented in the technical paper dedicated to this tool on the StarWind site, so I won't dwell on that; I would say the best thing is to try the tool in your RDMA-enabled environments.
Having real numbers comes in really handy when you set up your cluster and need to determine which mix of technologies gives you the best latency, when you need to verify whether your setup meets the requirements of your workload or the demands outlined by an application vendor, or (and this is the most frequently forgotten thing) when you need to establish baseline performance numbers for your environment, so you can compare against them once your setup receives a higher load or when service consumers report degraded performance. With rPerf, you can solve at least one part of writing your performance baseline documentation. Having firm numbers for RDMA connection performance also serves well for verifying/auditing RDMA connection performance in any other scenario, and with rPerf you can do it with one simple cross-platform tool.
Somehow I kept forgetting this thing frequently enough to expend some effort on writing it down 🙂
At times, when you are troubleshooting something in K2, you need to identify a process having only the process instance ID, and frequently knowledge of the solution and its workflows is the missing part (the developer is away on vacation or, in the worst-case scenario, nobody even knows whether there was a developer in the first place 🙂). As a sample scenario, you can think of troubleshooting a failed process escalation, or a process instance which is stuck in the Running state.
Let's look at this in more detail. For a failed escalation you will definitely have an error in the K2 host server log and an entry in the K2 Server.Async table – that will give you the ProcInstID value, and your next steps are to find out: A) which process this instance belongs to, and B) the status of this instance. Finding (B), at least if your process is in an Error state, is easy, as it is supposed to be listed in the Error Profiles view, where you can retry the error and also see the Process Instance ID and process name.
But in case your instance is not listed in the Error Profiles view, or, let's say, you are going step by step before jumping into Error Profiles, you still have 2 options to get the Process Name from the process instance ID:
(1) Using Workflow Reporting SmartObjects. You can use the Process Instance SmartObject (Workflow Reports > Workflow General > Process Instance) to get a list of process instances – you just feed the ProcInstID to it to get back the ProcessSetID:
The Process Set ID, in turn, can be fed to the Process Overview SmartObject (Workflow Reports > Workflow General > Process Overview), which will give you the Process Name:
(2) Querying the K2 database (in case you are already in SSMS and too lazy to switch over to K2 Server/Tester Tool 🙂). Here is a SQL query you need to run:
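A sketch of such a query is below; the table and column names are my assumptions based on a 4.7-era K2 schema, so verify them against your own K2 database before relying on the result:

```sql
-- Map a process instance ID to its process name
-- (table/column names are assumptions - check them in your K2 database)
DECLARE @ProcInstID int = 1234;  -- your process instance ID

SELECT pi.[ID]    AS ProcInstID,
       pi.[Status],
       p.[Name]   AS ProcessName,
       ps.[Name]  AS ProcessSetName
FROM [Server].[ProcInst] AS pi
JOIN [Server].[Proc]     AS p  ON pi.[ProcID]    = p.[ID]
JOIN [Server].[ProcSet]  AS ps ON p.[ProcSetID]  = ps.[ID]
WHERE pi.[ID] = @ProcInstID;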