Initialization failed before PreInit: Unable to establish a secure connection with the Active Directory server

The other day I had a support case where a temporary outage of the AD DS infrastructure caused K2 Workspace to enter an error state, throwing the following error:

“An error has occurred.
Please contact your administrator.
Error:
Initialization failed before PreInit: Unable to establish a secure connection with the Active Directory server.
Possible causes
– the ADConnectionString in the K2 Workspace web.config may have an incorrect LDAP path.
– the physical connection to the Active Directory Server might be down.
– please review log files for more information.”

Just for lazy readers and those in a hurry: bumped into the error above? Try recycling the application pool which runs your K2 Workspace (the default application pool name is “K2”).
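If you prefer not to click through IIS Manager, a minimal PowerShell sketch of that recycle (assuming the default pool name “K2”; older servers can use appcmd instead):

    Import-Module WebAdministration
    # Recycle the application pool hosting K2 Workspace (adjust the name if yours differs)
    Restart-WebAppPool -Name "K2"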

The tricky thing here is that it is really easy to miss a short period of AD outage and start “fixing” K2 instead. But if this is an environment which used to work and you are sure that no changes were made to the K2 configuration recently, then it is just an issue caused by the AD DS outage.

When K2 Workspace is loaded it attempts to establish a connection with AD as the application pool account. If there is an issue with accessing AD under this account, it leads to the above-mentioned error. What can be wrong with this account? It can be disabled or locked out in AD, but also after an AD DS outage it may be necessary to restart the K2 Workspace application pool to force it to reconnect to AD DS. The interesting thing here is that a lot of people reach for the big hammer immediately, i.e. iisreset, and in my experience it sometimes does not fix this issue, leaving you wondering why an IIS reset does not help whereas just restarting the K2 Workspace application pool does.

In an attempt to remove any confusion you may want to read up a bit on iisreset VS recycling – a good explanation of this can be found here. Your main takeaway from that post should be an understanding of IIS architecture and its 3 main components:

[Diagram: IIS architecture. Image source – IIS7 For Non IIS PFEs]

The three components are the following:

  1. HTTP.SYS (runs in Kernel Mode). This component is responsible for client connection management, routing requests from browsers, and managing the response cache.
  2. Worker Processes (run in User Mode). If you look at the picture above, you may notice that we can also have a so-called Web Garden, which is nothing more than an application pool allowed to use more than one worker process, by setting “Maximum number of worker processes” to a value higher than 1 (see the sketch after this list). The Web Garden feature has been designed for one purpose: “Offering applications that are not CPU-bound but execute long running requests the ability to scale and not use up all threads available in the worker process.” Leaving Web Gardens aside, each Application Pool has one specific worker process (W3wp.exe) within which it runs. The worker process serves static content, such as HTML/GIF/JPG files, and runs dynamic content, such as ASP/ASP.NET applications. Therefore, the status of the W3WP process (= Application Pool) is critical for the performance and stability of web applications and web sites.
  3. IIS Admin Services (run in User Mode). Prior to IIS7 there was the IISADMIN service, which hosted the IIS 6.0 configuration compatibility component (metabase). The metabase is required to run IIS 6.0 administrative scripts, SMTP, and FTP. Starting from IIS7 we have the Windows Process Activation Service (WAS), which manages application pool configuration and worker processes instead of the WWW Service. This enables you to use the same configuration and process model for HTTP and non-HTTP sites.
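A quick PowerShell sketch of turning an application pool into a Web Garden (the pool name “K2” is just an example; one worker process remains the sensible default for most workloads):

    Import-Module WebAdministration
    # Allow up to 2 worker processes for the pool, i.e. make it a Web Garden
    Set-ItemProperty "IIS:\AppPools\K2" -Name processModel.maxProcesses -Value 2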

OK, it seems I went into too much detail here, so let me get back to the main topic: the main thing for you to know is what actually happens when you execute iisreset. It restarts the IIS services (all of them), and for most of us this is exactly what we expect – which is what may make you wonder why an IIS reset does not fix an issue when a specific application pool restart does. Sounds strange…

I would venture to suggest that iisreset may sometimes fail to restart some specific w3wp processes, but after spending a couple of hours searching the web and doing a couple of quick tests, this does not seem to be the case. What I can say, based on the above-mentioned article, is that you should prefer an Application Pool recycle anyway.

On a side note, you should also be aware of a couple of iisreset switches, namely /status and /noforce.

Output of iisreset /status will look as follows:

[Screenshot: iisreset /status output]

It gives you the current status of all IIS services, as well as showing what exactly will be restarted by iisreset.

The /noforce switch prevents the server from forcefully stopping worker processes. This can make the IIS reset slower but more graceful. With this switch it is a compromise between lowering downtime and being less disruptive to what is already running.

And just to confirm: iisreset executed without any switches is the same as iisreset /restart.

Getting back to the K2 Workspace issue mentioned at the very beginning of this article, my advice is to try to recycle your K2 Workspace application pool – it is a preferable and less disruptive action than iisreset. When you recycle an application pool, IIS creates a new worker process (keeping the old one) to serve requests and then moves all requests onto the new process. This is known as “overlapped recycling” as opposed to “process recycling”, and it is the default behavior for all IIS application pools.

In case this did not help you to resolve the “Initialization failed before PreInit: Unable to establish a secure connection with the Active Directory server” error in K2 Workspace, below are some K2-side checks to do. Make sure that:

  1. The K2 Workspace site is running in IIS Manager (not Stopped).
  2. The Application Pool designated to run this site and the applications therein is running as well. If it is not running, the service account running the K2 Workspace application pool may be locked out in Active Directory.
  3. The Workspace Application Pool account has at least read access in AD for the newly added domain (in case you added any) or for the one you always had. When Workspace is loaded it attempts to establish the connection with AD as the application pool account.
  4. Try including the domain controller name and LDAP port number in the LDAP connection string (first example below), OR
  5. If you continue to get the same error, try using the Distinguished Name format for the domain instead (second example below).
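Illustrative ADConnectionString values – the DC name, port and domain below are made up, so substitute your own:

    LDAP://dc01.contoso.com:389
    LDAP://DC=contoso,DC=com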

If after checking all these things the issue still persists, consider enabling TracingPath in the Workspace web.config to get more detailed debug output for the PreInit error.

 


24404 Authentication with server failed when connecting from WorkflowManagmentServer to the WorkflowClient

Recently I had an interesting support case where we spent way too much time investigating a problem which turned out to be a simple one as soon as we figured it out 🙂 It was the typical case where it was difficult to see the forest for the trees, as the K2 environment we dealt with was quite complex and involved F5 NLBs, so it was easy to be distracted by all this complexity and blame the issue on environment configuration problems.

Anyhow, the main symptom was that a custom application built on top of the K2 platform, which worked just fine in K2 4.6.6, started to fail immediately after the environment was upgraded to 4.6.11. Specifically, the application started to throw the following exception when calling the ReleaseWorklistItem method:

“2025 Error Marshalling SourceCode.Workflow.Runtime.Management.WorkflowManagementHostServer.ReleaseWorklistItem, 24404 Authentication with server failed for %K2_Service_Account% with Message: AcceptSecurityContext failed: The logon attempt failed”

We were a bit distracted in the beginning by the NLBs and environment complexity (which, I should admit, was designed and managed remarkably well), but in the end the root cause was isolated to the way the K2 connection string was configured. Let's assume the app connection string is configured as follows:
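An illustrative version of such a string – the host, port and account names are made up; the combination of keys is what matters:

    Integrated=True;IsPrimaryLogin=True;Authenticate=True;EncryptedPassword=False;Host=k2server.contoso.com;Port=5555;UserID=CONTOSO\K2AppUser;Password=***;WindowsDomain=CONTOSO;UseIntegratedWFManagement=True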

If we look at this connection string carefully, it does not seem correct: we indicate the use of integrated authentication for WF Management but at the same time provide explicit credentials. And indeed, as soon as we remove the credentials or set UseIntegratedWFManagement to false, the app starts working in 4.6.11. The thing is that such a connection string works just fine in K2 4.6.6 – 4.6.10 but does not work in 4.6.11. So it looks a bit like a breaking change, which in reality is a fix implemented in 4.6.11 that changed the system behavior.

Prior to 4.6.11, when you authenticated a HostServer session with a connection string containing UserID, Password and WindowsDomain, the connection string associated with the session did not keep the WindowsDomain key – it was added as AuthData instead of being persisted.
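A purely illustrative before/after of the relevant tail of the string (account and domain are made up):

    Passed in:               UserID=CONTOSO\K2AppUser;Password=***;WindowsDomain=CONTOSO
    Persisted (pre-4.6.11):  UserID=CONTOSO\K2AppUser;Password=***   (WindowsDomain carried separately as AuthData)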

When you open a connection from the WorkflowManagementServer to the WorkflowClient, there is a check to see if the connection has a WindowsDomain, Username and Password. If it has all 3 of them, it tries to use those details to authenticate the user. In versions prior to 4.6.11 K2 didn't persist the WindowsDomain property, and because of that, even if you specified all three parameters it would just do a normal integrated connection without the username and password, as WindowsDomain was “missing”.

In 4.6.11 K2 persists the WindowsDomain, so with the connection string properties configured as above, K2 actually tries to authenticate with the WindowsDomain value plus the full UserID (which already contains the domain) and the password.

This works in HostServer because there K2 checks whether you specified WindowsDomain AND whether there is a domain specified in the UserID, but there is no such check in WorkflowServer. This leads to a connection attempt with values from both WindowsDomain + UserID, i.e. something like “Domain\Domain\User” being used for authentication, and the authentication attempt fails because of that.

The workaround is to not specify the WindowsDomain in the connection string if the domain is already included in the UserID, OR to not specify the domain with the user name. Two illustrative variants are shown below.
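Both of the following are illustrative only (domain, host and account are made up); pick whichever style fits your app:

    Integrated=True;IsPrimaryLogin=True;Authenticate=True;Host=k2server.contoso.com;Port=5555;UserID=CONTOSO\K2AppUser;Password=***;UseIntegratedWFManagement=True
    Integrated=True;IsPrimaryLogin=True;Authenticate=True;Host=k2server.contoso.com;Port=5555;UserID=K2AppUser;Password=***;WindowsDomain=CONTOSO;UseIntegratedWFManagement=True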

This is something to be aware of if you are using connection strings and your app connects to the WorkflowServer; otherwise you can get a bit of a surprise after upgrading to 4.6.11 from older versions.


Provisioning SharePoint App Catalog in SP 2013/2016

Starting from SP 2013 we have to have an app catalog in order to host SharePoint-hosted apps, which are part of the SP App Model that replaces the solutions you used in older versions of SharePoint.

This topic is documented both by MSFT and by the IT community, but the problem with any documentation is that you have to internalize it to get a clear understanding, and even properly written explanations sometimes do not click until you do some hands-on practice. Recently I finally decided to do some practice and create an app catalog from scratch in my test environment, as well as jot down steps which are easy to follow and more appropriate for those whose sole question is “I need an app catalog. How can I quickly set it up?”

Here are the steps:

1) Provision the required service applications. You need to have the Subscription Settings Service and App Management Service applications provisioned and running. You need to use PowerShell to provision the service apps:
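A minimal provisioning sketch – the managed account, application pool and database names below are assumptions, so adjust them to your farm:

    # Run in the SharePoint Management Shell
    $account = Get-SPManagedAccount "CONTOSO\sp_services"          # hypothetical managed account
    $appPool = New-SPServiceApplicationPool -Name "Settings Service App Pool" -Account $account

    # Subscription Settings Service Application + proxy
    $subSvc = New-SPSubscriptionSettingsServiceApplication -ApplicationPool $appPool -Name "Subscription Settings Service" -DatabaseName "SP_SubscriptionSettings"
    New-SPSubscriptionSettingsServiceApplicationProxy -ServiceApplication $subSvc

    # App Management Service Application + proxy
    $appMgmt = New-SPAppManagementServiceApplication -ApplicationPool $appPool -Name "App Management Service" -DatabaseName "SP_AppManagement"
    New-SPAppManagementServiceApplicationProxy -ServiceApplication $appMgmt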

Once this script has been executed, make sure to start the corresponding services from CA.

[Screenshot: App Catalog required services started in Central Administration]

2) DNS part. You have to have a separate app domain or a wildcard CNAME entry in an existing domain (the latter is a no-go in production environments for security reasons). We need a wildcard DNS entry because we want dedicated DNS names for our apps but do not want to create a new DNS record for each and every app which comes online. We also want our apps running in their own isolated DNS domain (a separate TLD) outside of SharePoint – this is the better isolation approach which comes with the SP app model.

You can just create a wildcard CNAME record in an existing domain like this:

[Screenshot: wildcard CNAME entry in an existing domain]

Once again, this is a no-go from a security POV, and you either want a separate TLD or a sub-domain for your apps. The steps below describe how to create a DNS sub-domain and a wildcard CNAME entry in it.

Start the DNS Manager snap-in, right-click on Forward Lookup Zones and select “New Zone…”

[Screenshot: New Zone… option in DNS Manager]

Next you just go through the New Zone wizard, mostly accepting defaults, with the exception of the page where you have to specify your sub-domain name, which in my case is “apps.conundrum.com”:

[Screenshots: New Zone wizard steps]

Once the DNS sub-domain is created you can create a wildcard CNAME entry in it, which has to point to your SharePoint app server in your parent/main domain:

[Screenshots: creating the wildcard CNAME record in the sub-domain]
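If you prefer PowerShell to the DNS Manager UI, the same two steps (new zone plus wildcard CNAME) look roughly like this – the sub-domain matches the screenshots, while the SharePoint server name is made up:

    # Create the AD-integrated sub-domain zone and a wildcard CNAME in it
    Add-DnsServerPrimaryZone -Name "apps.conundrum.com" -ReplicationScope "Forest"
    Add-DnsServerResourceRecordCName -ZoneName "apps.conundrum.com" -Name "*" -HostNameAlias "sp2013.conundrum.com"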

Here is how the end result should look in DNS Manager:

[Screenshot: resulting wildcard CNAME record in DNS Manager]

What does it give you in the end? Thanks to the wildcard CNAME DNS entry in the sub-domain, you can ping any name in this sub-domain and it will always resolve to your SharePoint app server IP. Example:

[Screenshot: ping test resolving a name from the app sub-domain]

3) Create a new App Catalog site collection. Go to CA > Apps > Manage App Catalog:

[Screenshot: CA > Apps > Manage App Catalog]

Then select Create a new app catalog site and click OK:

[Screenshot: Create a new app catalog site option]

On the next page specify the required values – Title, Web Site Address, Primary Site Collection Administrator and End Users – and click OK:

[Screenshot: new app catalog site collection settings]

After this the App Catalog site collection will be created and you will be able to browse it:

[Screenshot: provisioned App Catalog site]

4) Last touch 🙂 Configure App URLs. Go to CA and click on Apps to get to the Configure App URLs link:

[Screenshot: Configure App URLs link in Central Administration]

On the next page you have to specify the App domain and App prefix and click OK. These settings will shape your apps' URLs.

[Screenshot: App domain and App prefix settings]
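The same two settings can also be applied from the SharePoint Management Shell; a sketch, assuming the sub-domain from earlier and a hypothetical prefix “apps”:

    Set-SPAppDomain "apps.conundrum.com"
    Set-SPAppSiteSubscriptionName -Name "apps" -Confirm:$false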

This concludes the App Catalog configuration and you can now test your App Catalog. As the proverb puts it, “the proof of the pudding is in the eating”, and by extension we can say that the proof of the App Catalog is in adding some app(s) to it.


WDS, DHCP and different subnets

I decided that it makes sense for me to jot down various things as I prepare for 70-410 and the other MSFT exams from the MCSA Server 2012 track, though recently I have a strange feeling that I'm trying to take MSFT exams just as they are about to retire 🙂 .

One of the questions/topics we have had since Server 2008 is WDS, and there are some facts to be aware of when it comes to it.

  1. Port 67. The WDS server uses UDP 67, and this is the same port the DHCP server listens on. In case of coexistence of DHCP and WDS on the same server, you have to configure WDS not to listen on port 67. When you add the WDS role on a server which already hosts the DHCP role, all configuration settings for such coexistence (points 1 & 2 in this list) are configured for you automagically. But if WDS is installed first and you then add the DHCP role, you have to take care of this manually (see the wdsutil sketch after this list).
  2. DHCP Option 60. Once you have configured WDS not to listen on port 67, you have to configure DHCP option 60, which tells DHCP clients that their DHCP server is also a WDS/PXE (Preboot eXecution Environment) server. You have to switch on DHCP option 60 and set it to “PXEClient”. In addition to this, TFTP should be allowed on the WDS server along with the BINL service (UDP 4011). Note: DHCP option 060 PXE Client does not appear unless your server has the WDS role installed.
  3. RFC 1542. If your DHCP/WDS server is on a different subnet from the one your clients reside in, then you have to have an RFC 1542 compliant router between these subnets, and most modern routers are RFC 1542 compliant. Such routers can be configured to pass BOOTP broadcasts (i.e. broadcast messages which use ports 67 and 68). If you don't have a router compliant with this standard, you have to leverage the RRAS role and configure a DHCP Relay agent.
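For reference, the manual configuration from points 1 & 2 can be done with wdsutil on the WDS/DHCP box (a sketch, to be run from an elevated prompt):

    wdsutil /Set-Server /UseDhcpPorts:No /DhcpOption60:Yes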

K2 blackpearl installation – complete removal/clean up

Recently I did a lot of test installs of K2 blackpearl reusing the same machines, i.e. it was necessary for me to remove everything related to K2 blackpearl before I could install it again on the same server. Below you may find a few notes/observations with regards to this.

In order to remove K2 blackpearl you just run K2 blackpearl Setup Manager on your server and select “Remove K2 blackpearl”:

[Screenshot: K2 blackpearl Setup Manager – Remove K2 blackpearl]

This will remove all K2 components from your server and will ask you for a reboot. Once this is done, the following things still have to be removed if your goal is to clean up everything and start from scratch:

1) Some files may still remain in the following folders:

If your goal is a full clean-up you can remove all these folders, given that you uninstalled all your K2 components via Setup Manager beforehand and there are no K2 components listed in the Programs and Features list (appwiz.cpl). NOTE: If you have SmartForms or other additional components, uninstall them in reverse order – the last component installed is removed first, and so on.

2) Self-signed certificates for the K2 server and sites are not removed from the machine's Personal certificate store on your K2 server. If your goal is a full clean-up, you may want to remove them too.

3) The K2 database is not deleted either; for a complete clean-up you should drop/remove it on the SQL Server.

4) I also noticed that in my case the K2WTS service was not removed correctly by Setup Manager during the removal process. The K2WTS service is also known under the display name “K2 Claims To Windows Token Service”. Here is an example of how to check via PowerShell whether it is still present after removal of K2 blackpearl:
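A one-liner along these lines does the check (using the WMI service name “K2WTS” mentioned above):

    Get-WmiObject -Class Win32_Service -Filter "Name='K2WTS'"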

Below is sample output in case the service is still present:

[Screenshot: Get-WmiObject output showing the K2WTS service]

No output means that no service with that name was found.

And this is how to remove it via PowerShell:
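A sketch that pipes the same WMI query into Remove-WmiObject (PS 3.0+, as noted below):

    Get-WmiObject -Class Win32_Service -Filter "Name='K2WTS'" | Remove-WmiObject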

Of course there are other ways to remove a service in Windows, as Remove-WmiObject is available only in PS 3.0 or newer. You can also use sc.exe (sc.exe delete K2WTS) or even locate and delete the relevant entry in the registry using regedit.exe.

Only after writing this blog post did I accidentally discover the relevant section in the official K2 documentation which covers this topic: K2 blackpearl Installation and Configuration Guide > Maintenance > Remove > Manual Environment Clean Up


Fixing failed Windows 10 Anniversary Update and DISM & ReFS registry hack

This blog post covers some issues I ran into while installing the Windows 10 Anniversary Update on one of my machines, and some other issues I discovered/fixed in the process 🙂

As I tweeted earlier, the Windows 10 Anniversary Update failed for me on one of my home machines:

[Screenshot: Anniversary Update failure with error 0x800705b4]

The machine was really low on space on the C drive and installation of the update failed with error code 0x800705b4. Once I realized this, I used the available option to move the download folder to another drive and freed up enough space on the C drive – but in spite of this I kept getting the 0x800705b4 error. Back then there was no MSFT KB on this, and after a while Windows Update even stopped offering the Anniversary Update to me. So I gave up temporarily.

Yesterday I decided to give it another try, and as the Anniversary Update was no longer offered via Windows Update, I downloaded the Windows 10 Upgrade Assistant from support.microsoft.com:

[Screenshot: Windows 10 Upgrade Assistant download page]

Once downloaded, this tool provides you with a wizard-style UI for the upgrade:

[Screenshot: Windows 10 Upgrade Assistant wizard]

This tool allowed me to retry installation of the Anniversary Update, but I ended up with the same 0x800705b4 error. This time, via Google, I somehow came across an official MSFT KB article dedicated to this error. I guess I wasn't able to find this useful KB earlier because I tried to search for something specifically applicable to the Anniversary Update, whereas it was a rather generic Windows Update error.

The first suggestion from the above-mentioned KB, “sfc /scannow” executed from an elevated command prompt, seemingly helped me, but I got a credentials prompt at the update installation stage. At this point I decided to give MSFT support a call, or rather I opted to request a callback from them, which I received relatively quickly – and it helped me to move further. It was explained to me that I had to activate my Windows using the Windows 8.1 key I had, by issuing the following command:
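Judging by the screenshot below, the command in question is the standard activation UI launcher:

    slui 3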

This brings up the following window, which allows you to activate your Windows system:

[Screenshot: Windows activation window opened by slui 3]

Once activation succeeded I was advised to start the update process from scratch, and I also got a recommendation to update from installation media to speed up the process. I opted to continue with the Windows 10 Upgrade Assistant.

But alas, once I did the activation I ran into the same error, and “sfc /scannow” was not able to fix it, so I proceeded to suggestion #2 from the MSFT KB – use DISM to fix Windows Update corruption errors. The solution is to run this:
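The standard form of that command (an online repair, as discussed below) is:

    DISM /Online /Cleanup-Image /RestoreHealth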

The KB also states that you have to use a repair source here, but I decided to try the online repair first and ran into the following problem:

[Screenshot: DISM error 50]

The error message here is rather nondescriptive and gives little hint of what is really wrong. I realized that I had already struggled with this back when I tried to play with Windows To Go, and gave up on it then. But since then an answer to this has appeared on the Internet:

BEST FIX: Error 50 DISM does not support servicing Windows PE with the /online option

Essentially this error is caused by a misplaced MiniNT key in the registry, which makes DISM think that you are trying to service a Windows PE installation. And truth be told, I have nobody to blame for that except myself, as I did a little unsupported trick to enable ReFS support on Windows 8.1 a long time ago, and I have seen other issues caused by this unsupported registry hack. So the takeaway here is that if you use this enable-ReFS trick, either enable it, format your drives, and then remove the registry key, or, if for some strange reason you want to keep it, be prepared for issues like a non-working Windows Restore and this DISM error 50.
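A sketch of the clean-up – this assumes the hack created the key at the usual location; export a backup before deleting it:

    # Back up the key first, then remove it so DISM no longer thinks this is a Windows PE image
    reg export "HKLM\SYSTEM\CurrentControlSet\Control\MiniNT" C:\MiniNT-backup.reg
    Remove-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Control\MiniNT" -Recurse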

Anyhow, once I removed the MiniNT key, DISM cleanup-image worked well for me and I was able to install the Anniversary Update, albeit not without another minor glitch which causes a disproportionate amount of fuss on the Internet (example) – it looks like people don't see how smoothly the Anniversary Update is being rolled out on 80%+ of a super-diverse hardware base, and moan about individual issues with random configurations/old hardware, saying that MSFT does a poor job here. Just for your reference, on two other machines I had this update installed automatically without the slightest issue (and one of them was a really old Dell desktop with a customized configuration). The glitch I'm talking about is that during update installation, on the first boot, I got an endless spinning circle on a black background. Being experienced with this, I waited up to 4 hours, then looked on the Internet, where a lot of folks reported that it was necessary to unplug various Bluetooth USB dongles to get around this issue, and some even reported that they were guided by MSFT to do a 3-times hard power off to get into recovery mode… 🙁 Just in case, I removed my Logitech Unifying receiver from the USB port and waited a bit more (~15 mins or so), then just powered down my desktop and switched it on again – the system started just fine.

So with a bit of help here and there, my entire household now runs the Windows 10 Anniversary Update (2 desktops & 1 laptop). I hope this blog post may help those who run into similar issues.


How to: Drop multiple databases via SQL Script (no worries backup/restore is covered too :) )

Recently I did rather a lot of tests requiring me to work with non-consolidated K2 DBs. The tests included multiple DB restore/delete operations, and I realized that I needed a script to quickly drop all my K2 DBs and start from scratch. Here is this script:
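A minimal sketch of such a script – it just generates DROP statements for every database whose name starts with “K2”, so you can review them before running anything:

    SELECT 'ALTER DATABASE [' + name + '] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; DROP DATABASE [' + name + '];'
    FROM sys.databases
    WHERE name LIKE 'K2%';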

The script selects every database prefixed with “K2”; you just need to copy its output into a new query window and execute it.

And in case you tend to back up things before you delete them, here is a similar script for backup:
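Again a sketch, generating BACKUP statements in the same way (the backup folder C:\Backup is an assumption – point it at your own location):

    SELECT 'BACKUP DATABASE [' + name + '] TO DISK = ''C:\Backup\' + name + '.bak'' WITH INIT;'
    FROM sys.databases
    WHERE name LIKE 'K2%';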

And for restore you can use the script below. Unfortunately it uses hard-coded file paths, but assuming your backup files have the default DB names (and, for example, were created by the script above), you can get away with minimal find-and-replace adjustments (the path to the backup files and your SQL instance data directories may need to be adjusted). Here is the script for restore:
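A per-database sketch of that restore – the backup path, data directories and logical file names are assumptions (check them with RESTORE FILELISTONLY and adjust):

    RESTORE DATABASE [K2] FROM DISK = 'C:\Backup\K2.bak'
    WITH MOVE 'K2'     TO 'D:\SQLData\K2.mdf',
         MOVE 'K2_log' TO 'D:\SQLLogs\K2_log.ldf',
         REPLACE;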


How to create self-signed certificate for K2 NLB cluster and add it to trusted root CA on client machines via GPO

I’ve recently recorded a video covering this topic, but I think it also makes sense to write a bit here, if only to give you the ability to copy-paste the related commands 🙂

When you install a K2 blackpearl NLB cluster, K2 Setup Manager can create the K2 sites for you, and it also creates HTTPS bindings for them. But K2 Setup Manager creates individual self-signed certificates for each of the NLB cluster nodes, which leads to an ugly certificate security warning whenever you try to access K2 Workspace or any other K2 site.

To address this you have to do the following:

1) Create a new self-signed certificate for your K2 NLB cluster name using the New-SelfSignedCertificate cmdlet:
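A sketch – the NLB cluster name k2.contoso.com is made up; use your own:

    New-SelfSignedCertificate -DnsName "k2.contoso.com" -CertStoreLocation Cert:\LocalMachine\My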

You have to do this on one of your K2 servers. This cmdlet will create a new self-signed certificate and place it into the Personal certificates store of your server. Copy the certificate hash (thumbprint) from the output of this command – you will need it for the next steps.

2) Next you want to obtain the appid of your current K2 HTTPS app/binding using the following command (use an elevated CMD for this):
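The usual way to list existing SSL bindings and their appids is:

    netsh http show sslcert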

Copy appid from the output to use it in step 3.

3) “Delete”/un-assign the current SSL certificate from your HTTPS binding (the one which was assigned by K2 Setup Manager):
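A sketch, assuming the binding sits on 0.0.0.0:443 – adjust the ipport to whatever the previous command reported for your K2 site:

    netsh http delete sslcert ipport=0.0.0.0:443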

Insert your certificate thumbprint copied in step (1) and the appid obtained in step (2) into the following command and execute it from an elevated command prompt:
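A template of that command – the thumbprint, appid and ipport below are placeholders:

    netsh http add sslcert ipport=0.0.0.0:443 certhash=REPLACE_WITH_THUMBPRINT appid={REPLACE-WITH-APPID-GUID} certstorename=MY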

At this point we have created a self-signed certificate and assigned it to the HTTPS binding for K2 on our first server. But we are still going to get a certificate warning, because our certificate is self-signed and not trusted. To address this, it is necessary to import it into Trusted Root Certification Authorities on all machines which will be used to access the K2 sites.

4) At this step we will export the certificate into a P7B file in order to import it into Trusted Root Certification Authorities later. Execute the following in PowerShell:
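A sketch, again assuming the made-up NLB name from step (1); the output path matches what is described below:

    # Find the NLB certificate in the machine Personal store and export it as P7B
    $cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*k2.contoso.com*" }
    Export-Certificate -Cert $cert -Type P7B -FilePath C:\servercert.p7b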

This will create the “servercert.p7b” file in the root of the C drive. For testing purposes you can add it into Trusted Root Certification Authorities manually on your K2 server – right-click on it, select Install Certificate > Next > Place all certificates in the following store > Browse > Trusted Root Certification Authorities > OK > Next > Finish.

At this point you should be able to access K2 Workspace via the NLB name from your 1st K2 server, assuming all the above-listed steps were performed on it and you did not hit the second node of your K2 NLB cluster by chance. To exclude the latter, you can take this node offline or stop it in NLB Cluster Manager:

[Screenshot: stopping a node in NLB Cluster Manager]

5) Now we can just deploy our P7B certificate file to Trusted Root Certification Authorities on all machines in our domain using the GPO certificate deployment option (Computer Configuration\Windows Settings\Security Settings\Public Key Policies\Trusted Root Certification Authorities):

[Screenshot: certificate deployment via GPO]

Once you have created this GPO and linked it to the appropriate OU (one which contains the machines from which you access K2 sites), you can update local group policies on your client machines (e.g. run gpupdate /force) and access K2 sites via the NLB name over HTTPS without any certificate-related warnings.

6) Final touch 🙂 We need to add the certificate created in step (1) to the second K2 server and configure it for the K2 HTTPS binding on that server. The P7B file we created earlier does not fit this purpose, so we need to export the certificate once again, this time including the private key.

Run MMC on K2 server one and add the Certificates snap-in targeting the computer certificate store:

[Screenshot: Certificates snap-in targeting the computer certificate store]

Locate your K2 NLB cluster certificate created in step (1) and export it, including the private key:

[Screenshot: exporting the certificate with its private key]

Make sure you select “Export Private Key” and specify a password for the certificate; in the end you should get a PFX file. Copy this PFX file to your second server and install it into the Personal certificates store of that machine, then use the IIS console and select this certificate for the K2 sites' HTTPS binding.
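If you prefer PowerShell to the MMC wizard, a rough equivalent of the export/import (the password, paths and certificate subject are made up):

    # On the first node: export the NLB certificate together with its private key
    $pwd  = ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force
    $cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*k2.contoso.com*" }
    Export-PfxCertificate -Cert $cert -FilePath C:\k2nlb.pfx -Password $pwd

    # On the second node: import it into the machine Personal store
    Import-PfxCertificate -FilePath C:\k2nlb.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pwd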

That’s it – you created a self-signed certificate for the K2 NLB cluster name, configured it to be used on all your nodes, and added it to the Trusted Root Certification Authorities on all your machines via GPO.

Here is the video which walks you through all these steps:


How to: Join Windows Server 2012 Core to domain

Since Windows Server 2012 allowed adding/removing the GUI “on the fly” via Install-WindowsFeature/Uninstall-WindowsFeature and their aliases, the number of “How do I do X in Server Core” questions has decreased drastically, as there is now a universal lazy-man response to this – temporarily add the GUI, do thing X, and remove the GUI again. Not always time efficient, but effective 🙂

Anyhow, almost everything can be done without the GUI. Here are your options to perform a domain join operation for a Server Core box:

1) The old-school crutch, sconfig 🙂 – option (1) in its menu:

[Screenshot: sconfig menu]

You may see that it actually uses netdom.exe in the background when it asks for the password:

[Screenshot: sconfig domain join prompting for credentials via netdom.exe]

It even suggests changing the computer name in case you forgot to do it in advance:

[Screenshot: sconfig computer name change prompt]

Assuming you entered the correct password and your DNS/IP settings allow you to locate and reach a domain controller, you will receive a reboot prompt at the end of this process:

[Screenshot: sconfig restart prompt]

Once the restart is performed you can verify the results either via WMIC or PowerShell:

[Screenshot: verifying the domain join via WMIC and PowerShell]
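For reference, checks along these lines will report the machine's domain:

    wmic computersystem get domain
    (Get-WmiObject Win32_ComputerSystem).Domain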

2) The Add-Computer cmdlet.
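A sketch of the cmdlet-based join – the domain and account below are made up:

    Add-Computer -DomainName "contoso.com" -Credential CONTOSO\Administrator -Restart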

3) The djoin command. This one allows you to perform an offline domain join.

[Screenshot: djoin command output]
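A sketch of the offline join workflow – the domain, machine name and blob path are made up. The first command runs on a domain-joined machine, the second on the Server Core box:

    djoin /provision /domain contoso.com /machine CORE01 /savefile C:\odjblob
    djoin /requestODJ /loadfile C:\odjblob /windowspath %SystemRoot% /localos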

There is also the related dsadd command, but it can only be used to pre-create a computer account in the domain; it will not join the local computer from a workgroup to the domain.

 


K2 Community Articles

Since K2 Community Articles were introduced a year or so ago, this channel has brought a lot of great content to the K2 community site. Of course, quality varies across the board for these articles, but the bottom line is that the K2 community benefits from quickly available, relevant information on real-world K2 issues. I see a lot of folks solving their problems without logging a support ticket, or discovering relevant information at an early stage of investigating their issues, often without any help from K2 support engineers.

I authored some of these articles and edited others, and as I sometimes found it difficult to locate one or another K2 community article I worked on, I decided to list all of these articles here. I think I will also list links to some really good articles authored by other people.

A good entry point to check out the latest Community Articles on the K2 community site is this page, where you can see such things as popular threads in the K2 Community, the latest community articles, as well as the most kudo'd authors and articles.

In case you see any mistakes (technical or just typos/grammar 🙂) or have any questions about those articles, feel free to let me know via the comments under this post.

Currently I am just listing the articles in no particular order, but I may categorize/rearrange them at some later point.

K2 blackpearl service high RAM usage

K2 Host Service CPU usages close to 100% 

Thread pool locking issues when using K2 Client API inside of workflow

Unresponsive K2 Workspace – Server run out of worker threads

IPC Event processing delays

Workflow permissions not working correctly when configured via group

Analysis fails after upgrading from 4.6.x to 4.6.8: Constrained delegation is not enabled for the Active Directory account

Initialization failed before PreInit – Unable to establish a secure connection with the Active Directory server

Configuring Kerberos for K2 environment

How to reduce the size of K2 database on development machine

4.6.9 upgrade wipes out serverlog tables

PDF convertor generates an empty form when multiple security providers configured

How to increase default file size limit for File Attachment Control in K2 SmartForms

64007 Provider did not return a result for K2:Domain\User on GetUser 
