
System.IO.IOException : The requested operation could not be completed due to a file system limitation

I recently had a support case thanks to which I discovered a rather neat way of checking for big files in a specific directory, which I will describe later in this post.

Under certain conditions you may see the following issue in K2: very high CPU usage and, by extension, overall sluggishness of K2 applications, accompanied by “System.IO.IOException : The requested operation could not be completed due to a file system limitation.”

As in most cases, the error message itself indicates what is wrong here, and “The requested operation could not be completed due to a file system limitation” should ring a bell that some file or files have run amok and grown beyond file system limits, or something along these lines. If you read your logs even more closely, they may even give away the specific culprit by indicating the name of the log file responsible for this.

K2 has broad logging capabilities for monitoring and troubleshooting purposes (quite a good overview of K2 logging can be found here), but in terms of logging volume the main suspects are: SmO logging (the only logging which can’t be capped in terms of file size), ADUM logs (very voluminous, especially on debug logging level; file size can be limited by adjusting configurable settings, meaning you have to go the extra mile if you want to allow an unhealthily big file size), and lastly debug assemblies you may receive from K2 support. Debug assemblies are usually quickly built ad hoc troubleshooting tools for investigating a specific issue, and they may well have no log file size limit while writing super detailed logging (= voluminous log files). As such, they are supposed to be removed upon completion of your troubleshooting effort, but in reality they can be left applied for a while, which gradually evolves into forever…

Anyhow, the exception “System.IO.IOException : The requested operation could not be completed due to a file system limitation.” in the K2 host server log is in most cases caused by an abnormally large log file, which becomes so big that it exceeds RAM size, making it difficult to open and append to for writing. Then you have that slippery slope of degraded performance and a high CPU moment, followed by the “aha, I forgot to disable/remove unneeded logging” moment.

Now for my takeaway from this case (though what is said above is also worth noting): how to quickly check for huge files in a specific directory. Just use this PS script:

# List all files under the K2 installation directory with their sizes in GB/MB,
# sorted largest first, and show the result in a sortable grid window
Get-ChildItem -Path 'C:\Program Files (x86)\K2 blackpearl' -Recurse -Force -File |
    Select-Object -Property FullName,
        @{Name='SizeGB';Expression={$_.Length / 1GB}},
        @{Name='SizeMB';Expression={$_.Length / 1MB}} |
    Sort-Object -Property SizeGB -Descending |
    Out-GridView

You may add a “-First 10” cut to the script above to trim the output, which is especially useful when you are primarily interested in identifying the largest file or files. Note that the cut should happen after Sort-Object rather than right after the first Select-Object; otherwise you would keep the first 10 files encountered instead of the 10 largest. See the variant below.
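For example, here is the top-10 variant in full (the only change versus the original script is the extra Select-Object -First 10 step placed after the sort):

# Same scan, trimmed to the 10 largest files; the -First 10 cut comes after
# Sort-Object so the list is sorted before it is trimmed
Get-ChildItem -Path 'C:\Program Files (x86)\K2 blackpearl' -Recurse -Force -File |
    Select-Object -Property FullName,
        @{Name='SizeGB';Expression={$_.Length / 1GB}},
        @{Name='SizeMB';Expression={$_.Length / 1MB}} |
    Sort-Object -Property SizeGB -Descending |
    Select-Object -First 10 |
    Out-GridView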

Here is how the result looks for a healthy K2 folder (by healthy I mean one without strangely big log files):

Large files search

As you can see, normally you should not have anything 1 gigabyte or more in size, whereas the above-mentioned exception is usually caused by a 10-20 GB log file, which will feature prominently at the top of the output.
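If you prefer a quick pass/fail check over browsing the grid, the same scan can be narrowed down to oversized files only (same default install path assumption as above); on a healthy installation it should return nothing:

# Flag anything over 1 GB in the K2 folder; no output means no oversized files
Get-ChildItem -Path 'C:\Program Files (x86)\K2 blackpearl' -Recurse -Force -File |
    Where-Object { $_.Length -gt 1GB } |
    Select-Object -Property FullName,
        @{Name='SizeGB';Expression={[math]::Round($_.Length / 1GB, 2)}}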

See also related K2 community KB: Exception – The requested operation could not be completed due to a file system limitation.


K2 Host Server Logging – MaxLifeTimeSpan

There is plenty of documentation about K2 logging, as well as blog posts which cover how to enable and configure the different logging types available in K2: host server logging, ADUM, SmO… For example, there is a very good blog post, “HOW DO I USE LOGGING IN K2?”, which covers most logging-related things.

Working on all sorts of support cases, I most frequently have to switch around settings in the ApplicationLevelLogSettings section of the HostServerLogging.config file:

K2 ApplicationLevelLogSettings section

Usually it is all about temporarily raising the logging level so that we have more details logged for troubleshooting purposes. In case you have a hard time remembering which level is the maximum, this picture may be helpful:

K2 logging levels

As usual, when you use something routinely it makes you oblivious to other options and things which are readily available to you. In a way it is like that little-known phenomenon of Maslow’s hammer (aka the law of the instrument/golden hammer), a largely forgotten idea of Abraham Maslow, whose pyramid model enjoys most of the limelight. So the other day I ran into a question about whether it is possible to roll over/cycle the K2 host server log, let’s say, daily. Whereas I clearly remember that by default it is rolled over on each service restart, I was not sure whether it has such a setting. A quick glance at the documentation brought to my attention the extension-specific sections in the config which I largely ignore in my troubleshooting sessions – log extension specific properties:

K2 Extension specific log properties

In particular, there is the MaxLifetimeSpan property, where you can set a days/hours/minutes/seconds value specifying the time after which the log file cycles, e.g. if you set the value to “1:0:0:00”, it will cycle every 1 day. Note that it works in combination with the MaxFileSizeKB property, i.e. the log file cycles based on whichever condition becomes true first.
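To quickly see which values are currently in effect without eyeballing the whole XML, you can simply grep the config file with PowerShell (the path below is the default install location, an assumption; adjust it if your installation lives elsewhere):

# Print the lines of HostServerLogging.config mentioning the cycling properties;
# the path assumes a default K2 blackpearl installation
$configPath = 'C:\Program Files (x86)\K2 blackpearl\Host Server\Bin\HostServerLogging.config'
Select-String -Path $configPath -Pattern 'MaxLifetimeSpan|MaxFileSizeKB'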

Now, why is this so interesting/important? I think it may be a good idea for those who do operational support of K2 servers to configure their logs to cycle every 24 hours and restart the service at 00:00:00 (adjust MaxFileSizeKB so that 24 hours’ worth of logging always fits into this size, with a generous safety margin). This will give you a very neat log file archive which is a pleasure to work with, as it is very easy to review the logs for a specific day, as well as to see the difference between night hours and business hours for a specific date.
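If you want to automate the midnight restart half of this idea, here is a minimal sketch. It assumes the K2 service display name is “K2 blackpearl Server” (verify yours with Get-Service *K2*), and it registers a scheduled task running as SYSTEM; treat it as a starting point rather than a hardened production job:

# Register a daily task that restarts the K2 service at midnight; the service
# display name is an assumption - verify it first with: Get-Service *K2*
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -Command "Restart-Service -DisplayName ''K2 blackpearl Server'' -Force"'
$trigger = New-ScheduledTaskTrigger -Daily -At '00:00'
Register-ScheduledTask -TaskName 'K2 nightly service restart' `
    -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest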
