Category Archives: How-to

Non-working Python 3 – invalid active developer path

I’ve recently updated macOS Big Sur to version 11.2.2, and it seems that after this update I ran into issues with running my Python programs from PyCharm, which started to give me the following error on any attempt to execute a program:

Error running 'P3-2': Cannot run program "/Users/User_Name/PycharmProjects/Project_Name/venv/bin/python" (in directory "/Users/User_Name/PycharmProjects/Project_Name"): error=2, No such file or directory

I looked into the relevant venv directory, which contained a symbolic link to the Python executable that, by the looks of it in Finder, was broken (the path/target could not be found). I found that strange, as Python 3 had been installed and working for quite some time, so I tried to run python3 from the terminal and saw the following error:

xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun

Long story short, to fix that it was necessary to download and install the Command Line Tools package using the following command:

xcode-select --install

Not quite sure why it stopped working for me, but possibly the installed version got removed during the macOS update process.
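
To double-check the fix, you can verify that the active developer directory resolves again and that the interpreter starts (both are standard commands):

xcode-select -p

python3 --version

The first one should print /Library/Developer/CommandLineTools (or an Xcode path), and the second should print the Python version without any xcrun error.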

XRDP service error: Cannot read private key file

Recently it was necessary for me to enable the XRDP service on an Ubuntu 20.04 VM, so I followed the steps outlined in one of my old posts and got it working quickly. Unfortunately, I ran into a new issue with not being able to reset or shut down the Hyper-V VM for some reason, which I ignored for now, but after a couple of power-offs I realized that I could not connect via XRDP until I opened a session locally. I then decided to check on the service status with the sudo systemctl status xrdp command and got the output shown below:

XRDP Cannot read private key

The full error message says: [ERROR] Cannot read private key file /etc/xrdp/key.pem, and I’m pretty sure that it didn’t show up when I used the same status command after the initial configuration, though people tend to forget and miss things 🙂

Anyhow, to clear this up the following command has to be executed:

sudo adduser xrdp ssl-cert

The abovementioned error occurs when the default user of the XRDP service lacks access to the key file to which /etc/xrdp/key.pem links (a symlink into /etc/ssl/private, which is readable only by root and members of the ssl-cert group), and with the command above you grant the user xrdp access through ssl-cert group membership. I hope that this information may come in handy for someone else 🙂
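
If you want to double-check things before reconnecting, you can inspect the key file and the xrdp user’s group membership and then restart the service so that the new membership takes effect (the paths below are the Ubuntu 20.04 defaults):

ls -l /etc/xrdp/key.pem

groups xrdp

sudo systemctl restart xrdp

After that, groups xrdp should list ssl-cert, and sudo systemctl status xrdp should no longer show the private key error.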

How to check local Hyper-V VM generation

Seemingly, the Hyper-V Manager GUI does not show the generation of already created VMs, whereas we do need to check on that sometimes. As usual, PowerShell is the answer to this problem.
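
A one-liner along these lines does the trick – Get-VM returns VM objects which expose a Generation property alongside Name (the property selection below is just one simple way to shape the output):

Get-VM | Select-Object Name, Generation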

That command will output the VM names along with their generation information, as shown on the sample screenshot below:

Using Get-VM to check VM generation information

The command above needs to be run in an elevated (Run as administrator) PowerShell window on your Hyper-V host.

That’s it – just a little note on how to grab VM generation information.

Migrating WordPress to Amazon Lightsail

Amazon Lightsail Logo

I’ve recently migrated this blog to Amazon Lightsail along with moving some of my domains to AWS Route 53. Since I started this blog I have had it hosted in different places – a Synology NAS device at my home, SiteGround, Gandi… As I recently dove into AWS certifications, it gave me an extra incentive to use and explore AWS services, and in the course of preparation for AWS Certified Cloud Practitioner I had to look at Amazon Lightsail and realized that it was exactly what I was looking for while struggling with the excessive complexity and enterprise-level pricing of the standard AWS service offering.

Amazon Lightsail’s intended use case is to offer AWS to developers who need to start quickly and have neither the expertise nor the desire to deal with the complexity and plethora of functions of the standard AWS offering, which can scare away any newcomer by the sheer number of services and additional features. So for AWS exam takers this service shows up only in scenarios asking which service you should recommend for developers who don’t have AWS expertise or don’t need any advanced AWS features. It is kind of “AWS made easy”.

I was not following the AWS offering too closely, as I worked for an MSFT-shop company where everything ran either on the on-premises MSFT stack or on Azure, with only a tiny bit of exposure to the AWS stack, so I missed this service, which was announced back in 2016, until I stumbled upon it while preparing for AWS exams.

Actually, to name things properly, Lightsail is basically a VPS offering from AWS, and it features its own separate management console which will look somewhat familiar to you if you have worked with DigitalOcean or Linode:

Amazon Lightsail – Create an Instance

But as you can imagine, this VPS offering is powered by the ginormous and battle-tested AWS infrastructure, which ensures that the console is super streamlined and responsive, instances can be provisioned in just a few clicks and, no surprise there, the price tag is very good – and not only when compared against running the same workload on EC2 or Elastic Beanstalk. IMO the pricing is compelling enough to consider migrating from other VPS services (especially if you have small/individual instances scattered across different VPS providers). I guess after migrating my blog to Lightsail, I’ll soon be migrating my Django app VPS, which so far is hosted on DigitalOcean.

Just a few notes on the WordPress migration experience. I don’t have time to write a comprehensive step-by-step guide, so I’ll just jot down some of the points/steps I did:

  • I’ve created an Amazon Lightsail WordPress 5.6.0 instance (WordPress Certified by Bitnami and Automattic 5.6.0) with the desired specs (instance plan) and assigned a static IP to it.
  • For the migration itself I used the All-in-One WP Migration plugin, which allows you to download your WordPress blog as a single file and later import it into a newly created WordPress blog on your new server.
All-in-One WP Migration plugin
  • The main problem which blocks some people from doing migrations with this plugin is the default 40 MB upload limit, which some people try to resolve by installing different versions of the plugin. Actually, all you need to do is connect to your VPS instance and adjust the PHP file upload limit in the php.ini file (see some details on how to do this here), but it is basically as simple as running the sudo nano /opt/bitnami/php/etc/php.ini command, increasing the post_max_size and upload_max_filesize settings, and restarting services with the sudo /opt/bitnami/ctlscript.sh restart command. After that you should see the increased limit value in the plugin UI (see the php.ini snippet after this list).
All-in-One WP Migration plugin – PHP File Upload Limit set to 400 MB
  • I’ve also migrated my domain names to Route 53, and for WordPress domains all you need to do is create a hosted zone for the domain and then create an A record which resolves the domain name to the Lightsail instance’s static IP, along with a CNAME record which translates www.domain.com into domain.com (be sure to also update the name servers to the AWS ones in case you did a domain transfer) – see the record sketch after this list.
  • I was a little bit confused as to whether it is possible to use an AWS-issued SSL certificate for a Lightsail WordPress instance (it didn’t work for me on the first attempt), so I ended up using the Bitnami bncert-tool to provision an SSL certificate from Let’s Encrypt for my instance. Some details on how to do that can be found here and here.
  • Another little adjustment I did is the removal of the Bitnami banner, which can be done by running sudo /opt/bitnami/apache2/bnconfig --disable_banner 1 and restarting Apache with the sudo /opt/bitnami/ctlscript.sh restart apache command.
Bitnami Banner
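
As promised above, here is roughly what the upload limit adjustment looks like inside /opt/bitnami/php/etc/php.ini (400M is just an example value – anything larger than your exported backup file will do):

post_max_size = 400M
upload_max_filesize = 400M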
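
And here is a sketch of the Route 53 records mentioned above (domain.com and the IP are placeholders – use your own domain and the static IP attached to your Lightsail instance):

domain.com → A record → Lightsail static IP
www.domain.com → CNAME record → domain.com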

There were probably some other minor config changes I did, but all in all the migration was fast and easy, and it seems that my blog has become a bit more responsive now; if I need to improve its performance further, I can still leverage Lightsail CDN and load balancing features.

One confusing thing about AWS services is their naming prefix – some of the service names are prefixed with Amazon and others with AWS. While Lightsail tries to be a bit differentiated from all the other AWS services, and it is natural to call it Amazon Lightsail and not AWS Lightsail because of that, I see that all the other AWS services are prefixed either with “Amazon” or with “AWS” without any apparent logic. But for a company with a 175-service portfolio the naming system is more than OK, as I have seen much more confusing and disorganized naming employed by vendors with fewer than 10 products or services 🙂 Luckily, cloud services do not get the major/minor versioning in their names which sometimes gets too creative for on-premise products, where the version plays the important role of a vehicle to wrap a certain number of features into a “new product” which the client is supposed to buy or upgrade to. I guess in that interim period between “boxed”/COTS/buy-once software and cloud/SaaS, the recurring major version concept went too far in an attempt to ensure recurring revenue for software vendors 🙂

So that was just an announcement of this blog’s hosting change, along with a few notes on the WordPress migration process and Lightsail in general. I hope it may come in handy or be interesting for some of my blog readers too.

Unable to connect over SSH to EC2 instance from Linux

A common issue which occurs when trying to connect to AWS EC2 instances from Linux machines is the following error:

WARNING: UNPROTECTED PRIVATE KEY FILE!

As you can see on the screenshot, the octal representation of the pem file permissions is 0644, which means that everyone has read access to the file, while security best practice requires limiting access to private key files much more strictly. This 0644 permission translates into RW for Owner, R for Group, and R for Other/World (the Everyone counterpart in Windows ACLs).

If you want to view file permissions in a Linux shell, you have a couple of commands for that – ls -l %filename% and stat %filename% – and the latter will also show you the octal permissions value, as shown below:

Checking file permissions using ls -l and stat commands
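
If you want just the numeric value, GNU stat (the variant shipped with most Linux distros) can print the octal mode directly – the file name here is only an example:

stat -c "%a" my-key-pair.pem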

Actually, the AWS EC2 console indicates the recommended pem file permissions configuration and gives us a command to set them:

Command to ensure that your key is not publicly viewable

So just run this command and you will be able to connect (be sure to run all the commands after switching to the directory which contains your pem file, and make sure that you use the correct pem file name – AWS will give you a command specific to your EC2 instance, with the key name based on the selection you made when launching that instance).
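
For reference, this is what it boils down to (the file name, user name, and host below are examples – the EC2 console shows the exact command with your actual key and instance details):

chmod 400 my-key-pair.pem

ssh -i my-key-pair.pem ec2-user@instance-public-dns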

Adjusting pem file permissions and connecting to EC2 instance over SSH

Although this is not a big problem, and all the explanations/solutions are given to you by the respective commands’ output and the AWS console, I just decided to jot this down in case someone gets stuck with this and switches to googling, bypassing reading the error messages and instructions 🙂