I’ve recently migrated this blog to Amazon Lightsail along with moving some of my domains to AWS Route 53. Since I started this blog I have hosted it in different places – a Synology NAS device at my home, SiteGround, Gandi… As I recently dived into AWS certifications, it gave me extra incentive to use and explore AWS services, and in the course of preparation for AWS Certified Cloud Practitioner I had to look at Amazon Lightsail and realized that it was exactly what I was looking for while struggling with the excessive complexity and enterprise-level pricing of the AWS service offering.
Amazon Lightsail’s intended use case is to offer AWS to developers who need to start quickly and have neither the expertise nor the desire to deal with the complexity and plethora of functions of the standard AWS service offering, which can scare away any newcomer by the sheer number of services and additional features. So for AWS exam takers this service shows up only in scenarios where you are asked which service to recommend for developers who don’t have AWS expertise or don’t need any advanced AWS features. It is kind of “AWS made easy”.
I was not following the AWS offering too closely, as I worked for an MSFT-house company where everything ran either on the on-premise MSFT stack or on Azure, with only a tiny bit of exposure to the AWS stack, so I missed this service, which was announced back in 2016, until I stumbled upon it while preparing for AWS exams.
To name things properly, Lightsail is basically a VPS offering from AWS, and it features its own separate management console which will look somewhat familiar if you have worked with DigitalOcean or Linode:
But as you can imagine, this VPS offering is powered by the ginormous and battle-tested AWS infrastructure, which ensures that the console is super streamlined and responsive, instances can be provisioned in just a few clicks and, no surprises there, the price tag is very good – and not only when compared against running the same workload on EC2 or Elastic Beanstalk. IMO the pricing is compelling enough to consider migrating from other VPS services (especially if you have small/individual instances scattered across different VPS providers). I guess that after migrating my blog to Lightsail, I’ll soon be migrating my Django app VPS, which so far is hosted on DigitalOcean.
Just a few notes on the WordPress migration experience. I don’t have time to write a comprehensive step-by-step guide, so I’ll just jot down some of the points/steps I did:
I created an Amazon Lightsail WordPress 5.6.0 instance (WordPress Certified by Bitnami and Automattic 5.6.0) with the desired specs (instance plan) and assigned a static IP to it.
For the migration itself I used the All-in-One WP Migration plugin, which allows you to download your WordPress blog as a single file and later import it into a newly created WordPress blog on your new server.
The main problem which blocks some people from doing migrations with this plugin is the default 40 MB upload limit, which some try to resolve by installing different versions of the plugin. Actually, all you need to do is connect to your VPS instance and adjust the PHP file upload limit in the php.ini file (see some details on how to do this here). It is basically as simple as running sudo nano /opt/bitnami/php/etc/php.ini, increasing the post_max_size and upload_max_filesize settings, and restarting services with sudo /opt/bitnami/ctlscript.sh restart. After that you should see the increased limit value in the plugin UI.
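For reference, the relevant directives in php.ini look something like this (512M is just an example value – pick anything larger than your export file):

```ini
; /opt/bitnami/php/etc/php.ini on the Bitnami WordPress image
; Both limits must exceed the size of your All-in-One WP Migration export
post_max_size = 512M
upload_max_filesize = 512M
```

Remember that post_max_size must be at least as large as upload_max_filesize, since the uploaded file travels inside the POST body.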
I also migrated my domain names to Route 53, and for WordPress domains all you need to do is create a hosted zone for the domain and then create an A record which resolves the domain name to the Lightsail instance’s static IP, along with a CNAME record which translates www.domain.com into domain.com (be sure to also update the name servers to the AWS ones in case you did a domain transfer).
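If you prefer the AWS CLI over the console, the two records above can be sketched as a Route 53 change batch (the domain name and IP below are placeholders, not my actual values):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "domain.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "203.0.113.10" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.domain.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "domain.com" }]
      }
    }
  ]
}
```

You would then apply it with aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://records.json.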
I was a little bit confused as to whether it is possible to use an AWS-issued SSL certificate for a Lightsail WordPress instance (it didn’t work for me on the first attempt), so I ended up using the Bitnami bncert-tool to provision an SSL certificate from Let’s Encrypt for my instance. Some details on how to do that can be found here and here.
Another little adjustment I did is the removal of the Bitnami banner, which can be done by running sudo /opt/bitnami/apache2/bnconfig --disable_banner 1 and restarting Apache with the sudo /opt/bitnami/ctlscript.sh restart apache command.
There were probably some other minor config changes, but all in all the migration was fast and easy, and it seems that my blog has become a bit more responsive now. If I need to improve its performance further, I can still leverage Lightsail CDN and load balancing features.
One confusing thing about AWS services is their naming prefix – some service names are prefixed with Amazon and others with AWS. Lightsail tries to be a bit differentiated from all the other AWS services, and it is natural to call it Amazon Lightsail rather than AWS Lightsail because of that, but I see that all the other services are prefixed either with “Amazon” or with “AWS” without any apparent logic. Still, for a company with a 175-service portfolio the naming system is more than OK, as I’ve seen much more confusing and disorganized naming employed by vendors with fewer than 10 products or services 🙂 Luckily, cloud services do not get major/minor versioning in their names, which sometimes gets too creative for on-premise products, where the version plays the important role of a vehicle to wrap up a certain number of features and make a “new product” which the client is supposed to buy or upgrade to. I guess in that interim period between “boxed”/COTS/buy-once software and cloud/SaaS, the recurring major version concept went too far in an attempt to ensure recurring revenue for software vendors 🙂
So that was just an announcement of this blog’s hosting change along with a few notes on the WordPress migration process and Lightsail in general. I hope it may come in handy or be interesting for some of my blog readers too.
A common issue which occurs when trying to connect to AWS EC2 instances from Linux machines is the following error:
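The exact text varies slightly between OpenSSH versions, but the warning typically looks something like this (the key file name here is just an example):

```
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for 'my-key.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
```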
As you can see on the screenshot, the octal representation of the pem file permissions is 0644, which means that everyone has read access to the file, while security best practice requires limiting access to private key files more strictly. The 0644 permission translates into RW for Owner, R for Group, and R for Other/World (the Everyone counterpart of Windows ACLs).
If you want to view file permissions in the Linux shell you have a couple of commands for that, ls -l <filename> and stat <filename>, and the latter will show you the octal permissions value as shown below:
Actually, the AWS EC2 console indicates the recommended pem file permissions configuration and gives us a command to set them:
So just run this command and you will be able to connect (be sure to run all the commands after switching to the directory which contains your pem file, and make sure that you use the correct pem file name – AWS gives you a command specific to your EC2 instance, with the key name based on the selection you made for that instance).
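The whole fix can be sketched like this (using a throwaway key file name for illustration – substitute your actual .pem downloaded from AWS):

```shell
# Create a sample key file just to demonstrate (your real file comes from AWS)
touch my-key.pem
chmod 644 my-key.pem             # the "too open" state ssh complains about
stat -c '%a' my-key.pem          # prints 644

# Restrict it to read-only for the owner, as AWS recommends
chmod 400 my-key.pem
stat -c '%a' my-key.pem          # prints 400

# Now ssh will accept the key, e.g.:
# ssh -i my-key.pem ec2-user@<instance-public-dns>
```

Note that stat -c '%a' is the GNU coreutils form (Linux); on other systems the flag may differ.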
Although this is not a big problem, and all the explanations and solutions are given to you by the respective commands’ output and the AWS console, I just decided to jot this down in case someone gets stuck with this and switches to googling, bypassing reading the error messages and instructions 🙂
I recently passed the AWS Certified Cloud Practitioner exam, aka CCP, aka CLF-C01. This was the first step and part of my extensive learning plan for the near future. I still want to do a separate blog post on the overall CCP exam-taking experience and preparation, and hopefully I’ll manage to do that before my AWS Certified Solutions Architect Associate exam, which I have scheduled for January 2021. In 2021 I also plan to take the Linux Professional Institute LPIC-1 exam and will be doing a 5-month university course in DevOps and Cloud – so you can expect some interesting content and posts as I work on that.
In this post I just wanted to talk about the “Overview of Amazon Web Services” AWS whitepaper. This whitepaper definitely qualifies as a must-read free preparation resource both for CCP exam takers and for anyone starting to learn AWS (and even if you started quite some time ago, it is never too late to review some basics 🙂 ). Actually, as part of your CCP preparation, as well as preparation for subsequent AWS exams, you are supposed to go through quite a few whitepapers; here is a list of those relevant for the CCP exam (obviously you are supposed to know their content if you are taking more advanced AWS certifications too):
As I’ve already mentioned, there are more whitepapers, but those should be on your reading list for the CCP exam. Although the Overview of Amazon Web Services whitepaper is very useful or even mandatory for AWS CCP exam takers, I passed my exam without reading it and only read it afterwards. You might wonder how I can say that this whitepaper is a must-read if I didn’t use it. In my case I used the DigitalCloud Training CCP preparation course, and (as any other good CCP course) it served me the right extracts from all the required whitepapers, and this one was featured prominently. Honestly, I sat my exam with 80% training completion and after taking some of the practice tests – that, plus 14+ years of experience with the Microsoft technology stack and certifications, with a bit of exposure to Azure and AWS, was sufficient to pass the exam. But I’m reading all these whitepapers and reviewing CCP preparation materials now, before diving into materials dedicated to the AWS Certified Solutions Architect Associate exam topics.
Do read it before your CCP exam and revisit/review it before taking other AWS exams
Being an “overview” it is not deep on details, but be sure that the “scope” will be overwhelming, as AWS nowadays is BIG – 175 services available in 190 countries (heck, I even thought we had a slightly smaller number of countries in the world before reading this whitepaper 🙂 )
Don’t try to read this whitepaper for the first time at the very last moment – unless you are a super focused and quick reader, it may take you an entire day or more to read it
My recommendation is to read all the required whitepapers 3 times – it can be dull and time consuming, but it will serve you well on the exam and in your learning in general: read it when you start your exam preparation, re-read it in the middle of the process, and finally revisit it again shortly before the exam
It contains the right wording in which AWS speaks about its services and their intended use cases – so you will see the same wording on the exam
You do need to have some idea about all 175 services available – really, you do 🙂 I was warned in my online training to read through all the service descriptions (and this whitepaper contains them) and neglected that. As someone who took the exam, I can tell you that it was very good advice. Roughly speaking, you will be directly quizzed on around 60% of core AWS services, but the remaining 40% will show up on the test as distractors/wrong answers – knowing what they are will help you discard those wrong answers
Beyond the brief descriptions of all 175 services, which comprise the bulk of this whitepaper, the other important chunks to read and digest are the following: the cloud computing definition, the six advantages of cloud computing, types of cloud computing, global infrastructure, and security and compliance.
If you have experience with pre-cloud IT certifications you should keep in mind that these days whitepapers and exam content get renewed more regularly, so be sure to read the latest version of the whitepaper available. TIP: Some of the AWS whitepapers are available in Kindle format, but I noticed that the Kindle versions are often outdated, so at times you need to opt for the PDF download to get the latest version.
If you are about to take the CCP exam, be sure to read this whitepaper (more than once if you can) – it has 100% relevance for the exam and, just like the exam itself, introduces you to the AWS building blocks and the power of this cloud platform. I’m going to read all the others mentioned above soon and will write a post dedicated to each of the whitepapers.
Although a steep drop in the number of K2 posts from my side is more than likely in the future, I keep jotting down little things for now. The other day I was a bit perplexed by the K2 Platform Classic setup manager, which blocked me from adjusting the K2 service account like this:
What it actually does is force you to use the currently logged in user. As for the why part, it happens when you have a developer license – in that case the setup manager forces you to use the currently logged in user as a “service” account (remember that the Developer license requires you to run your K2 service in a window within an interactive user session). Just thought it might be useful to write this down and share.