I wanted to touch briefly on the security concerns of having Scalr accessible via the Internet. If you are running your own install of Scalr, this is an important factor to consider before even adding the first farm. For my own sake I will not be getting into my exact setup, but will instead talk about a few approaches to locking down access to Scalr.
Possibly the best approach is to limit access to the Scalr interface to your internal network, requiring users to use OpenVPN or some other VPN solution to access internal resources, which would include Scalr. If you are hosting Scalr on an AWS instance, be sure to set the security group to allow only the port your VPN runs on. You can find a quick and dirty howto for OpenVPN on an EC2 instance at Google Books.
Another option is to use SSL and mod_access (Apache 1.3) or its renamed equivalent in Apache 2.2, mod_authz_host, to limit who has access to the Scalr interface. At the very least you should use SSL to access Scalr. You can also add a layer of authentication for good measure using Apache Basic Authentication.
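As a rough sketch, an Apache 2.2 virtual host along these lines combines all three layers; the hostname, paths, and allowed network below are placeholders, not my actual setup:

```apache
# Hypothetical Apache 2.2 vhost for Scalr -- adjust hostname, paths, and network
<VirtualHost *:443>
    ServerName scalr.example.com
    DocumentRoot /var/www/scalr

    # SSL so credentials never cross the wire in the clear
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/scalr.crt
    SSLCertificateKeyFile /etc/pki/tls/private/scalr.key

    <Directory /var/www/scalr>
        # mod_authz_host: only allow the internal/VPN network
        Order deny,allow
        Deny from all
        Allow from 10.0.0.0/8

        # Extra layer: HTTP Basic Authentication
        AuthType Basic
        AuthName "Scalr"
        AuthUserFile /etc/httpd/scalr.htpasswd
        Require valid-user
    </Directory>
</VirtualHost>
```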
Since Scalr controls the rest of your AWS setup, it is by far the one thing you want to lock down as much as possible.
I wanted to touch again on the use of Subversion (SVN) to populate /var/www on Scalr app servers. The issue is how to get your web content onto a new instance once Scalr has automagically launched it under high load: Scalr will launch another app-role instance once the server reaches a load threshold you have previously set, but once that instance is up, its /var/www still needs to be populated before it can serve content through the load balancer.
This is where SVN and Scalr Scripting come into play. I keep all my site content in an SVN repo and symlink to whatever production tag should be live at the time. To get the directory populated, a simple bash script does an svn checkout of that tag and is attached to the “OnHostUp” option, so once the server sends its SNMP trap saying it is up, the script is executed. This is also a helpful means of updating your servers to a newer build. I DO NOT check the tag out directly into /var/www; instead, the tags are checked out under /var/svn and /var/www is a symlink. When it is time to roll out a new production tag, I simply check the new tag out to /var/svn and redo the symlink to point at it. That way, if there is an issue that was not foreseen in the QA process, I can roll back to a known-good tag by redoing the symlink. This is an easy but very effective way of using Scalr scripts and SVN to manage content loading on servers.
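A minimal sketch of such a rollout script, suitable for the “OnHostUp” hook; the repo URL is a made-up placeholder, and /var/svn and /var/www match the layout described above:

```shell
#!/bin/bash
# Sketch of a tag-rollout script for Scalr's OnHostUp hook.
# SVN_BASE is a hypothetical repo URL; override the variables for your layout.
SVN_BASE="${SVN_BASE:-http://svn.example.com/site/tags}"
CHECKOUT_DIR="${CHECKOUT_DIR:-/var/svn}"
DOCROOT="${DOCROOT:-/var/www}"

deploy_tag() {
    local tag="$1"
    local target="$CHECKOUT_DIR/$tag"
    # Check the tag out under /var/svn if we don't already have it
    if [ ! -d "$target" ]; then
        svn checkout "$SVN_BASE/$tag" "$target"
    fi
    # Repoint the docroot symlink; rolling back is just re-running with the old tag.
    # Note: /var/www must be a symlink (or absent), not a real directory.
    ln -sfn "$target" "$DOCROOT"
}

# Example: deploy_tag release-1.2   (call this from the OnHostUp script)
```

Because old checkouts stay under /var/svn, rolling back to a known-good tag is the same one-line call with the previous tag name.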
Since I have been using Scalr to manage my Amazon Web Services farms, I have been wanting more monitoring in terms of statistical information on services, traffic, disk usage, and uptime, to name a few. Scalr has built-in basic event notifications such as host up and host down, along with very basic load statistics via RRDtool. In the past I have always used Zabbix for most projects I have worked on, so I wanted to be able to use it with Scalr. I am still testing the setup I am going to describe, so please keep that in mind. This is NOT a howto, but more of a brainstorm of how I plan to integrate Zabbix into my Scalr setup. The Zabbix documentation (PDF) covers a few ways to use auto-discovery (page 173); for example, you can have Zabbix watch a block of IPs for new Zabbix Agents. So here is what I will have my Zabbix Server do:
- Look for new Zabbix Agents on my AWS internal IP range.
- If system.uname contains “Scalr”, add the host to the Scalr server group.
- The server must be up for 30+ minutes.
There will be other stipulations in order for a server to be added to Zabbix. I will have system templates for each of my Scalr AMI roles; once a server is added, Zabbix will place it in its respective group and monitor the items and triggers listed in that template. There will also be a rule to remove old instances from Zabbix 24 hours after the host-down trigger fires, so the Zabbix database is not cluttered with instances that are no longer running. If you also have Windows AWS instances, you can add a rule to monitor those as well; the AMI just needs the Zabbix Windows Agent installed.
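Since system.uname reflects the machine's hostname, one way to make the “contains Scalr” match reliable is to bake a hostname along these lines into the AMI's boot scripts. A sketch, with the caveat that SCALR_ROLE is a hypothetical variable (set it however you track roles); the metadata URL is the standard EC2 instance-metadata endpoint:

```shell
#!/bin/bash
# Sketch: give each instance a hostname containing "Scalr" so the Zabbix
# discovery rule (system.uname contains "Scalr") will match it.
role="${SCALR_ROLE:-app}"   # hypothetical: export the role name however you like
# Standard EC2 instance metadata; falls back to "local" when off EC2
instance_id=$(curl -s -m 2 http://169.254.169.254/latest/meta-data/instance-id 2>/dev/null || echo local)
new_hostname="Scalr-${role}-${instance_id}"
echo "$new_hostname"
# hostname "$new_hostname"   # uncomment on the real instance (requires root)
```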
When I decided to take the route of running Scalr on our own servers to manage our Amazon Web Services farms, one important consideration was Scalr’s use of DNS servers to change records. I made the choice of hosting our own DNS infrastructure to keep initial costs down, but also to allow us the flexibility to change and control our DNS internally. So now, on to my approach to doing this most effectively. First, two self-managed dedicated servers were chosen for NS1 and NS2, one in a west coast location and the second on the east coast; since more of our traffic comes from the western states, NS1 was placed accordingly. I then used two non-Scalr-managed AMIs to run our NS3 and NS4 servers, each in a separate AWS datacenter. The idea is that the internal custom-bundled AMIs I built for Scalr use NS3 and NS4 for their internal DNS. I find this to be an excellent mix of AWS and old-fashioned dedicated servers for managing our DNS.
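On the custom-bundled AMIs, pointing internal DNS at NS3 and NS4 just means a resolver config along these lines; the IPs and search domain are placeholders for the two EC2 nameserver instances, not my real addresses:

```
# /etc/resolv.conf on the Scalr-managed AMIs (placeholder values)
search example.com
nameserver 10.1.2.3   # NS3, first AWS datacenter
nameserver 10.4.5.6   # NS4, second AWS datacenter
```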
I have been using Amazon Web Services for some time now and decided to use the open source Scalr project to manage my farms on AWS. After overcoming many hurdles to getting Scalr running successfully, I have been using it to manage my farms for about a month. Compared to the initial outlay required by RightScale, the time it took to get Scalr running was nominal. Plus, I like the ability to have a developer tweak the functionality of Scalr to fit our business requirements. There is an active Google Group for Scalr that I have used to solve most of my issues. People also have the option of using Scalr.net as a pay-per-month solution to manage their AWS farms; I chose to host my own instance of Scalr since we are doing large-scale hosting and, as previously mentioned, need to customize it. I do enjoy the ease with which Scalr bundles the new custom roles I build for our various application servers: it allows you to simply press a button to save a new role for future use. Along with its ability to auto-scale as traffic dictates, those are the two biggest pluses for me in using Scalr.
I will be adding more on my experiences with Scalr in the coming days. If you are installing on CentOS 5, I have some install notes I posted here.
I have been playing around with the recently released AWS Console. It is a good start toward a nice AWS-provided interface for controlling EC2. It only makes sense for them to provide a console instead of forcing people to look elsewhere, such as RightScale or Scalr. For that matter, I am not sure why Amazon does not just buy RightScale and provide its services as part of AWS.
I came across Scalr by accident when I was browsing projects in Google Code. It appears as though Scalr has become a pay service to manage your AWS instances along similar lines to RightScale. But the main difference is that Scalr charges a scant $50 a month. From the Scalr Google Code page:
Scalr is a fully redundant, self-curing and self-scaling hosting environment utilizing Amazon’s EC2.
It allows you to create server farms through a web-based interface using prebuilt AMI’s for load balancers (pound or nginx), app servers (apache, others), databases (mysql master-slave, others), and a generic AMI to build on top of.
The health of the farm is continuously monitored and maintained. When the Load Average on a type of node goes above a configurable threshold a new node is inserted into the farm to spread the load and the cluster is reconfigured. When a node crashes a new machine of that type is inserted into the farm to replace it.
Multiple AMI’s are provided for load balancers, mysql databases, application servers, and a generic base image to customize. Scalr allows you to further customize each image, bundle the image and use that for future nodes that are inserted into the farm. You can make changes to one machine and use that for a specific type of node. New machines of this type will be brought online to meet current levels and the old machines are terminated one by one.
I would love to hear some comments from those already using the service and how it compares to RightScale.