Ok, so the basic idea here is that we're going to create a cluster, share a block device from one server to the other nodes as a Global Network Block Device (GNBD), put it into an LVM configuration, and format the filesystem with the Global File System (GFS) for file locking between nodes… and we're going to do it all natively with the Red Hat Cluster Suite. This is a good, low-rent implementation of block-device sharing in a cluster where iSCSI or FCP isn't available to the hosts. It's better than NFS because we get real locking mechanisms, and we get our own fencing mechanism for the cluster (which, unfortunately, I won't be covering in this post). I recently had the opportunity to do this as a proof of concept, and it's really cool stuff…
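As a rough sketch of the flow described above, here is what the GNBD-to-LVM-to-GFS layering looks like on RHEL 5-era tooling. The device, export, volume-group, and cluster names are all illustrative, and exact flags may vary by release, so treat this as an outline rather than a runbook:

```shell
# On the server that owns the disk (assume /dev/sdb1), start the
# GNBD server daemon and export the block device under a name:
gnbd_serv
gnbd_export -d /dev/sdb1 -e shared_disk

# On each client node, import everything the server exports; the
# device then shows up under /dev/gnbd/:
gnbd_import -i storage-server

# Layer LVM on top of the imported device:
pvcreate /dev/gnbd/shared_disk
vgcreate sharedvg /dev/gnbd/shared_disk
lvcreate -l 100%FREE -n sharedlv sharedvg

# Format with GFS using DLM locking; -t is cluster_name:fs_name and
# -j is the journal count (one per node that will mount it):
gfs_mkfs -p lock_dlm -t mycluster:sharedfs -j 2 /dev/sharedvg/sharedlv
mount -t gfs /dev/sharedvg/sharedlv /mnt/shared
```

The lock_dlm protocol is what gives you the cluster-wide locking that plain NFS can't, which is why fencing matters: a misbehaving node has to be cut off before the others can safely recover its journal.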

Picture the situation: you have a dedicated server hosted at an off-site location (disaster recovery, branch office, etc.), you have your primary location, and then you have a third-party location somewhere in the middle, perhaps cloud hosting… From your primary location to your secondary site, transfer speeds are terrible, and if you're using this as a DR site, that quickly becomes agonizing when off-loading backups or other important data. From your tertiary location to your secondary location, however, you get great routing and great speeds, and the link from your primary location to your tertiary location is as fast as you can go. At your DR site, you have an FTP server connected to a NAS, a SAN, or some other mass-storage unit, and you need to ship backups and other sensitive data over the line quickly and securely. With the routing problems from your primary site, you find yourself spending way too much time trying to figure out the best way to get data from point A to point B rapidly. Somehow, you ponder, there must be a way to use the tertiary host as a hop on the trip from your primary site to your secondary site…
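One way to use a middle host as a hop, sketched with stock OpenSSH (the hostnames "relay" and "drsite" are hypothetical, and this may not be the exact mechanism the full post describes):

```shell
# Copy through the well-connected tertiary box in one command.
# -J (ProxyJump) is available in OpenSSH 7.3 and later:
scp -J user@relay backup.tar.gz user@drsite:/backups/

# Older clients can get the same effect with ProxyCommand, which
# tunnels the connection to drsite through relay:
scp -o 'ProxyCommand ssh user@relay -W %h:%p' backup.tar.gz user@drsite:/backups/
```

Either way the data is encrypted end to end; the relay only forwards the TCP stream, so the fast primary-to-tertiary and tertiary-to-secondary links do the work while the slow direct route is avoided entirely.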

As Linux administrators, we generally host a range of multi-purpose scripts designed to duct-tape the infrastructure together. Nobody else in the organization knows how, when, or where 95% of these scripts run, or on what interval (a great tactic for job security). Inevitably, we're going to get a request at some point (and most of us already have, that's for sure) to have a script watch a folder on a server for a file to appear and then move it somewhere else, usually with some data or filename massaging in the process. So, all of us have our own variation of this "move-file" script…
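For illustration, here is a minimal sketch of the core of such a script in Python. The function name, suffix filter, and timestamp "massaging" are all invented for the example; a production version would typically run from cron and guard against files that are still being written:

```python
import os
import shutil
import time

def sweep(watch_dir, dest_dir, suffix=".csv"):
    """Move files ending in `suffix` from watch_dir to dest_dir.

    Returns the list of destination paths. As a stand-in for the usual
    filename massaging, each moved file gets a date prefix.
    """
    moved = []
    for name in sorted(os.listdir(watch_dir)):
        if not name.endswith(suffix):
            continue
        src = os.path.join(watch_dir, name)
        dest = os.path.join(dest_dir, time.strftime("%Y%m%d-") + name)
        shutil.move(src, dest)
        moved.append(dest)
    return moved
```

Run it on an interval from cron, or swap the polling loop for inotify if the file needs to move the moment it lands.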

On to something more interesting…

We use VMware ESX 3.5 as our virtualization solution, and I was tasked with finding a way to automate monitoring of CPU and memory usage for ESX guests. We use VirtualCenter to manage and maintain all of our VMs, and a network-monitoring solution to watch all of the devices in our infrastructure. Relying on the VMware guest itself to provide accurate performance data has proven unreliable in the past, but the performance data that VirtualCenter provides is reliable, and that is what we wanted to monitor from our centralized solution.
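The per-VM numbers VirtualCenter exposes live in the SDK's QuickStats structure, which every VMware client toolkit surfaces in some form (the original era used the VI Perl Toolkit; the modern Python equivalent is pyVmomi). A minimal sketch of pulling the two fields in question from a VM object, assuming a pyVmomi-style object layout:

```python
def vm_usage(vm):
    """Return (cpu_mhz, mem_mb) for a VM object that exposes
    summary.quickStats, as pyVmomi's vim.VirtualMachine does.

    overallCpuUsage is reported in MHz and guestMemoryUsage in MB,
    per the VMware SDK's QuickStats fields.
    """
    qs = vm.summary.quickStats
    return qs.overallCpuUsage, qs.guestMemoryUsage
```

A monitoring agent would call this for each VM returned by an inventory walk and push the numbers to the central poller, rather than asking the guest OS itself.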

Just this past week, I was given a programming task: take a Microsoft Word template that had been saved as an XML file (Word's markup-language format) and auto-populate all of the bookmarks in the document with dynamic data from a database. The goal was to take the name of a bookmark (for example, "FirstNameLastName") and populate only that field with the database data, leaving all other data untouched and untransformed. This lets an end user manage the static content and formatting of the document without programming intervention; one person from each department can be in charge of modifying the legal wording, or the wording to customers, without programming having to create a new document template for every change.
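The core trick can be sketched in a few lines of Python. This assumes the 2006 WordprocessingML namespace with `w:bookmarkStart` elements (the Word 2003 WordML flavor spells its bookmark elements differently), and it fills the first text run after each named bookmark while leaving everything else alone:

```python
import xml.etree.ElementTree as ET

# OOXML WordprocessingML namespace; Word 2003's WordML differs.
W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

def fill_bookmarks(xml_text, values):
    """Set the text of the first w:t following each named w:bookmarkStart.

    `values` maps bookmark names (e.g. "FirstNameLastName") to strings.
    The rest of the document is left untouched and untransformed.
    """
    root = ET.fromstring(xml_text)
    pending = None  # name of the bookmark we just passed, if any
    for elem in root.iter():  # document order
        if elem.tag == f"{{{W}}}bookmarkStart":
            name = elem.get(f"{{{W}}}name")
            if name in values:
                pending = name
        elif pending and elem.tag == f"{{{W}}}t":
            elem.text = values[pending]
            pending = None
    return ET.tostring(root, encoding="unicode")
```

Because only the bookmarked runs are rewritten, the department owner's formatting and static wording survive every merge unchanged.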

© 2013 Dan's Blog Suffusion theme by Sayontan Sinha