Monday, July 30, 2012

Establishing Network Load Balancing with Windows Server 2008 R2 x64

Introduction

It is important to understand what a network load balanced cluster is and what it does. The cluster is made up of member hosts, and each host is bound to a public IP address (or addresses) which resolves to the cluster itself.
Each host in the cluster must have Network Load Balancing installed, and bound to the network interface(s) of choice.
When you initially create the cluster, you configure an initial member host and then the cluster settings. This can be a little confusing at first! After the cluster is created (with a single member host), you add additional member hosts separately.
An example Network Load Balancing Cluster:
[Diagram: an example Network Load Balancing cluster]
Note how each host in the cluster has a network interface card which is bound to the cluster’s IP address (10.2.194.100).
What follows is a step-by-step guide to installing and configuring a Network Load Balancing cluster on Windows Server 2008 (R2 x64).  To install and configure it, you’ll need to be (at minimum) a local Administrator on each machine that will participate in the cluster.

Installation and Configuration

1. Install Network Load Balancing
Open Server Manager, click Features, and then click the Add Features link. In the “Add Features Wizard”, scroll to and select “Network Load Balancing”, then confirm and install.
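If you would rather script the installation, the feature can also be added from an elevated command prompt on Windows Server 2008 R2 (ServerManagerCmd is deprecated in R2 but still present). A minimal sketch, assuming the feature ID is NLB – the -query output will confirm the exact ID:

servermanagercmd -query
servermanagercmd -install NLB

Repeat this on every machine that will join the cluster.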
2. Start Network Load Balancing Manager
Open Network Load Balancing Manager from Administrative Tools (or run nlbmgr.exe from a command prompt).
3. Create a new Cluster
Right-click the root tree element in the left-hand column marked “Network Load Balancing Clusters” and select “New Cluster”.
1. For Host, enter the computer name (or IP Address) of a host to add to Network Load Balancing 
    note: Network Load Balancing must be installed on this machine beforehand!
2. Select the interface on the machine you wish to configure for NLB (you might have more than one) – click Next
3. Set the Priority for the host (this is the host’s unique priority/identifier within the cluster). For the initial host, you can leave it as 1 (the highest priority) – click Next
4. Cluster Configuration (not host-specific!)
Add a public IP address which will resolve to the NLB cluster.
This address will be bound to the selected interface of each host in the cluster – i.e. every member host ends up with a static binding to the cluster IP address(es).
You may give the cluster a specific host name (or leave it blank). You can have multiple IP addresses for an NLB cluster (I’m using just one).
On the last page, you may configure port rules (which ports are handled by NLB on the specified cluster IP address(es)).
Clicking Finish creates the cluster with the host you specified at the beginning. If for some reason there are errors, you can manually bind the selected interface of the host to the cluster’s public IP address yourself, and then refresh/”start” the host again.
Here is an example of a configured NLB cluster:
[Screenshot: a configured NLB cluster in Network Load Balancing Manager]
The final piece of the puzzle might be to add a DNS record so that the network load balanced resources can be resolved with a nice friendly URI:
[Screenshot: DNS record pointing at the cluster IP address]
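If the zone is hosted on a Windows DNS server, the record can also be added from the command line with dnscmd. The server name (dns01), zone (contoso.local), and host name (nlbweb) below are made-up placeholders; the IP address is the cluster address from the example above:

dnscmd dns01 /recordadd contoso.local nlbweb A 10.2.194.100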

Troubleshooting

If you get the error message “NLB not bound” on a cluster host, access the host machine and simply add the cluster’s IP address(es) to the selected interface on the machine manually (via the advanced TCP/IPv4 settings of the network connection).
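The same change can be scripted from an elevated command prompt with netsh; the interface name (“Local Area Connection”) and subnet mask below are assumptions, so substitute your own:

netsh interface ipv4 add address "Local Area Connection" 10.2.194.100 255.255.255.0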
You might also need to review the firewall settings on each cluster host, to ensure that traffic is reaching each host as expected.
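For example, if the cluster is load balancing a web site, an inbound rule along these lines on each host will let the traffic through Windows Firewall (port 80 is just an assumption here):

netsh advfirewall firewall add rule name="NLB web traffic" dir=in action=allow protocol=TCP localport=80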

Understanding IIS Bindings, Websites, Virtual Directories, and lastly Application Pools

Bindings:  Did you say “Bindings?”

So you’ve been tasked with developing a new Web application to be hosted on IIS (any version)?  The first things on your mind are usually the design of the Website, how the application will interact with the middle tier, and usually security.  This is a great start in the design process.  However, let’s not forget that jumping straight to this level of design often makes some later decisions a bit more tricky.
It starts with these questions:
  1. Am I going to host everything in one IIS Website?
  2. Will I use an “existing” Website like the Default Web Site or create my own?
  3. Will some of the site require secure authentication using SSL?
The first thing that often happens with developers posed with these questions is that they say these aren’t important, but I quickly smile and say, “We’ll see”.
The primary reason these questions are important is that Websites are accessed by every client using bindings.  The end users of your Web application(s) don’t know they are using bindings because the bindings are usually hidden behind a nice, pretty “Web address” using DNS.  If you don’t know how many Websites your Web application will utilize, then you are going to struggle later when you find yourself limited to the “rules” governed by directories.
You see, Websites have something called Server Bindings which represent the underlying address, port, and potentially a host header that your Website is accessed using.  Do you think that HR staff would be happy if their Website is accessed using the same bindings as your company’s intranet?  I would venture to guess the answer is no.
Bindings 101:
A typical binding for a Website is in the form IP:Port:HostHeader.  For all versions of IIS that anyone reading this in 2010 cares about (version 6.0 and higher), the Default Web Site binding is set to *:80:* meaning that all requests to that server will land at that site.
Valid Bindings:
IP Field      Port Field   Host Header         Result
*             80           *                   All requests to this server's IP address will access this site
*             81           *                   All requests to this server's IP address with :81 will access this site
192.168.1.1   80           *                   All requests to this specific IP address will access this site
*             80           www.microsoft.com   All requests to this URL will access this site
*             80           microsoft.com       All requests to this URL will access this site
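To make bindings concrete, here is a minimal sketch of creating a site with a host header binding from the command line on IIS 7.0 using appcmd; the site name, ID, path, and host header are all hypothetical:

%windir%\system32\inetsrv\appcmd add site /name:"HR Site" /id:2 /physicalPath:"C:\inetpub\hrsite" /bindings:http/*:80:hr.contoso.com

The /bindings value is exactly the IP:Port:HostHeader triplet described above, prefixed with the protocol.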
For the options where you utilize an IP address as the “unique” point of access, you will need to disable HTTP.sys’s default behavior of listening on all IP addresses configured on your server.  For example, if you have 192.168.1.1 and 192.168.1.2 configured as IP addresses on the same server, the default behavior “out of the box” is for HTTP.sys to listen on port 80 on both of them, no matter what binding you set in IIS Manager.
To change this behavior, you will need to configure HTTP.sys’s IPListenList to only listen on a specific address.  This is done via the registry or NetSH, depending on what you are most comfortable with.
Figure 1:  Default setting for IPListen (blank equals *:80:*)
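For the NetSH route on Windows Server 2008 and later, the commands look roughly like this, using the 192.168.1.1 address from the example above (on Windows Server 2003/IIS 6.0 the equivalent was the httpcfg.exe support tool):

netsh http show iplisten
netsh http add iplisten ipaddress=192.168.1.1

Keep in mind that once the IP listen list is non-empty, HTTP.sys only listens on the addresses in that list, so add every address you still want served.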
In short, if you plan to utilize a Website, then know what your bindings will be and where your application will live in production.  If it’s a shared server, you can bet you will need a host header or a unique IP address, so think ahead and get ‘er going.

Websites versus Application Pools

There are so many reasons that Websites & Application Pools get confused that I don’t have enough time to do a post on it.  I’m not going to try and solve the debate here; instead, I’m going to try and educate you on what the fundamental difference between the two is.  In discussions with IT Pros & Developers, rarely will you find one who won’t “admit” they know what each is and when to utilize one or the other, but my guess is that over 70% of them don’t actually know.
Thus, I hope the readers out there who used their decision engine (nice plug, ay?) to find this will enjoy learning about this topic, and together we can reduce that 70% to a much lower number…

Websites:  Container of physical and virtual directories

It really is simple.  A Website is nothing more than a container of physical and virtual directories that has a unique “Server Binding” for clients to access the content.  The default container in IIS has, for years, been %systemdrive%\inetpub\wwwroot, unless you did an unattended install in IIS 6.0, which allowed you to put the files wherever you choose.
Path + Server Binding = Website  … It really is easy. 
NOTE:  There is a serious omission here, completely on purpose.  As you can see, Websites have nothing to do with memory, server processes, bitness, or performance.  They simply are a path + binding.
When to choose a “Website”
With that understanding, you can now make an educated guess as to how to answer the question of whether you should create a new Website or use an existing one.  However, I will make sure to share it in case you missed it - “You decide whether to create a new Website based on whether you would like to have a unique binding for your Website or if you want to use an existing one.”
The path isn’t important in this equation, as I can create 1,000 Websites all pointing to exactly the same path and there are absolutely no problems with doing this (of course, why in the heck you would do this is a great question).  The key point here is that any physical or virtual directory will always use the bindings of the Website, so ensure that you understand this.
When to choose directories?
If there is a Website which is already running and utilizing a binding that you would prefer to use, then you should select this option.  This allows you to utilize the resources of the parent site, if interested, as the server (e.g. IIS) will handle any requests over the same connection(s).  For example, any physical or virtual directory in the IIS path is still considered “/” to the server as it builds out the URI, because the bindings are already mapped at the site level.  This means that URLs can be re-written to go to various different places within the folder hierarchy over the same connection, since the binding is the “same”…
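As a small illustration (the path and directory names are made up), a virtual directory can be added under an existing site on IIS 7.0 with appcmd, and it automatically rides on the parent site’s bindings:

%windir%\system32\inetsrv\appcmd add vdir /app.name:"Default Web Site/" /path:/reports /physicalPath:"C:\data\reports"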
If you choose to put your Web application in its own Website, then you will have to use HTTP redirection (e.g. a 302 issued via Response.Redirect or similar methods) to push requests elsewhere.
So, as you can see, thinking ahead of time about whether you are building a Website for your application or whether it is a child directory (physical or virtual) is an important piece of information to have locked early, early on!

Application Pools:  Container of applications

The very nature of application pools is to do the obvious: contain a single application or multiple applications.  The introduction of application pools in IIS 6.0 caused some head scratching, but in today’s world, where IIS 6.0 is deeply ingrained in enterprises and on the Web, there is less scratching.  However, development teams still often make mistakes by not “thinking” about application pools and their impact on the new applications they are building.  Hence the reason we will chat about this some more today…
First Concept…  Windows Process = Application Pools *not* Windows Process = Website
Second Concept… Process Management = Application Pools *not* Process Management = Website
When to create a new Application?
By default, a Website in IIS 6.0 or IIS 7.0 must have at least a root application to run.  If the root application (/) is deleted or corrupted, then IIS will fail, as in, not serve your application.  Both products ship with a default application which is assigned to the DefaultAppPool.  I should note this is only the case if no other Microsoft features have been installed and we have just the basic Web server installed.
As you can see, there is also a Classic .NET AppPool but no applications are currently bound to it.  In IIS 7.0, any managed code application can choose to utilize the Integrated Pipeline or to use the classic ASP.NET pipeline which is present in IIS 6.0.
By default, you as a developer of a Web application can choose to simply inherit the settings of the parent Application Pool (/) and choose to not create your own.  This is absolutely fine.  So you might ask, what do I get from choosing this route?  I’m glad you asked because it is important to know that you get all the settings of the parent application pool which in this case is the DefaultAppPool.
These settings include the following:
Setting                        Purpose
Recycling Settings             How often the app pool will be recycled, such as by time intervals, memory usage, etc.
Process Security               The identity that the W3WP process will run as
Pipeline Type (IIS 7.0 only)   Whether to use the integrated pipeline, the classic pipeline, or no managed code at all
Bitness                        Whether the process runs as native 64-bit or as a 32-bit process (64-bit OS only)
As you can see, you need to make some important decisions early on, or you are going to change a lot during the development process.
When to create a new Application Pool?
Well, it sounds like I’m best off creating a new application pool for every one of my Web applications, right?  I would say you’ve been suckered and convinced that this is best without all the facts.  The fact is that creating an application pool requires a better understanding of your security strategy – do you run as Network Service, a domain service account, etc. – which starts to complicate things very quickly.  One thing that many managed code developers love to take advantage of is the caching capability of a process and managed code.  Each time you create an application and bind it to its own unique application pool, you are limiting your ability to share cache with other .NET applications running on the same box.  For example, if you use the Microsoft Enterprise Library throughout your Web applications, then you can often utilize caching to improve performance.  As soon as you break these out into different process boundaries (e.g. App Pools), you no longer have that benefit.
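For reference, if you do decide a dedicated application pool is warranted, on IIS 7.0 it is just a couple of appcmd commands; the pool and application names below are hypothetical:

%windir%\system32\inetsrv\appcmd add apppool /name:"MyAppPool"
%windir%\system32\inetsrv\appcmd set app "Default Web Site/myapp" /applicationPool:"MyAppPool"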
There are a number of examples like these that drive the question – do I use my own application pool, or do I use one that is already running?  I’m happy to be posed a question via comments or email on this topic, hear what your situation is, and make my suggestion :)
Nonetheless, my guidance is to be careful in your planning when utilizing your own Application Pools and to share resources where possible.  There are absolutely situations where one might choose to always go hard line and create an app pool for every new Web development project.  I just caution you and say, “Not so fast my friend… “


Oracle commands to create user

S:\>sqlplus

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Jul 30 18:09:19 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Enter user-name: system
Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user username identified by Password;

User created.

SQL> grant connect,resource to username;

Grant succeeded.

SQL> alter user username account unlock;

User altered.

SQL>

Tuesday, July 24, 2012

Security Event Logging in Windows XP - Folder and File Audit

1) From the command line, launch "%SystemRoot%\system32\secpol.msc" (you can also find this in the GUI)
2) Under "Local Policies\Audit Policy", double-click "Audit object access"
3) Check "Success" and/or "Failure" to turn auditing on for the successful and/or failed attempts you wish to audit
4) Close the above and then use Windows Explorer to find the folder or specific file(s) you want to audit
5) Right-click this folder, select "Properties" and then click the "Security" tab
6) Click the "Advanced" button
7) Click the "Auditing" tab
8) Click the "Add" button
9) Type in who you want to audit (a user or group name), or "Everyone" if you wish
10) Click OK
11) Check the boxes for whatever you want to audit (e.g., both "Delete Subfolders and Files" and "Delete"). You can audit "Successful" and/or "Failed" attempts as per step 3 above
12) Repeatedly click OK to exit all the way out

Whatever you selected for auditing is now active and will appear in the "Security" event log (the process should be very similar in Win2003)

Windows 2008 R2, Backup Exec, and "A failure occurred accessing the Writer metadata" - Workaround



Nothing tramples the joy of playing with a new operating system faster than finding out that your vendor is being a deadbeat and hasn't put out a compatible release yet. You'd think that out of the army of programmers that Symantec has that they'd have at least one technet or msdn subscription and that they'd have started working out compatibility issues in the meager half year that the betas were available. I was also amused to find that on their forums some of their staff didn't realize that the RTM was out yet for Windows 7 and 2008 R2... But I digress.

So you're using Backup Exec 12.5 and trying to backup a Windows 2008 R2 RTM server using the Advanced Open File option and you get this error:

V-79-57344-65225 - AOFO: Initialization failure on: "\\MyServerName\System?State". Advanced Open File Option used: Microsoft Volume Shadow Copy Service (VSS).
Snapshot provider error (0xE000FEC9): A failure occurred accessing the Writer metadata


  • Option 1: Wait a month or so till a hotfix comes out.
  • Option 2: Wait until Backup Exec 2010 comes out with official support for R2.
  • Option 3: Fix the VSS issue that's causing it in the first place!

During the installation of Windows 2008 R2 RTM, setup creates a System Reserved (recovery) partition that's about 100MB. When the AOFO agent kicks in, it works with the VSS providers in the operating system to create snapshots. However, VSS really doesn't like tiny partitions like that 100MB System Reserved partition. So at this point you have two choices.

  • A) Wipe the partition out. (Note: if you used Diskpart to set up the drive instead of the Windows 2008 setup program, this partition won't exist anyway.)
  • B) Find a workaround for the VSS snapshot.

I didn't really want to do option A yet as I'm not fully sure if that'll have any impact down the line so I decided on option B.

UPDATE: Some of you reported success with just assigning the partition a drive letter. Try it and if it works for you, then don't bother with the vssadmin parts.

I got pretty familiar with the VSSADMIN command while working with Hyper-V and backups so I knew that it could be used to redirect VSS snapshots to larger partitions. The problem I ran into is that it didn't like the fact that the System Reserved partition didn't have a drive letter. So I did the quick fix and used Disk Management to assign it a random drive letter - in this case P:
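If you'd rather stay at a command prompt than open Disk Management, the same drive letter assignment can be done with diskpart; the volume number below is an assumption, so pick the 100MB System Reserved volume from the list volume output:

diskpart
DISKPART> list volume
DISKPART> select volume 1
DISKPART> assign letter=P
DISKPART> exit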



Then a quick drop to a command prompt and run vssadmin list volumes

C:\Users\Administrator>vssadmin list volumes
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001-2005 Microsoft Corp.

Volume path: P:\
Volume name: \\?\Volume{a2b716d3-8c1f-11de-a5ed-826d6f6e6973}\
Volume path: C:\
Volume name: \\?\Volume{a2b716d4-8c1f-11de-a5ed-826d6f6e6973}\
Volume path: D:\
Volume name: \\?\Volume{75c2418c-8c0e-11de-ae3c-001143dd2544}\


You'll note there's an entry for each of your partitions. Now we set up a ShadowStorage association for P:\ (the 100MB partition). ShadowStorage basically sets aside space on one volume to store snapshots of a volume. In this case I'm going to store snapshots of P: on D:

vssadmin add shadowstorage /For=P: /On=D: /MaxSize=1GB

You have to specify a MaxSize, so I picked 1GB.

Now run vssadmin list shadowstorage to confirm the link has been set up.

C:\Users\Administrator>vssadmin list shadowstorage
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001-2005 Microsoft Corp.

Shadow Copy Storage association
For volume: (P:)\\?\Volume{a2b716d3-8c1f-11de-a5ed-826d6f6e6973}\
Shadow Copy Storage volume: (D:)\\?\Volume{75b2419c-8c5e-11de-af3b-001143dd2344}\
Used Shadow Copy Storage space: 0 B (0%)
Allocated Shadow Copy Storage space: 0 B (0%)
Maximum Shadow Copy Storage space: 1 GB (4%)


If you have any other volumes configured for Shadow Copies you'll also see them listed there. (i.e. If you enabled "Previous Versions" for a file share, etc)

At this point you're done. I was able to do a successful backup of the server with the AOFO (Advanced open file option) enabled after making this change. My backup seemed a bit slow but it is an older server so I can't be sure if speed was a machine issue or an R2/Symantec issue.
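If you ever want to undo the workaround later, the shadow storage association can be removed again with vssadmin (using the P: and D: letters from this example), and the temporary drive letter can then be taken away in Disk Management:

vssadmin delete shadowstorage /For=P: /On=D: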