Protecting Active Directory with Snapshot Strategies

Using snapshots to protect Active Directory (AD) without careful planning is a recipe for disaster. AD is a loosely consistent, distributed, multi-master database and must not be treated as a static system. Without carefully addressing how AD works with time stamps, version stamps, Update Sequence Numbers (USNs), Globally Unique Identifiers (GUIDs), Relative Identifiers (RIDs) and Security Identifiers (SIDs), and what a restoration requires, the system could quickly become unusable or severely damaged in the event of an incorrectly invoked snapshot reversion.

Many failure scenarios can occur if we re-introduce an AD replica to service from a snapshot image without special handling. A snapshot-based re-introduction seriously impacts the RID functional component. In any AD system, RIDs are created in range blocks and assigned for use to each participating Domain Controller (DC) by the DC holding the RID master role. RIDs are used to create SIDs for all AD objects like Group or User objects, and every SID must be unique. Let's take a closer look at the SID to understand why RID handling is such a critical function.

A SID is composed of the following symbolic components, written in the form S-R-IA-SA-RID (the short sketch after the list splits an example value into these parts):

  • S: Indicates the value is a SID.
  • R: Indicates the revision of the SID.
  • IA: Indicates the issuing authority. Most are the NT Authority, identity number 5.
  • SA: Indicates the sub-authority, also known as the domain identifier.
  • RID: Indicates the Relative ID.
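
To make the format concrete, here is a minimal bash sketch that splits the hypothetical example SID value used below into these components:

#!/bin/bash
# Split a SID of the S-R-IA-SA-RID form into its parts (illustrative only;
# the value is the hypothetical example used in this article).
sid="S-1-5-21-3725033245-1308764377-180088833-3212"
IFS='-' read -r s rev ia sa1 sa2 sa3 sa4 rid <<< "$sid"
echo "Type (S):               $s"
echo "Revision (R):           $rev"
echo "Issuing authority (IA): $ia"        # 5 = NT Authority
echo "Domain identifier (SA): $sa1-$sa2-$sa3-$sa4"
echo "Relative ID (RID):      $rid"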

Now looking at some real SID example values, we see that on a DC instance only the RID component of each SID (the final field) is unique:

DS0\User1      = S-1-5-21-3725033245-1308764377-180088833-3212
DS0\UserGroup1 = S-1-5-21-3725033245-1308764377-180088833-7611

When an older snapshot image of a DC is reintroduced, its assigned RID range will likely contain RID entries that were already used to generate SIDs. Those SIDs would have replicated to the other DCs in the AD forest. When the reintroduced DC starts up, it will try to participate in replication and to service account authentications. Depending on the age and configuration of its secure channel, the DC could successfully connect. This snapshot re-introduction event must be avoided, since any RID usage from the aged DC will very likely produce duplicate SIDs, which is obviously very undesirable.

Under normal AD recovery methods we would either restore AD from a backup or build a new server, perform a dcpromo on it, and seize DC roles if required. The most important element of a normal AD restore process is the DC GUID reinitialization function. Reinitializing the DC GUID value allows the restoration of an AD DC to occur correctly: a newly generated GUID becomes part of the domain identifier, and thus the DC can create unique SIDs despite the fact that the RID assignment range it holds may be a previously used one.

When we use a snapshot image of a guest DC VM, none of the required Active Directory restore steps occur at system startup; we would have to manually bring the host online in Directory Services Restore Mode (DSRM) without a network connection and then set up the NTDS restore mode. I see this as a serious security risk, as there is a significant probability that the host could be brought online without these steps occurring, potentially creating integrity issues.

One mitigation to this identified risk is to perform the required change before a snapshot is captured and, once the capture is complete, revert the change back to the non-restore state. This completely prevents a snapshot image of a DC from coming online from a past time reference.

To achieve this level of server state and snapshot automation, we need to provision a service channel from our storage head to the involved VMs, or for that matter any storage consumer. A service channel can provide other functionality beyond the NTDS state change as well; one example, sketched below, is the ability to flush I/O using VSS, sync and the like.
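
As a purely hypothetical sketch of that idea, assuming the ws0 service channel host alias used later in this example, a pre-snapshot flush could look like this:

# Hypothetical sketch: flush Windows I/O via the service channel before a
# snapshot ("vssadmin create shadow" exists only on Windows Server editions).
ssh -t ws0 vssadmin create shadow /for=C:
# ...and flush the local buffers on the storage head itself:
sync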

We can now look at a practical example of how to implement this strategy on OpenSolaris-based storage heads and W2K3 or W2K8 servers.

The first part of the process is to create the service channel on a VM or any other Windows host that can support VBScript, PowerShell and the like. In this specific case we need to provision an SSH server daemon that will allow us to issue commands directed at the storage-consuming guest VM from the providing storage head. There are many products available that can provide this service; I personally like MobaSSH, which I will use in this example. Since this is a Domain Controller we need to use the Pro version, which supports domain-based user authentication from our service channel VM.

We need to create a dedicated user that is a member of the domain's BUILTIN\Administrators group. This poses a security risk, so you should mitigate it by restricting this account to only the machines it needs to service.

e.g. in AD, restrict it to the DCs, any involved VMs to be managed, and the Service Channel system itself:

[Screenshot: Restricting user machine logins - the scu account's logon workstation restriction in AD]

A dedicated user allows us to define authentication from the storage head to the service channel VM using a trusted SSH RSA key that is mapped to the user instance on both the VM and the OpenSolaris storage host. This user will launch any execution process issued from the OpenSolaris storage head.

In this example I will use the name scu, which is short for Service Channel User.

First we need to create the scu user on our OpenSolaris storage head.

root@ss1:~# useradd -s /bin/bash -d /export/home/scu -P 'ZFS File System Management' scu
root@ss1:~# mkdir /export/home/scu
root@ss1:~# cp /etc/skel/* /export/home/scu
root@ss1:~# echo "PATH=/bin:/sbin:/usr/ucb:/etc:." > /export/home/scu/.profile
root@ss1:~# echo "export PATH" >> /export/home/scu/.profile
root@ss1:~# echo "PS1='\${LOGNAME}@\$(/usr/bin/hostname)~# '" >> /export/home/scu/.profile

root@ss1:~# chown -R scu /export/home/scu
root@ss1:~# passwd scu

In order to use an RSA key for authentication we must first generate an RSA private/public key pair on the storage head. This is performed using ssh-keygen while logged in as the scu user. Leave the passphrase blank; otherwise every automated session will prompt for it.

root@ss1:~# su - scu

scu@ss1~#ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/scu/.ssh/id_rsa):
Created directory '/export/home/scu/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/scu/.ssh/id_rsa.
Your public key has been saved in /export/home/scu/.ssh/id_rsa.pub.
The key fingerprint is:
0c:82:88:fa:46:c7:a2:6c:e2:28:5e:13:0f:a2:38:7f scu@ss1
scu@ss1~#
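
If you prefer to script this step, the key pair can also be generated non-interactively; the -N "" flag supplies the blank passphrase:

scu@ss1~#ssh-keygen -t rsa -N "" -f /export/home/scu/.ssh/id_rsa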

We now have the public key available in the file named id_rsa.pub; the content of this file must be copied to the .ssh/authorized_keys file on the target SSH instance. The private key file named id_rsa MUST NOT be exposed to any other location and should be secured. You do not need to store the private key anywhere else, since you can regenerate the pair at any time if required.

Before we can continue we must install and configure the target Service Channel VM with MobaSSH.

It's a simple setup: just download MobaSSH Pro to the target's local file system.

Execute it.

Click install.

Configure only the scu domain-based user and clear all others from accessing the host, e.g.:

[Screenshot: "Moba Domain Users" - the MobaSSH user list with only the scu domain account enabled]
Once MobaSSH is installed and restarted we can connect to it and finalize the secured SSH session. Don't forget to add the scu user to your AD domain's BUILTIN\Administrators group before proceeding. You also need to perform an initial NT login to the Service Channel Windows VM using the scu user account prior to using the SSH daemon; this is required to create its home directories.

In this step we use PuTTY to establish an SSH session to the Service Channel VM and then secure-shell to the storage server named ss1. We transfer the public key back to ourselves using scp and exit host ss1. Finally we use cat to append the public key file content to the .ssh/authorized_keys file in the scu user's profile. Once these steps are complete we can establish an automated, promptless, encrypted session from ss1 to the Service Channel Windows VM.

[Fri Dec 18 – 19:47:24] ~
[scu.ws0] $ ssh ss1
The authenticity of host 'ss1 (10.10.0.1)' can't be established.
RSA key fingerprint is 5a:64:ea:d4:fd:e5:b6:bf:43:0f:15:eb:66:99:63:6b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ss1,10.10.0.1' (RSA) to the list of known hosts.
Password:
Last login: Fri Dec 18 19:47:28 2009 from ws0.laspina.ca
Sun Microsystems Inc.   SunOS 5.11      snv_128 November 2008

scu@ss1~#scp .ssh/id_rsa.pub ws0:/home/scu/.ssh/ss1-rsa-scu.pub
scu@ws0's password:
id_rsa.pub           100% |*****************************|   217       00:00
scu@ss1~#exit

[Fri Dec 18 – 19:48:09]
[scu.ws0] $ cat .ssh/ss1-rsa-scu.pub >> .ssh/authorized_keys
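
For reference, the same key installation can be collapsed into one step from the storage head, assuming password authentication is still enabled on ws0 at that point:

scu@ss1~#cat .ssh/id_rsa.pub | ssh ws0 "cat >> .ssh/authorized_keys"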

With our automated RSA key authentication completed, we can proceed to customize the MobaSSH service instance to run as the scu user. We need to perform this modification to enable VBScript WMI DCOM impersonate caller rights when instantiating objects. In this case we are calling a remote registry object over WMI and modifying the NTDS service registry startup values, which can only be performed by an administrator account. This modification essentially extends the storage host's capabilities to reach any Windows host that needs integral system management function calls.
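
Before wiring up any scripts, a quick sanity test of the channel from the storage head is worthwhile; whoami should report the scu domain account the MobaSSH service now runs under:

scu@ss1~#ssh -t ws0 whoami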

On our OpenSolaris storage head we need to invoke a script which will remotely change the NTDS service state, then locally snapshot the provisioned storage, and lastly return the NTDS service to its normal state. To accomplish this we will define a cron job, which needs some basic configuration steps, as follows.

The solaris.jobs.user authorization is required to submit a cron job; this allows us to create the job but not administer the cron service. If an /etc/cron.d/cron.allow file exists, its presence overrides this RBAC setting and you will need to add the user to that file (as shown below) or convert to the best-practice RBAC method.
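
For example, if the allow file is present on your build, the user can be added like this:

root@ss1~# echo scu >> /etc/cron.d/cron.allow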

root@ss1~# usermod -A solaris.jobs.user scu
root@ss1~# crontab -e scu
59 23 * * * ./vol1-snapshot.sh

Hint: crontab uses vi - see the "vi cheat sheet" at http://www.kcomputing.com/kcvi.pdf

The key sequence is: hit "i", key in the line, then hit Esc and ":wq" to save; to abort, use Esc and ":q!".

Be aware of the time zone the cron service runs under; you should check it and adjust it if required. Here is an example of what's required to set it.

root@ss1~# pargs -e `pgrep -f /usr/sbin/cron`

8550:   /usr/sbin/cron
envp[0]: LOGNAME=root
envp[1]: _=/usr/sbin/cron
envp[2]: LANG=en_US.UTF-8
envp[3]: PATH=/usr/sbin:/usr/bin
envp[4]: PWD=/root
envp[5]: SMF_FMRI=svc:/system/cron:default
envp[6]: SMF_METHOD=start
envp[7]: SMF_RESTARTER=svc:/system/svc/restarter:default
envp[8]: SMF_ZONENAME=global
envp[9]: TZ=PST8PDT

Let’s change it to CST6CDT

root@ss1~# svccfg -s system/cron:default setenv TZ CST6CDT
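
As with the PATH change below, the new environment value only takes effect once the cron service is refreshed and restarted:

root@ss1~# svcadm refresh system/cron:default
root@ss1~# svcadm restart system/cron:default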

Also, the default environment path for cron may cause "command not found" issues in scripts; check for a path and adjust it if required.

root@ss1~# cat /etc/default/cron
#
# Copyright 1991 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
#pragma ident   "%Z%%M% %I%     %E% SMI"
CRONLOG=YES

This one has no default PATH entry; append one using echo.

root@ss1~# echo "PATH=/usr/bin:/usr/sbin:/usr/ucb:/etc:." >> /etc/default/cron
root@ss1~# svcadm refresh cron
root@ss1~# svcadm restart cron

With a cron job defined to run the script named vol1-snapshot.sh from the scu user's default home directory, we are now ready to create the script content. Our OpenSolaris storage host calls a batch file on the remote Service Channel VM, which in turn executes a VBScript to set the NTDS startup mode. To do this from a Unix bash script we use the following statements in the vol1-snapshot.sh file.

ssh -t ws0 NTDS-PreSnapshot.bat
snap_date="$(date +%d-%m-%y-%H:%M)"
pfexec zfs snapshot rp1/san/vol1@$snap_date
ssh -t ws0 NTDS-PostSnapshot.bat
exit

Here we run a secure shell call to the MobaSSH daemon with the -t option, which runs the tty screen locally and allows us to issue an "exit" from the remote calling script, closing the secure shell. On the Service Channel VM the VBScript calls are executed using the pre and post batch files illustrated as follows.
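
As a minimal hardening sketch (same host alias, batch file names and dataset as above, and assuming MobaSSH propagates the batch files' exit codes), the script can skip the snapshot when the pre-snapshot call fails, so a DC image is never captured outside of restore mode:

#!/usr/bin/bash
# vol1-snapshot.sh - hardened sketch: snapshot only if the NTDS restore
# flag was set successfully, and always attempt to clear the flag after.
if ssh -t ws0 NTDS-PreSnapshot.bat; then
    snap_date="$(date +%d-%m-%y-%H:%M)"
    pfexec zfs snapshot rp1/san/vol1@$snap_date || echo "zfs snapshot failed" >&2
else
    echo "NTDS pre-snapshot state change failed - aborting snapshot" >&2
fi
ssh -t ws0 NTDS-PostSnapshot.bat
exit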

scu Batch Files

NTDS-PreSnapshot.bat
cscript NTDS-SnapshotRestoreModeOn.vbs DS0
exit

NTDS-PostSnapshot.bat
cscript NTDS-SnapshotRestoreModeOff.vbs DS0
exit

NTDS-SnapshotRestoreModeOn.vbs

strComputer = WScript.Arguments(0)
const HKLM = &H80000002
Set oRegService = GetObject("WinMgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:stdRegProv")
oRegService.SetDWordValue HKLM, "SYSTEM\CurrentControlSet\Services\NTDS\Parameters", "Database restored from backup", 1
Set oRegService = Nothing

NTDS-SnapshotRestoreModeOff.vbs

strComputer = WScript.Arguments(0)
const HKLM = &H80000002
Set oRegService = GetObject("WinMgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:stdRegProv")
oRegService.SetDWordValue HKLM, "SYSTEM\CurrentControlSet\Services\NTDS\Parameters", "Database restored from backup", 0
Set oRegService = Nothing
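
Before trusting the cron job, a manual dry run of the pair from the storage head is a reasonable sanity check (again assuming the daemon passes the batch exit status back):

scu@ss1~#ssh -t ws0 NTDS-PreSnapshot.bat && echo "restore flag set"
scu@ss1~#ssh -t ws0 NTDS-PostSnapshot.bat && echo "restore flag cleared"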

We now have Windows-integrated storage volume snapshot functionality that allows an Active Directory domain controller to be securely protected using a snapshot strategy. In the event we need to fail back to a previous point in time, there is no danger that the snapshot will cause AD corruption. The integration process has other highly desirable capabilities, such as the ability to call VSS snapshots and other application backup preparatory functions. We could also branch out with more sophisticated PowerShell calls to VMware hosts in a fully automated recovery strategy using ZFS replication and remote sites.

Hope you enjoyed this entry.

Seasons Greetings to All.

Regards,

Mike



Site Contents: © 2009  Mike La Spina

Power House Blog on iSCSI and VMware lead by Chad Sakac

I had the pleasure of reading this power house blog article that Chad Sakac of EMC initiated. It’s a great read for anyone using iSCSI and VMware.

Quote:

Today’s post is one you don’t often find in the blogosphere, see today’s post is a collaborative effort initiated by me, Chad Sakac (EMC), which includes contributions from Andy Banta (VMware), Vaughn Stewart (NetApp), Eric Schott (Dell/EqualLogic), and Adam Carter (HP/Lefthand), David Black (EMC) and various other folks at each of the companies.

A “Multivendor Post” to help our mutual iSCSI customers using VMware

Many thanks to Chad and all of the contributors!

Regards,

 

Mike

 

Site Contents: © 2009  Mike La Spina

iSCSI Security Basics

With iSCSI's growing popularity, improved understanding of iSCSI security is becoming very important. Multiple issues arise when we choose to transport storage over our networks. The fundamental security areas of availability, confidentiality and integrity are all at risk when iSCSI best practices are not implemented. For example, a single attachment error can corrupt an iSCSI-attached device at the speed of electrons, an entire data volume can be connected to an unauthorized user's system, and access can be interrupted by a simple network problem. The possible outage issues are many and far-reaching, so we must carefully assess our risks and mitigate them where appropriate.

The most important step toward a best practice is good old planning. It never fails to amaze me when I see ad hoc implementations of highly sensitive storage system architectures. Without fail, the unplanned ad hoc method results in unstable systems, primarily due to forced system changes required to correct a multitude of issues like misconfiguration, poor performance, mismatched firmware and so on. Planning is one of the most difficult parts of the process, where we almost need to be fortune tellers. You have to ask yourself the proverbial 1000 questions and come up with the right answers. So where do you begin? A good start is a risk-based approach: how much would a failure cost, what data is most critical, how much is the data worth, and so on. We need to walk through the possible events and determine what makes sense to prevent them, and when not to overdo it.

Some of the more obvious ways to prevent security events are to build on fundamental best practices like the following:

 

  • Build a physically separate network to service iSCSI
  • Zone the services with VLANs
  • Provide power redundancy to all devices (separate UPS feeds etc.)
  • Provide a minimum of two data paths to all critical components
  • Disable unused ports
  • Physically secure all the iSCSI network access points
  • Use mutual authentication
  • Where appropriate use TOE (TCP Offload Engine) hardware, but be careful of complexity
  • Encrypt data crossing a WAN

The physical separation of the iSCSI network domain is important in many ways. We can prevent unpredictable capacity demands and provide much greater security by reducing unauthorized physical connectivity. Change control is simplified and can be isolated, and external network undesirables are completely eliminated. Dedicated physical network segregation is probably the single most important step in building a stable, secure iSCSI environment.

Just as we segregate zones on traditional FCP-based SANs, the same practice is beneficial in the iSCSI world: we can filter and control OS-based access and data flow, we can isolate authentication and name services, performance can be monitored in a VLAN context, and priority can be controlled per VLAN.

I can never forget the day one of our circuit breakers popped in the DC: alarms were TAPped out and a feeling of panic stirred. But that's when planning pays off; the system continued to run on the secondary power line. By simply powering on a server you can pop a weak breaker and kill an entire system, so when it comes to power supplies I take them very seriously. Use TRUE power redundancy on all devices that are part of a critical SAN system, iSCSI or not. It will likely save your bacon in a multitude of scenarios.

No matter how hard we try to be perfect, it is in our nature to make mistakes, and when you are staring at a bundle of cables they really do all look the same. Moving the wrong cable or a port failure can completely take out your iSCSI functionality; worse yet, a switch module dies on you. For the cost of an additional port we can avoid all of these ugly scenarios. Besides, multipathing gives you more peak bandwidth, and you gain the ability to do firmware updates without a full outage. Data path redundancy is a must for critical systems.

My neighbor had an interesting story the other day. He kept popping breakers when only two car block heaters were in use. (Block heater? Yes, up north when it's cold you have to keep your cars plugged in. And no, we don't live in igloos.) So what does this have to do with iSCSI? Well, my neighbor discovered that if you have a power receptacle available, other people will use it. They didn't ask to use it and he did not expect them to, but you get the point. Your IT neighbors will at some point do the same, so disable accessible ports by default and enable only what is supposed to be in use.

If my neighbor could have locked the receptacle he would not need to worry about disabling it, but security is about layers of prevention. The more layers you have, the less likely a breach will happen, and preventing physical access is usually easy.

The fastest way to corrupt an iSCSI target is to allow unauthenticated access; it's just too easy to connect to the wrong node. I did a quick test the other day which involved connecting two Windows initiators to one target. On one I created a directory with jpg photos, and on the other I connected and browsed to the photo directory. Bam, corrupt target! The second node wrote a thumbnail while the first node copied more photos to the target. It's completely crazy to run without authentication, and one-way authentication will not prevent this in many cases. Thus we should use mutual authentication. No, it does not make the system less hackable, but it does help prevent the more probable human error event. For example, most IT shops will not want separate passwords for every iSCSI CHAP pair; they will likely use one for many nodes to reduce administrative overhead. Using the same passwords essentially allows the connection of any correctly configured initiator to any correctly configured target. If you couple a single-password configuration with mutual authentication, then you must define the return target access on the respective initiator, and this reduces accidental connections to the wrong node. A small amount of effort will prevent this human error and it costs nothing to set up. Of course you can have unique passwords on every iSCSI host; it will however require a central password store and far more administration in larger configurations. There are other solutions, but those are out of scope for this basic security blog. A sketch of a mutual CHAP setup follows.
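
As a concrete sketch on a Solaris/OpenSolaris initiator (the target IQN below is a placeholder, and you should verify the iscsiadm options against your release's man page), a mutual CHAP configuration looks roughly like this:

# Enable CHAP on the initiator and set its secret (the command prompts for it):
iscsiadm modify initiator-node --authentication CHAP
iscsiadm modify initiator-node --CHAP-secret
# Require the target to authenticate back (bi-directional CHAP) and set the
# secret we expect the target to present:
iscsiadm modify target-param --authentication CHAP iqn.1986-03.com.sun:target0
iscsiadm modify target-param --bi-directional-authentication enable iqn.1986-03.com.sun:target0
iscsiadm modify target-param --CHAP-secret iqn.1986-03.com.sun:target0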

If you anticipate heavy traffic or plan to use IPSec, you will want to consider TCP Offload Engine (TOE) adaptors. This does not mean every server should use TOE; on the contrary, you should evaluate the iSCSI network traffic of the service and the CPU utilization level on that server. You can anticipate that 1Gb of sustained iSCSI traffic will consume about 25% of a 3GHz dual-CPU server for straight iSCSI work. IPSec will vary depending on the key size and the OS implementation of the IPSec protocol, so I can't even ballpark it. An example where a TOE makes a lot of sense is a VMware ESX host where you want as much CPU as possible available to VMs: the TOE adaptor does all the iSCSI and IPSec work, leaving the server CPU to the VMs. It does not make sense if the host will be underutilized and only a medium amount of network traffic is predictable. You will also want to keep your IPSec implementation as simple as possible: when a problem occurs, things can get very difficult to analyze and correct with IPSec, and setting up IPSec can be painful if things do not go as planned. (Blog on that subject to come later.)

This PDF provides some excellent information on IPSec and iSCSI performance: http://siis.cse.psu.edu/pubs/storagess06.pdf

If you have a DR centre or other multisite iSCSI functions across a WAN, you will need to consider whether data encryption is required. For example, if you are using an external network provider then you should definitely be encrypting the data. If securing the data is not critical, then consider using compression of some type at the bare minimum: you will promote better availability and, as an added bonus, the data will not cross the WAN in the clear for your neighbors to view. If encryption is important, then IPSec is a solid, inexpensive way of ensuring confidentiality, integrity and authentication. There are many options available to secure the transmission over the WAN; it should be easy to manage and conform to industry standards like 128-bit AES symmetric ciphers, which are reasonably fast and extremely secure when managed correctly.

Till Next Time

Regards,

Mike

Site Contents: © 2008  Mike La Spina