8 Dec 2014

How to use Powershell to create Windows Server backups

We've talked about the Windows Server Backup feature in a previous article posted on the Poweradmin blog. I showed you how to install this feature and how to create and schedule Windows Server backups using the backup console. In this article I want to show you how to achieve similar results by using a PowerShell script. I will try to describe each line so you can understand the logic behind it. Note that I use the Windows PowerShell Integrated Scripting Environment (ISE) to create Windows scripts.
#We will start by creating a new Windows Backup (WB) policy object:
$backupPolicy = New-WBPolicy

#We will add the System State and the Bare Metal Recovery options to our new policy:
Add-WBSystemState -Policy $backupPolicy
Add-WBBareMetalRecovery -Policy $backupPolicy

#We'll need to set a location where the backups will be stored; I will use a local disk attached to my server. The Get-WBDisk cmdlet lists all disks attached to the server, so I've used $disk[1] to specify the needed partition.
$disk = Get-WBDisk
Write-Host $disk
$backupTarget = New-WBBackupTarget -Disk $disk[1]
Add-WBBackupTarget -Policy $backupPolicy -Target $backupTarget

The script will display the backup destination disk information:
[Screenshot: Get-WBDisk output]

#We'll set the VSS full backup option on our policy:
Set-WBVssBackupOptions -Policy $backupPolicy -VssFullBackup

#The backup schedule can be configured using the Set-WBSchedule cmdlet. I've set the task to run every day at 9 AM:
Set-WBSchedule -Policy $backupPolicy -Schedule 09:00

#Normally, we would not be allowed to run a one-time-only backup, which is why we force the policy by executing the following command:
Set-WBPolicy -force -policy $backupPolicy

#All that's left to do is to start the backup process:
Start-WBBackup -Policy $backupPolicy

The backup will now start, and Windows PowerShell ISE will display the status of the backup process. You will be prompted once the operation is completed.
You can also use the Windows Server Backup console to verify the status of the backup operation. Note that the scheduled task will be configured as specified in the script (a full VSS backup every day at 9 AM).
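If you prefer to check the backup status from PowerShell rather than the console, the same Windows Server Backup module exposes a couple of cmdlets for this. A small sketch (cmdlet availability varies slightly between Windows Server versions):

# Show the currently running backup job, if any
Get-WBJob

# Show the most recently completed job and the overall backup history summary
Get-WBJob -Previous 1
Get-WBSummary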
That's about it for this script, folks; I hope you will enjoy it when automating Windows Server backup tasks. You can further develop the script to run backups for multiple machines at the same time. Note that you can also use a network share as the backup destination (see the sketch below). Wish you all the best and stay tuned for the following articles from IT training day.
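For the network share case mentioned above, only the backup target changes; a minimal sketch, assuming a hypothetical share path and using Get-Credential for an account that has write access to it:

# Use a network share instead of a local disk as the backup destination
$cred = Get-Credential
$backupTarget = New-WBBackupTarget -NetworkPath "\\fileserver\backups" -Credential $cred
Add-WBBackupTarget -Policy $backupPolicy -Target $backupTarget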

Zabbix per process monitoring using Powershell

Hello folks,
In this article I will show you how to use Windows PowerShell to implement per-process monitoring and calculate the CPU % and memory usage of your Windows servers. The data will be sent to the Zabbix monitoring system in JSON format. Note that I've used the Windows PowerShell Integrated Scripting Environment (ISE), which helps a lot when developing PowerShell scripts. I will paste the code and explain each line so you can understand the logic behind the script. The script contains two parts: one that populates Zabbix hosts with the desired items, and a second one that retrieves the values for these items:

# The keys parameter determines whether the script will populate items or retrieve values
param([Int32]$keys=0)

#Open Json statement
if ($keys -eq "1")
{
write-host "{"
write-host " `"data`":["
write-host
}

#Get the processes that run on the machine from a certain path
$colItems = Get-Process | Where-Object {$_.Path -like "D:\Servers\*" }

#$val hash table will be used to get the CPU counters at a single point in time
$val = @{}

# For each process contained in $colItems we will extract several parameters (process name, path and PID) which will be used to identify a particular service in the Zabbix monitoring system. We will replace a single '\' character with '\\' for process paths to be correctly displayed in Zabbix

foreach ($objItem in $colItems) {
  if ( $keys -eq "1")
  {
 $line = " { `"{#PROCESSNAME}`":`"" + $objItem.ProcessName + "`" ,`"{#PROCESSPATH}`":`"" + $objItem.Path + "`" ,`"{#PROCESSPID}`":`"" + $objItem.Id + "`" },"
 $line = $line -replace '\\','\\'
 write-host $line
 }

    else
    {
#We will calculate a process CPU time in seconds at t0 by adding the user and kernel mode times. Values will be added in the $val hash table
     $procid = $objItem.Id
     $proc = gwmi win32_process | where-object {$_.handle -eq $procid}
     $proccputime0 = [TimeSpan]::FromSeconds(($proc.UserModeTime + $proc.KernelModeTime) / 10000000)
     $val.Add($objItem.Id,$proccputime0.TotalSeconds)
     }
}

#To be able to calculate CPU time in seconds at t1 we will have to pause the script for several seconds
 Start-sleep -s 3

#We will define the second hash table for storing values at t1
 $val1 = @{}
 $colItems1 = Get-Process | Where-Object {$_.Path -like "D:\Servers\*" }

 foreach ($objItem in $colItems1) 
 {
#We verify first if the process has not been closed since t0  and then calculate values at t1
   if ($val.ContainsKey($objItem.Id))
   {
     $procid = $objItem.Id
     $proc = gwmi win32_process | where-object {$_.handle -eq $procid}
     $proccputime3 = [TimeSpan]::FromSeconds(($proc.UserModeTime + $proc.KernelModeTime) / 10000000)
     $val1.Add($objItem.Id,$proccputime3.TotalSeconds)

#To be able to calculate the CPU% time we will need to get the number of logical processors on our machine
     $nrproc = (Get-WmiObject "Win32_ComputerSystem").numberoflogicalprocessors

#Finally, we calculate the subtraction between each process t1 and t0 and we'll divide it by the wait time and the number of logical processors to get the CPU% time
     $result = ($val1.get_item($objItem.Id) - $val.get_item($objItem.Id)) / 3 / $nrproc * 100
     $resultf = [System.Math]::Round($Result, 3)

#The CPU% values will then be sent to Zabbix
     $line = "- perprocess.CPU[`"" +$objItem.Path + "`"] " + $resultf
     write-host $line
   }

#The working set for each process is sent to Zabbix
   $ws = "- perprocess.WS[`"" +$objItem.Path + "`"] " + $objItem.WorkingSet 
     write-host $ws

#Private memory for each process is sent to Zabbix
   $pm = "- perprocess.PM[`"" +$objItem.Path + "`"] " + $objItem.PM
     write-host $pm
  }

# Close the JSON message
if ($keys -eq "1")
{
write-host
write-host " ]"
write-host "}"
write-host
}
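Before wiring the script into the Zabbix agent, you can run it by hand from a PowerShell prompt to check both output modes. A quick sanity check, assuming the script is saved as PerProcess.ps1:

# Print the low-level discovery JSON that Zabbix will parse
.\PerProcess.ps1 -keys 1

# Print the item values in the format expected by zabbix_sender ("- <key> <value>")
.\PerProcess.ps1 -keys 0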

On each monitored machine you will have to modify the Zabbix agent configuration file (zabbix_agentd.conf) and add the following lines. The first line executes the script with the $keys parameter set to 1 to add the items to Zabbix, while the second one sends the item values:
UserParameter=perprocess.getkeys, powershell -NoProfile -ExecutionPolicy Bypass -file "C:\Program Files\Zabbix Agent\UserParameters\script\PerProcess.ps1" -keys 1
UserParameter=perprocess.getvalues, powershell -NoProfile -ExecutionPolicy Bypass -file "C:\Program Files\Zabbix Agent\UserParameters\script\PerProcess.ps1" -keys 0 | "C:\Program Files\Zabbix Agent\zabbix_sender.exe" -v -c "C:\Program Files\Zabbix Agent\zabbix_agentd.conf" -i -

Once you've configured the discovery rule in Zabbix and created the necessary items, you should have per-process monitoring implemented successfully. That's about it for this article folks, wish you all the best and have a great day!
19 Nov 2014

Packet Switching Methods: Process Switching, Fast Switching and CEF


By: Adeolu Owokade, Intense School

For a router to move traffic across the network, it needs to perform two different functions: routing and switching. Routing refers to how a router determines the best path to send the traffic through. This is usually achieved using various routing protocols like EIGRP and OSPF. Packet switching, on the other hand, relates to how packets are moved from the input interface to the output interface or interfaces (in the case of more than one best path).
In Cisco IOS, there are many packet switching methods, but the common ones which we will be discussing in this article are process switching, fast switching and Cisco Express Forwarding (CEF).

Note: In this article, we will be focusing on IP packets although the same concept applies to other protocol packets.

Layer 2 Header Rewrite
Before we go on to discuss the switching methods, I would like to quickly discuss the rewriting of the layer 2 header of a packet. Look at the diagram below:



Host A and Host B are on different subnets and they have the router configured as their default gateway. If Host A wants to send a packet to Host B, it sends the packet to its default gateway. The Layer 2 header will contain Host A’s MAC address as the source of the packet and the router’s Fa0/0 MAC address as the destination.


When the router makes a forwarding decision for the packet, it needs to add a new Layer 2 header as follows: it replaces the source MAC address of the packet with the MAC address of its outgoing interface (Fa0/1 in this example). It also replaces the destination MAC address with the MAC address of the next-hop (Host B’s MAC address in this case).


Now that we know what a Layer 2 header rewrite entails, we can go ahead with our packet switching methods.

Process Switching
Process switching is the oldest of the three switching methods we will be discussing in this article. It is also the slowest and we will see why.

When the router receives a packet that is to be processed, the router stores this packet in memory. The router's processor is then interrupted, informing it that there is a packet waiting to be processed. The router inspects the packet and places it in the input queue of the appropriate switching process, e.g. ip_input for IP packets.

When the switching process runs, it checks the routing table to determine the next-hop and outbound interface for the destination of the packet. It also determines the layer 2 address (e.g. MAC address) of the next-hop by consulting a table such as the ARP cache. Armed with this information, the switching process rewrites the layer 2 header of the packet. The packet is then sent out through the determined outbound interface.

The issue with process switching is that the process described above happens for every packet, making it quite slow. Recent IOS versions have CEF (discussed later) as the default switching method for IP but we can enable process switching using the no ip route-cache interface configuration command.

Using our network diagram above, I will enable process switching on the router’s Fa0/0 and Fa0/1 interfaces. I will then ping from Host A to Host B and enable IP packet debugging (debug ip packet [detail]) on the router.
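For reference, the configuration looks roughly like this (a sketch using only the commands named above, with the interface names from the example topology):

Router(config)# interface FastEthernet0/0
Router(config-if)# no ip route-cache
Router(config-if)# interface FastEthernet0/1
Router(config-if)# no ip route-cache
Router(config-if)# end
Router# debug ip packet detail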

Hint: Process switched packets show up in IP packet debugging. Fast switched and CEF switched packets do not.

A sample of the debug output is shown below. Notice that the packet is "routed via RIB"; RIB stands for Routing Information Base, which is basically the routing table of the router.


I received 10 of these messages in my debug output, 5 from the ping request from Host A and 5 from the ping reply from Host B.

Fast Switching
Fast switching improves on process switching by making use of a cache. The first packet to a destination is still process switched but the result of this switching, which includes the outgoing interface, next-hop and Layer 2 header rewrite information, is stored in the Fast Cache. Future packets to this destination will be switched using information from the fast cache, thus improving on the speed of this switching method.

We use the ip route-cache interface configuration command to enable fast switching.


We can confirm that fast switching is enabled on an interface using the show ip interface command.
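A rough sketch of the commands involved, again using the interfaces from the example topology (show ip cache displays the fast cache itself):

Router(config)# interface FastEthernet0/0
Router(config-if)# ip route-cache
Router(config-if)# interface FastEthernet0/1
Router(config-if)# ip route-cache
Router(config-if)# end
Router# show ip interface FastEthernet0/0
Router# show ip cache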


Before I test using ping, I will check the fast cache. Since we have not sent any packet across the router, this cache is empty.


Now when I ping from Host A to Host B, notice that the first ping request packet and the corresponding ping reply packet are process switched. After this first process switching, entries are created for these destinations in the fast cache.


As I mentioned above, fast switched packets will not show up in our debug output; only the two packets that were process switched will show up.

We can view the fast cache again where we notice those two created entries which include information about the destination, the outgoing interface, the next-hop and the Layer 2 header rewrite.


The diagram below helps make sense of the Layer 2 rewrite information:


Since the first packet to a destination is always process switched, switching performance will be degraded in the event where the router receives a lot of traffic for destinations that are not yet in the fast cache. Also, since entries in the fast cache will be invalidated when a route in the routing table changes, fast switching is not suitable on routers with a large number of changing routes like Internet backbone routers.

Cisco Express Forwarding (CEF)
The CEF switching method goes a step further than fast switching by building the cache in advance even before any packets need to be processed. CEF uses two components to perform its function: the Forwarding Information Base (FIB) and the Adjacency table. The FIB is more like a mirror of the routing table but with faster search capability. The FIB is used to make the forwarding decision for the destination of the packet. It contains prefixes, next hop (recursive) and the outgoing interface. The Adjacency table contains information about directly connected next hops including Layer 2 header rewrite information.

CEF is enabled globally using the ip cef command and the ip route-cache cef interface configuration command on interfaces.


We can view the FIB using the show ip cef command and the adjacency table using the show adjacency command.
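Putting the commands from this section together (a sketch; the per-interface commands are only needed if CEF was previously disabled on those interfaces):

Router(config)# ip cef
Router(config)# interface FastEthernet0/0
Router(config-if)# ip route-cache cef
Router(config-if)# interface FastEthernet0/1
Router(config-if)# ip route-cache cef
Router(config-if)# end
Router# show ip cef
Router# show adjacency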



Because CEF does not wait for a packet before building the cache, switching performance is greatly increased. Note, however, that even when CEF is enabled on a router, packets that CEF cannot handle will be punted to the next best switching method.

Summary
In this article, we have considered three different packet switching methods used on Cisco routers: process switching, fast switching and CEF. Process switching is the oldest, slowest and most processor intensive. In fast switching, the first packet to a destination is process switched but subsequent packets are forwarded using the information stored in the fast cache. Finally, CEF pre-builds the cache before any packets need to be forwarded. CEF makes use of the FIB and Adjacency table to perform its functions.

Cisco’s implementation of MPLS, for example, requires CEF to be enabled because CEF is the only switching method that makes use of the FIB. I hope you have found this article interesting.

References and further reading

  1. Cisco Express Forwarding. Understanding and troubleshooting CEF in Cisco routers and switches by Nakia Stringfield
  2. Intense School’s CCNA Training: http://www.intenseschool.com/boot_camp/cisco/ccna
  3. Inside Cisco IOS Software Architecture by Russ White, Vijay Bollapragada, Curtis Murphy
  4. Process, Fast and CEF Switching and Packet Punting: http://blog.ipspace.net/2013/02/process-fast-and-cef-switching-and.html
11 Nov 2014

Certification Authority (CA) supported by Windows Server


In this article I want to talk about the different types of Certification Authorities that can be deployed in a Windows Server infrastructure. The type of CA you choose depends on your network requirements, and you should study them carefully before deciding to deploy such an infrastructure. Note that once you deploy an Enterprise or Standalone CA, you cannot change the type later. When you install the CA role on a Windows Server, the wizard will prompt you to select either a Standalone or an Enterprise Certification Authority (CA).
Windows Server 2008 offers support for four types of CA:
Enterprise Root 
Enterprise Subordinate
Standalone Root
Standalone Subordinate

Enterprise CA - can be deployed in an Active Directory domain and uses Group Policy to replicate digital certificates within your network. GP is also used to publish certificate revocation lists to AD. An Enterprise CA uses the concept of certificate templates to issue certificates in an automated manner. The way a template is configured determines how data is generated from Active Directory. For example, certificate names are generated from AD, but you'll need to configure this feature in the certificate template. Enterprise CA offers support for autoenrollment, which is used to issue certificates automatically by applying certificate template permissions. When a certificate is requested, the local CA will verify whether the user/computer has the necessary permissions to request the certificate. This is achieved by checking the certificate permissions that were previously configured.

Standalone CA - does not require an Active Directory infrastructure. Because a Standalone CA does not integrate with AD, the AD-dependent features of the Enterprise CA no longer apply. For example, a user must provide all needed information when requesting a certificate. Autoenrollment is not supported with Standalone CAs. Administrators must also approve incoming certificate requests manually, thus increasing the overall workload of sysadmins.

A Root CA sits at the top of the PKI (Public Key Infrastructure) architecture. A Root CA is the most trusted entity within the network. These servers must be as secure as possible, because if a Root CA is compromised then the whole certificate infrastructure must be rebuilt. Root CAs are usually used to issue certificates for Subordinate CAs and are kept offline to ensure the highest security.

A Subordinate CA sits under the Root CA and is used to issue certificates for users and computers. An enterprise can use multiple Subordinate CAs within the network. If one of these Subordinate CAs is compromised, the Root CA can revoke its certificate, thus protecting the rest of the network. Only certificates issued by the compromised Subordinate CA must be replaced.
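As a side note, on Windows Server 2012 and later the same choice of CA type can be made from PowerShell with the ADCSDeployment module instead of the wizard. A hedged sketch (the common name below is just a placeholder):

# Install the Certification Authority role service
Install-WindowsFeature Adcs-Cert-Authority -IncludeManagementTools

# Configure the CA; -CAType accepts EnterpriseRootCa, EnterpriseSubordinateCa,
# StandaloneRootCa or StandaloneSubordinateCa
Install-AdcsCertificationAuthority -CAType EnterpriseRootCa -CACommonName "Example-Root-CA"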

These are the four types of Certification Authority supported by Windows Server 2008 editions. Hope this article will serve you well in better understanding this technology. We will talk about Windows Server Certificate Authority in future articles, so stay tuned for the following posts from IT training day.
27 Oct 2014

Considerations when choosing page file size

If you've been working in the IT industry, you have most probably heard of the term page file and its main role within the Operating System. Paging is a technique created to work around the limitations of physical RAM. Its main role is to extend virtual memory ("It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory" - Wikipedia) by removing memory blocks from physical memory and moving them to disk, thus relieving the overall hardware usage. Another important aspect of page files is that they provide support for crash dumps. Note that you may choose not to use a page file if, for example, your system has enough memory, but remember that crash dumps will not be supported if the page file is disabled. It's recommended that the page file be larger than the physical RAM for several reasons:

  • to store memory crash dumps 
  • extend the committed memory
  • store all RAM data in the page file 

The file is usually located in the root of the C: drive and is hidden by default. You'll need to disable the Hide protected operating system files option in the Folder Options section to see it:
[Screenshot: Windows Folder Options dialog]

The physical memory requirements vary from server to server, and it's up to System Administrators to choose the optimal hardware specs for each machine. Several factors influence the best page file size for a given machine. We've talked about the first two earlier (support for crash dumps and extending the physical memory), but you can also set the size of the page file based on the highest memory peak your system needs to handle. For example, you may have different applications running on your server, and from time to time the overall usage may exceed the system commit limit (physical memory + page file). If the System reaches its commit limit, applications may not get the necessary resources, which may lead to hangs or crashes. You may want to check the following counters to troubleshoot page file/memory usage: \Memory\Commit Limit, \Memory\Committed Bytes and \Memory\% Committed Bytes In Use.

It's also important to study how frequently applications access memory, how much physical memory is available, and how heavily the page file is used, by checking the following counters:
\Memory\Modified Page List Bytes, \Memory\Available MBytes and \Paging File(*)\% Usage
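These counters can be sampled directly from PowerShell with Get-Counter; a small sketch (the sample interval and count are arbitrary):

# Collect the commit- and paging-related counters mentioned above
$counters = '\Memory\Commit Limit',
            '\Memory\Committed Bytes',
            '\Memory\% Committed Bytes In Use',
            '\Memory\Modified Page List Bytes',
            '\Memory\Available MBytes',
            '\Paging File(*)\% Usage'

# Take three samples, five seconds apart
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 3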

People often use the terms swapping and paging as if both define the same operation. While both refer to virtual memory, they actually have different roles in the Operating System. Swapping occurs under heavy load, when physical memory is overloaded, and moves entire processes from RAM to the swap file. Paging, on the other hand, occurs from time to time depending on memory usage and moves portions of processes from RAM to the page file. "Pages" of memory are moved to the page file when they are not accessed frequently. While swapping empties RAM immediately, paging frees up memory space but does not allocate it to other processes instantly; instead, memory blocks are put on standby. Swap files were used in older Windows systems, but nowadays all devices use only a page file for virtual memory, which is why paging and swapping are often used in the same context.

By default, Windows systems manage the paging file size automatically, setting the size of the page file based on system usage. You can change this and set the size manually from the Control Panel\System and Security\System\Advanced System Settings\Performance Settings\Advanced\Virtual memory section. From the same location you can view the minimum, maximum and currently allocated page file size when automatic allocation is used. To set the limits manually, simply uncheck the Automatically manage paging file size for all drives setting:
[Screenshot: Windows Virtual Memory settings]
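If you'd rather script this than click through the dialog, the same settings can be changed through WMI. A hedged sketch (the sizes are placeholder values in MB; run it elevated, and a reboot is needed before the change takes effect):

# Turn off automatic page file management
$cs = Get-WmiObject Win32_ComputerSystem -EnableAllPrivileges
$cs.AutomaticManagedPagefile = $false
$cs.Put() | Out-Null

# Set an explicit initial and maximum size for the page file on C:
# (if no Win32_PageFileSetting instance exists yet, one can be created with Set-WmiInstance)
$pf = Get-WmiObject Win32_PageFileSetting | Where-Object { $_.Name -like 'C:*' }
$pf.InitialSize = 4096
$pf.MaximumSize = 8192
$pf.Put() | Out-Null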

Crash dumps are files (memory.dmp) that store RAM information when System errors occur. You can configure your System to create memory dumps when the System crashes or hangs at any point in time. The page file must be large enough to store all RAM data if you configure your machine in this manner (Complete memory dump). There are four types of crash dumps that can be configured on a Windows device:
Small memory dump (256 KB) - the page file must be at least 1 MB in size
Kernel memory dump - the size of the page file depends on the virtual memory used by the kernel
Complete memory dump - the size of all RAM + 257 MB
Automatic memory dump - the System decides by itself what kind of dump to create based on the frequency of system crashes. The system will try to create a crash dump in this order: small, kernel, complete. This feature was introduced with Windows Server 2012 and Windows 8.
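The dump type is stored under the CrashControl registry key, so it can also be inspected or changed from PowerShell. A small sketch, assuming the commonly documented value mapping (0 = none, 1 = complete, 2 = kernel, 3 = small, 7 = automatic):

# Read the current crash dump setting
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl' -Name CrashDumpEnabled

# Switch to a kernel memory dump (takes effect after a reboot)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl' -Name CrashDumpEnabled -Value 2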

By default, page files are managed by the OS, which means that their size increases and decreases based on the behavior of the System. There are three factors that determine the size of the page file: the system commit charge, the system crash dump setting and the physical memory installed.
I've pasted this table from Microsoft's website which contains the limits of page files when the System manages its size:
Operating system | Minimum page file size | Maximum page file size
Windows XP and Windows Server 2003 (less than 1 GB of RAM) | 1.5 x RAM | 3 x RAM or 4 GB, whichever is larger
Windows XP and Windows Server 2003 (more than 1 GB of RAM) | 1 x RAM | 3 x RAM or 4 GB, whichever is larger
Windows Vista and Windows Server 2008 | 1 x RAM | 3 x RAM or 4 GB, whichever is larger
Windows 7 and Windows Server 2008 R2 | 1 x RAM | 3 x RAM or 4 GB, whichever is larger
Windows 8 and Windows Server 2012 | Depends on crash dump setting | 3 x RAM or 4 GB, whichever is larger
Windows 8.1 and Windows Server 2012 R2 | Depends on crash dump setting | 3 x RAM or 4 GB, whichever is larger

The page file is a System component that needs consideration; whether or not you use it depends on your system requirements and on whether crash dumps are needed. If you think there are more things worth mentioning here, please post a comment in the dedicated section and I will try to respond as soon as possible. Wish you all the best and have a wonderful day!
15 Oct 2014

How to migrate a DFS Namespace to Windows Server 2008 Mode

Hello dear readers,
In this short article I want to show you how to migrate a DFS namespace that is running in Windows 2000 Server mode. Suppose you are using a Windows Server 2003 infrastructure and want to migrate it to Windows Server 2008. Besides the OS install, you will also need to migrate the whole DFS infrastructure to the new servers. DFS offers the possibility of exporting a namespace to an XML file and then importing it into your new namespace. To migrate our namespace we will use the dfsutil command. Open a command prompt and run the commands below.


The namespace will be exported to the specified path:
dfsutil root export \\ppscu.com\Documents C:\namespace.xml

We will now remove the namespace by typing the following:
dfsutil root remove \\ppscu.com\Documents


On the new servers running Windows Server 2008, we will recreate our namespace in Windows Server 2008 (v2) mode using the same dfsutil command:

dfsutil root adddom \\ppscu.com\Documents v2


The configuration file must now be imported into the new namespace by typing the following:

dfsutil root import merge C:\namespace.xml \\ppscu.com\Documents

Once you migrate all the files and folders to your new servers, you will have a new DFS infrastructure running in Windows Server 2008 mode. The migration process should be easy to follow and implement. That's about it for this short article folks, stay tuned for the following posts from IT training day.