27 Oct 2014

Considerations when choosing page file size

If you've worked in the IT industry, you have most probably heard of the page file and its main role within the operating system. Paging is a technique created to work around the limitations of physical RAM. It extends virtual memory ("It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory" - Wikipedia) by removing memory blocks from physical memory and moving them to disk, thus alleviating overall hardware usage. Another important aspect of page files is that they provide support for crash dumps. Note that you may choose not to use a page file if, for example, your system has enough memory, but remember that crash dumps will not be supported if the page file is disabled. It's recommended that the page file be larger than the physical RAM for several reasons:

  • to store memory crash dumps
  • to extend the committed memory
  • to store all RAM data in the page file (needed for a complete memory dump)

The file is usually located in the root of the C: drive and is hidden by default. To see it, you'll need to disable the Hide protected operating system files option in the Folder Options section:
Windows Folder Options

The physical memory requirements vary from server to server, and it's up to system administrators to choose the optimal hardware specs for each machine. Several factors influence the best page file size for a given workload. We've covered the first two earlier (support for crash dumps and extending the physical memory), but you can also size the page file based on the highest memory peak your system must handle. For example, you may have different applications running on your server, and from time to time the overall demand may exceed the system commit limit (physical memory + page file). If the system reaches its commit limit, applications may not get the resources they need, which can lead to hangs or crashes. You may want to check the following counters to troubleshoot page file/memory usage: \Memory\Commit Limit, \Memory\Committed Bytes and \Memory\% Committed Bytes In Use.
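To make the commit-limit arithmetic concrete, here is a small Python sketch. All values are made up for illustration; on a real system you would read these figures from the \Memory counters in Performance Monitor:

```python
# Sketch of the commit-limit arithmetic described above.
GB = 1024 ** 3

def commit_usage_pct(committed_bytes: int, commit_limit: int) -> float:
    """Equivalent of the \\Memory\\% Committed Bytes In Use counter."""
    return committed_bytes / commit_limit * 100

ram = 8 * GB                     # physical memory installed (example value)
page_file = 12 * GB              # configured page file size (example value)
commit_limit = ram + page_file   # system commit limit = RAM + page file

# 10 GB currently committed out of a 20 GB limit -> 50%
print(f"{commit_usage_pct(10 * GB, commit_limit):.0f}% of commit limit in use")
```

When the percentage approaches 100, new allocations start to fail, which is exactly the hang/crash scenario described above.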

It's also important to study how frequently applications access memory, how much physical memory is available and how heavily the page file is used, by checking the following counters:
\Memory\Modified Page List Bytes, \Memory\Available MBytes and \Paging Files(*)\% Usage

People often use the terms swapping and paging as if they describe the same operation. While both refer to virtual memory, they actually play different roles in the operating system. Swapping occurs under heavy load, when physical memory is overloaded, and moves entire processes from RAM to the swap file. Paging, on the other hand, occurs from time to time depending on memory usage and moves portions of processes from RAM to the page file. "Pages" of memory are moved to the page file when they are not accessed frequently. While swapping empties RAM immediately, paging frees up memory space but does not allocate it to other processes instantly; instead, memory blocks are put on standby. Swap files were used in older Windows systems, but nowadays all devices use only a page file for virtual memory, which is why paging and swapping are used in the same context.

By default, Windows systems allocate the paging file size automatically. This feature sets the size of the page file based on system usage. You can change this behavior and set the size manually from the Control Panel\System and Security\System\Advanced System Settings\Performance Settings\Advanced\Virtual memory section. From the same location you can view the minimum, maximum and currently allocated page file size when using automatic allocation. To set the limits manually, simply uncheck the Automatically manage paging file size for all drives setting:
Windows Virtual Memory

Crash dumps are files (memory.dmp) that store RAM information when system errors occur. You can configure your system to create memory dumps when the system crashes or hangs at any point in time. If you configure your machine to capture a complete memory dump, the page file must be large enough to store all RAM data. There are four types of crash dumps that can be configured on a Windows device:
Small memory dump (256 KB) - the page file must be at least 1 MB in size
Kernel memory dump - the size of the page file depends on the virtual memory used by the kernel
Complete memory dump - the size of all RAM information + 257 MB
Automatic memory dump - the system decides by itself what kind of dump to create, based on the frequency of system crashes. The system will try to create a crash dump in this order: small, kernel, complete. This feature was introduced with Windows Server 2012 and Windows 8.
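The minimum sizes above can be written down as a small Python helper. This is just a sketch of the rules as listed, not an official Microsoft formula; the kernel figure is whatever kernel virtual memory happens to be in use:

```python
MB = 1024 ** 2

def min_page_file_bytes(dump_type: str, ram_bytes: int = 0,
                        kernel_vm_bytes: int = 0) -> int:
    """Minimum page file size required for each crash dump type."""
    if dump_type == "small":
        return 1 * MB                   # small dump: page file of at least 1 MB
    if dump_type == "kernel":
        return kernel_vm_bytes          # depends on kernel virtual memory usage
    if dump_type == "complete":
        return ram_bytes + 257 * MB     # all RAM information + 257 MB
    if dump_type == "automatic":
        raise ValueError("size is chosen by the system itself")
    raise ValueError(f"unknown dump type: {dump_type}")
```

For example, a machine with 8 GB of RAM configured for a complete memory dump needs a page file of at least 8 GB + 257 MB.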

By default, page files are managed by the OS, which means that their size increases and decreases based on the behavior of the system. There are three factors that determine the size of the page file: the system commit charge, the system crash dump setting and the amount of physical memory installed.
I've pasted this table from Microsoft's website, which contains the page file limits when the system manages its size:
Operating system | Minimum page file size | Maximum page file size
Windows XP and Windows Server 2003 with less than 1 GB of RAM | 1.5 x RAM | 3 x RAM or 4 GB, whichever is larger
Windows XP and Windows Server 2003 with more than 1 GB of RAM | 1 x RAM | 3 x RAM or 4 GB, whichever is larger
Windows Vista and Windows Server 2008 | 1 x RAM | 3 x RAM or 4 GB, whichever is larger
Windows 7 and Windows Server 2008 R2 | 1 x RAM | 3 x RAM or 4 GB, whichever is larger
Windows 8 and Windows Server 2012 | Depends on crash dump setting* | 3 x RAM or 4 GB, whichever is larger
Windows 8.1 and Windows Server 2012 R2 | Depends on crash dump setting* | 3 x RAM or 4 GB, whichever is larger
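The table above boils down to a simple rule set; here is a Python sketch of it. The OS labels are my own shorthand, and for Windows 8 / Server 2012 and later the minimum is returned as None because it depends on the crash dump setting:

```python
def managed_page_file_range(os_family: str, ram_gb: float):
    """Return (min_gb, max_gb) for a system-managed page file, per the table."""
    max_gb = max(3 * ram_gb, 4.0)  # 3 x RAM or 4 GB, whichever is larger
    if os_family in ("XP", "2003"):
        # 1.5 x RAM below 1 GB of RAM, otherwise 1 x RAM
        min_gb = 1.5 * ram_gb if ram_gb < 1 else 1.0 * ram_gb
    elif os_family in ("Vista", "2008", "7", "2008R2"):
        min_gb = 1.0 * ram_gb
    else:  # Windows 8 / 8.1, Server 2012 / 2012 R2
        min_gb = None  # depends on the crash dump setting
    return min_gb, max_gb
```

For instance, a Windows 7 box with 8 GB of RAM gets a system-managed page file between 8 GB and 24 GB.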

The page file is a system component that deserves consideration; whether you use it or not depends on your system requirements and on whether crash dumps are needed. If you think there are more things worth mentioning here, please post a comment in my dedicated section and I will try to respond as soon as possible. Wish you all the best and have a wonderful day!
15 Oct 2014

How to migrate a DFS Namespace to Windows Server 2008 Mode

Hello dear readers,
In this short article I want to show you how to migrate a DFS namespace that was created in Windows 2000 Server mode. Suppose you are using a Windows Server 2003 infrastructure and want to migrate it to Windows Server 2008. Besides the OS installation, you will also need to migrate the DFS infrastructure to the new servers. DFS offers the possibility of exporting a namespace to an XML file and then importing it into your new namespace. To migrate our namespace we will use the dfsutil command. Open a command prompt and type the following:

The namespace will be exported to the specified path:
dfsutil root export \\ppscu.com\Documents C:\namespace.xml

We will now remove the namespace by typing the following:
dfsutil root remove \\ppscu.com\Documents

On the new servers running Windows Server 2008 mode, we will recreate our namespace using the same dfsutil command (the v2 switch creates the root in Windows Server 2008 mode):

dfsutil root adddom \\ppscu.com\Documents v2

The configuration file must now be imported into the new namespace by typing the following:

dfsutil root import merge C:\namespace.xml \\ppscu.com\Documents

Once you migrate all the files and folders to your new servers, you will have a DFS infrastructure running in Windows Server 2008 mode. The migration process should be easy to follow and implement. That's about it for this short article, folks, stay tuned for the following posts from IT training day.
10 Oct 2014

Deploying DFS namespaces

Now that we've had our first contact with DFS (Distributed File System), it's time to move further and discover new features of this technology. In this article I will show you how to install and configure DFS on Windows Server 2012. For this tutorial I will be using two Windows Server 2012 R2 virtual machines that are already deployed within an Active Directory domain. In the previous article we talked briefly about DFS and DFSR, but we still have a long way to go before all aspects of these technologies have been covered.

We will start by installing the necessary roles for these two components. Open the Server Manager console on one server, navigate to the Dashboard section and click on the Add roles and features button. Expand File and Storage Services and select DFS Namespaces and DFS Replication:
Distributed File System tutorial
Once you've selected these two roles, check the confirmation page and proceed with the installation:

How to configure a DFS namespace
Since DFSR is a multimaster technology, it doesn't matter on which server you configure the DFS namespace. Proceed with the installation of both roles on the second machine. Once this operation is completed, we'll configure our DFS namespace. Note that we will create a domain-based namespace since our two servers are part of an Active Directory domain. You can add multiple servers to increase the availability of the DFS namespace in case of failures.

Open the DFS Management console, navigate to the Namespace section, right click it and select New Namespace. Check out the Actions menu from the right side of the window to view available actions:

DFS namespace Wizard
Since this is the first time the namespace is configured, we will need to enter the name of the server hosting the namespace. Once DFSR is configured, the master node concept will no longer apply.

Distributed File System
In the following section we'll have to configure the name for our namespace. Since we are deploying this namespace within our domain, the newly configured DFS namespace will be available when accessing \\domain_name\namespace_name (in my case \\ppscu.com\Documents):

Distributed File System training
You can configure extra settings in this section by clicking the Edit Settings button. Here you can set the local path of the shared folder and the shared folder permissions. Usually, you will be using custom permissions when deploying a DFS namespace, but for now we will use the default settings:

Domain-based namespace
From the namespace type menu select Domain-based namespace and check Enable Windows Server 2008 mode to support additional features such as increased scalability and access-based enumeration (ABE). Remember that the metadata of domain-based namespaces is stored within AD DS:

Domain-based namespace tutorial
Review the settings and create the namespace. The newly created namespace will appear in the DFS Management console. We will need to add the second server to our namespace. From the actions menu select Add Namespace Server and add the second machine:

DFS console
On the Namespace Servers tab you can view all machines that are part of this namespace. The Delegation tab is used to allow groups and users to administer the DFS namespace. The Search tab can be used to locate folders or folder targets within the DFS namespace:

Now let's create a new folder on one of our servers and add it to the DFS namespace. Note that I don't have a dedicated partition to store my shared folders since this is a testing environment. You should always store shared files and folders on a separate partition, distinct from the OS partition. The folder must have sharing enabled before adding it to the namespace:
DFS namespace folders
Let's return to the DFS Management console and add this folder as a resource to our namespace. Navigate to the Actions menu and click on New Folder. Set a name for the new folder and browse for it by pressing the Add button:

DFS Referrals
You can configure further settings by right clicking on the namespace name and selecting Properties from the menu. Within the first section we can add a description for the namespace and view the general info. On the Referrals tab we can configure the cache duration and set a method for ordering targets outside of the client's site. This option essentially tells a DFS client what mechanism to use when trying to access a certain namespace resource. Remember that referral settings can be configured at the server level or for each folder individually:

DFS Access-based enumeration
On the Advanced section we can optimize polling by selecting one of the two methods available: Optimize for consistency and Optimize for scalability. Access-based enumeration (ABE) can also be enabled from this section:

Add Namespace to Display
On the second server the namespace will not be displayed automatically. To display it, open the DFS console and press the Add Namespace to Display button:

Now place a file within the folder and try to access it using the correct path. I first disabled the Windows Firewall on both servers just to make sure that there will be no network issues when accessing DFS shared folders. I've then typed \\ppscu.com\Documents\DanP from the second machine to verify if the namespace has been configured correctly:
Even though we've added two servers to host our namespace, replication hasn't been configured yet so resources will be accessible only on the server hosting the specified folder. If you verify the shared folder on both servers using DFS Management console you will see that the second machine has listed the folder's name but does not host its content.
In the following article I will show you how to configure DFS Replication for our newly created namespace and we will see how files and folders are replicated between DFS servers. Please don't hesitate to post a comment if there are things left unclear. Don't forget to enjoy your day and stay tuned for the following articles from IT training day.
6 Oct 2014

Introduction to DFS

Distributed File System (DFS) is a technology created by Microsoft to allow data consistency across large enterprises. DFS allows you to group shared folders from multiple servers into one or more namespaces. A namespace is a hierarchy of folders grouped together to create one large data tree. There are two main technologies that we must talk about on this topic: DFS and DFSR.

DFS is responsible for managing all the namespaces that are part of an organization. Note that DFS allows shared folders to be accessible across WAN links. DFS operates transparently to the user, which means that folders appear as if they were stored in the same location. A single, centralized namespace is much easier for sysadmins to maintain than folders distributed across multiple servers. DFS manages only the namespace and the hierarchy of folders; it does not replicate files and folders between servers. Note that with Windows Server 2008 Standard Edition you can create only one namespace; multiple namespaces are supported by the Enterprise and Datacenter Editions. A namespace can be accessed using its UNC (Universal Naming Convention) path. Users will need access rights to be able to access the namespace. There are two types of DFS namespaces that you can create:

Stand-alone namespace - a namespace that is not domain-based; it can be hosted on a server that contains at least one NTFS volume. It also offers support for ABE (access-based enumeration) if it's hosted on Windows Server 2008 or a newer edition. ABE allows users to view only the folders on which they have permissions. This feature is not enabled by default and can be configured using the dfsutil command (dfsutil property abde enable \\namespace_name). Stand-alone namespaces can be hosted on a failover cluster for redundancy.

Domain-based namespace - a namespace hosted on a DC or a member server that is part of a domain. Servers must also have an NTFS partition to host the DFS namespace. One important aspect of domain-based namespaces is that their metadata is stored in AD DS and can be easily accessed by any DFS server. Note that this type of namespace cannot be hosted on a failover cluster, but availability can be increased by adding more DFS members.
Introduction to DFS

A DFS namespace can be easily maintained using the DFS Management console. Cache referrals can be configured to set the amount of time clients will store referrals for a namespace. We can also set the order in which clients will try to access folders that are in a different site. There are three methods available: Lowest Cost, Exclude Targets Outside Of The Client's Site and Random Order.
When a user tries to access shared folders that are part of the namespace, he/she receives an ordered list from the domain controller. The list contains the servers that host that specific resource. Based on the method configured on the namespace, a server will have a higher or lower priority when files and folders are requested. You can override the referral ordering from the namespace Properties in the DFS Management console.
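To make the three ordering methods concrete, here is a small Python model of how a client might sort the referral list. This is only an illustration of the three methods, not the actual DFS client algorithm; the server names and site costs are made up:

```python
import random

def order_referrals(targets, method):
    """targets: list of (server, site_cost); cost 0 means the client's own site."""
    in_site = [t for t in targets if t[1] == 0]
    out_site = [t for t in targets if t[1] > 0]
    if method == "lowest_cost":
        # cheapest off-site targets come right after same-site targets
        return in_site + sorted(out_site, key=lambda t: t[1])
    if method == "exclude_outside_site":
        return in_site                    # never refer clients off-site
    if method == "random_order":
        random.shuffle(out_site)          # off-site targets in random order
        return in_site + out_site
    raise ValueError(f"unknown method: {method}")
```

In every method, targets in the client's own site are preferred; the methods only differ in how (or whether) off-site targets are offered.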
Namespace modes can be configured when creating the namespace; we will talk about the available modes later in this article.
Depending on your network's size and needs, you can optimize namespace polling for domain-based namespaces: Optimize For Consistency or Optimize For Scalability.

We haven't yet talked about the DFS modes that you can use within your infrastructure. DFS supports two domain-based namespace modes:
Windows 2000 Server mode - the legacy mode, available with Windows Server 2003 R2 and 2008 Editions.
Windows Server 2008 mode - available with newer Windows Server Editions. You can enable access-based enumeration. This mode also provides increased scalability, because a DFS namespace can now contain more than 5,000 folders. You can enable Windows Server 2008 mode if all your DFS servers are running this server edition and the functional level of the domain is Windows Server 2008. You should use this DFS mode whenever possible.

DFSR (Distributed File System Replication) manages the replication of shared folders between different parts of the network. It is a newer technology, introduced to replace FRS (File Replication Service), which was used in older Windows Server editions. Starting with Windows Server 2008, DFSR is also used to replicate the SYSVOL folder in an AD DS infrastructure. You can configure replication to make sure that multiple servers host the same data within your infrastructure. Not only does this provide a reliable way to store your data, it also lets users access locally files and folders that would otherwise only be accessible remotely. Large enterprises will normally host a DFSR server in each of their offices, ensuring that users can access data quickly and securely. A user will always try to connect to shared folders in his/her own AD DS site and will not try to access them through the WAN link. DFSR is a multimaster technology, which means that any change within the shared folders is replicated quickly to all servers in the replication group. DFSR uses a compression algorithm named RDC (remote differential compression), which is responsible for detecting the changes that occur in the files and folders hosted on the DFS namespace. A replication topology must be specified when creating replication groups. You can choose Hub and Spoke, Full Mesh or No Topology.

This introductory article should give you some basic info about DFS and DFSR. In the following posts we will continue discovering this technology, and we will later see how to install and configure it. That's about it folks, hope you've enjoyed it. Wish you all the best!
29 Sep 2014

Windows Server VPN Protocols

Windows Server VPN Protocols
In the previous article we discussed the VPN authentication protocols used with Windows Server editions. We cannot talk about VPN authentication protocols without also covering the different VPN tunneling protocols that can be used with Windows Server 2008 and 2012. Based on your company's needs, you can opt for one of the four VPN protocols, as follows:

PPTP (Point-to-Point Tunneling Protocol) - one of the first VPN protocols, and one that is still used today. It uses the MPPE (Microsoft Point-to-Point Encryption) protocol to encrypt data sent by VPN clients. Even though this protocol provides data confidentiality, it supports neither data origin authentication nor data integrity, so it's susceptible to exploits. PPTP connections can be authenticated using MS-CHAP, MS-CHAPv2, PEAP or EAP. PPTP can be used with EAP-TLS, but for that you will need a local CA (Certification Authority) deployed and a certificate installed on the VPN server. Note that, unlike other protocols, with PPTP EAP-TLS you don't need to install the certificate on the VPN clients. PPTP is often used with non-Microsoft products because it offers compatibility with almost all operating systems. You should opt for a newer VPN protocol whenever possible, because the others offer increased security.

L2TP/IPSec (Layer 2 Tunneling Protocol with Internet Protocol Security) - a tunneling protocol that does not provide encryption or confidentiality by itself. With a Microsoft VPN server it is used with IPSec, which handles data encryption before the data is sent over the tunnel. There are two levels of authentication that occur within an L2TP/IPSec communication:
Computer authentication - performed using digital certificates issued by a Certificate Authority trusted by both the server and the client.
Client authentication - performed using one of the PPP authentication protocols discussed in the previous article.
This protocol offers data origin authentication, data confidentiality, data integrity and replay protection.

SSTP (Secure Socket Tunneling Protocol) - a tunneling protocol which encapsulates PPP traffic in an SSL 3.0 channel. The SSL traffic is carried over HTTPS (Hypertext Transfer Protocol Secure), which means that it passes through almost all routers and firewalls because port 443 is usually open to the public Internet. The use of SSL provides transport-level security with key negotiation, integrity checking and encryption. To successfully deploy SSTP within your organization, you will need to take several factors into consideration. SSTP is supported only by Windows Server 2008 or newer editions, which is why it cannot be used with Windows Server 2003. You will also need a trusted CA to issue certificates for your server, and the certificate must be installed on the server before enabling Routing and Remote Access. The client will then be able to connect using the VPN server hostname, which must match the subject name specified in the SSL certificate. Note that with SSTP you cannot create site-to-site tunnels, and you cannot tunnel SSTP traffic through proxies which require authentication.

IKEv2 (Internet Key Exchange version 2) - a VPN tunneling protocol supported by the Routing and Remote Access Service (RRAS). The protocol is used to set up an SA (Security Association) for the IPSec communication. You will need a local CA issuing certificates with the appropriate Enhanced Key Usage (EKU) options. You will then need to generate the authentication certificate and import it into the VPN server's certificate store. IKEv2 offers support for VPN Reconnect (also known as Agile VPN), a technology that tolerates network interruptions: the VPN connection is re-established without user intervention once the Internet connection is available again. Read more about IKE in this article from Wikipedia.

This was a short introduction to the VPN protocols that can be used on Windows Server editions. The article should give you an overview of these protocols so that you can better understand VPN technologies. For any questions on this topic, use my comments section. Wish you a wonderful day!
© 2014 IT training day. All Rights Reserved.