PowerShell Module for Citrix ADM

I know it’s been a while since my last post, but I felt compelled to share a PowerShell module I wrote for interacting with Citrix Application Delivery Management (ADM) appliances. The module uses Invoke-RestMethod to interface with ADM’s Nitro REST APIs, and was inspired by the module Citrix originally shared, credited to Esther Barthel. Thank you, Esther, for the foundation!

Basically, this module works much the same way as the NetScaler version, and makes it easier to talk to an ADM and the ADCs it manages by acting as an API proxy to the ADCs. It also allows for advanced API operations such as uploading firmware, certificates, and configuration jobs & templates, and pretty much anything else you can do in the GUI, for both ADM and ADCs.
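
If you’re curious what that looks like under the hood, here’s a minimal sketch of the kind of Nitro call the module wraps. The endpoint path, JSON payload shape, and the ‘ns’ resource name below follow the usual Nitro conventions and are assumptions to adjust for your ADM build; the host name and credentials are placeholders:

# Log in to the ADM Nitro API and keep the web session for follow-up calls.
# Endpoint path and payload shape are assumptions based on Nitro conventions.
$admHost = "adm01.example.local"
$body = @{ login = @{ username = "nsroot"; password = "password" } } | ConvertTo-Json
Invoke-RestMethod -Uri "https://$admHost/nitro/v1/config/login" -Method Post `
    -Body $body -ContentType "application/json" -SessionVariable admSession
# Re-use the authenticated session for subsequent calls; the 'ns' resource
# (managed ADC instances) is illustrative only.
Invoke-RestMethod -Uri "https://$admHost/nitro/v1/config/ns" -WebSession $admSession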

Anyways, check out ADM.psm1 along with Sample.ps1 to get a feel for it (there’s also a readme), and hopefully this is helpful for others that manage ADMs and/or ADCs on a regular basis!

XenDesktop 7 Session Launch – Part 3, Brokering

In my last post I talked about the ways that the Citrix client/WI enumerates XenDesktop resources by way of NFuse transactions to the site’s XML broker. The XML broker is responsible for telling the StoreFront server which published resources were found for a particular user. For more technical detail on NFuse transactions, check out my XML Broker Health Check post which gives a good example of NFuse transactions by way of some pretty straightforward XML requests sent through PowerShell.

The next major piece of the session launch process is what’s known as Brokering. This process allows a user to click a desktop or app resource and have a ‘worker’ selected and readied for an inbound ICA connection. XenDesktop 7’s brokering functionality is mostly unchanged from XenDesktop 5, the main difference being that it now includes multi-user RDS workers.

Conceptually, this doesn’t change how the Citrix Connection Brokering Protocol works; it simply adds multi-user support for Windows RDS servers. This functionality has actually existed with limited capabilities since XenDesktop 5.6 for CSPs (Hosted Server VDI), so it’s certainly not a huge leap in terms of changes to the broker agent. The XenDesktop brokering process consists of several key components, including:

  • Citrix Desktop Service (CDS / VDA) – This component provides a bridge between the ‘Delivery Controller’ and the ‘Worker’ and is commonly referred to as the ‘Virtual Desktop Agent’ or VDA. In XD5 this was the WorkstationAgent.exe process, though in XD7 the process was renamed to BrokerAgent.exe. However, the directory still reflects the VDA designation, so I still like to refer to it as the VDA:

CDS

  • Citrix Broker Service – The Broker Service is responsible for negotiating session launch requests with ‘workers’. The Broker service communicates with the CDS over a protocol that Citrix refers to as CBP (connection brokering protocol) to validate a worker’s readiness to fulfill a session launch request, gather the necessary details (IP address or host name), and send the details back to the StoreFront site to be packaged and delivered as an .ICA launch file that’s consumed by the Receiver.
  • Connection Brokering Protocol – This protocol behaves much like NFuse, though it uses .NET WCF endpoints to exchange a series of contracts to communicate registration and session launch details between a worker and delivery controller. This protocol was designed with the following key requirements, as its functionality is highly critical to reliably providing on-demand desktop sessions:
    • Efficient: information should be exchanged only if and when required (just in time). Limiting the data exchange to a minimum also reduces the risk of leaking sensitive data.
    • Versioned: It must be possible for both workers and controllers to evolve concurrently and out of step without breaking protocol syntax or semantics.
    • Scalable: The delivery controller is a key piece of infrastructure, and its performance must not be impacted by unprompted messages and data from workers, as can happen in IMA, for instance during “election storms”.
    • Flexible: the protocol should allow the architecture to evolve over time, by not building key assumptions into the protocol’s foundation code. Factoring independent operations into separate service interfaces is one example of how a protocol can allow for increasing controller differentiation in future.
    • Compliant: Standards-based mechanisms (WCF) are used instead of proprietary technologies (IMA).
    • Secure: Security is critical, and the protocol must support appropriate mechanisms to ensure confidentiality, integrity (WCF contracts), and authenticity (NTLM/Kerberos auth) of data exchanged between workers and controllers.

The XenDesktop brokering process makes the following basic assumptions about CDS workers:

  • Desktops are either Private or Shared
  • Each desktop is associated with a single delivery group
  • Each desktop is backed by a single worker
  • Each worker is individually associated with a hosting unit, with a null unit index value indicating an unmanaged worker (existing or physical catalog types)
  • Desktops within a private desktop group can have permanent user assignments. The association may comprise one or more users, or a single client IP address
  • Multiple desktops within a private desktop group may have the same user assignments
  • Desktops within a shared desktop group may temporarily be assigned to a single user for the duration of a session
  • Multiple desktops within a shared desktop group may be assigned to the same user concurrently
  • Automatic assign-on-first-use behavior involves the broker selecting a desktop within a private desktop group with no assignment, and assigning it to the currently requesting user; the desktop’s group will not change by virtue of user assignment
  • The assignment of a desktop to its assignee(s) in a private desktop group can only be undone by an administrative user through the PoSH SDK (a quick sketch follows below)
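
For what it’s worth, here’s a minimal sketch of what that unassignment looks like with the Broker SDK. It assumes Remove-BrokerUser supports the -PrivateDesktop parameter set; the machine and user names are placeholders:

Add-PSSnapin Citrix.Broker.Admin.*
# Remove a user assignment from an assigned (private) desktop; the desktop's
# group membership is unaffected. Names below are placeholders.
Remove-BrokerUser -Name "DOMAIN\jdoe" -PrivateDesktop "DOMAIN\XD-WRK-001"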

In a nutshell, the Delivery Controller is responsible for negotiating session launch requests by locating and preparing workers to accept ICA sessions that were requested by a StoreFront server via the XML broker.

XD7brokering

The broker service finds a worker to fulfill the session request, powers it on if needed, and waits for it to become ready if a power action was sent. Once the worker is ready, the DDC sends the requisite connection details to the StoreFront server to build and deliver the ICA file, which is sent to the Receiver for consumption by the ICA client.
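
To get a rough feel for what the broker evaluates when it picks a worker, the Broker SDK exposes the relevant state on each machine object. Here’s a quick sketch (the delivery group name is a placeholder):

Add-PSSnapin Citrix.Broker.Admin.*
# Power state, VDA registration, maintenance mode, and session count are the
# kinds of details the broker weighs when selecting a worker for a launch request.
Get-BrokerMachine -DesktopGroupName "Win7 Pooled" |
    Select-Object MachineName, PowerState, RegistrationState, InMaintenanceMode, SessionCount |
    Format-Table -AutoSize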

Hopefully this was a decent enough explanation of brokering. While I didn’t get a chance to go into a lot of detail about how a worker is found, and how CBP interacts with the ICA stack, I think this at least gives a good high level overview of the concept to know what components are involved and what their general interactions with each other are.

My next part in this series will look at the ICA stack, and how a connection is established between ICA clients and servers.

XenDesktop 7 Session Launch – Part 2, Enumeration

In my last post I talked about how the Citrix Receiver authenticates to a StoreFront server. In this post, I want to talk about resource enumeration with Citrix Receiver <> StoreFront <> XenDesktop deployments.

Before I go into the technical aspects of the way Citrix enumerates published resources, I want to briefly explain the history behind the Citrix XML Broker, as well as how the Citrix client enumerates published resources. In case anyone is interested in a broader history of Citrix, I encourage you to check out the 20 Years of Citrix History publication from 2009.

Citrix NFuse and the XML Portal Server

Back in 2000, Citrix signed a licensing deal with Sequoia Software (which they later acquired in 2001) to integrate NFuse as the foundation for an extensible application portal for MetaFrame. The XML Portal Server (XPS) technology was then built around NFuse to provide the ability to dynamically enumerate and present resources to end users. This integration was critical in setting Citrix apart from the competition (terminal services), and was the reason the next version of MetaFrame had the XP designation:

xpsnfuse

Since its introduction back in 2000, the NFuse protocol has remained at the core of every Citrix desktop/application virtualization product by way of the ‘XML Broker’ service. This service was included in all future releases, including all versions of XenApp & XenDesktop. Until XenDesktop 5 was released, the XML broker service ran as its own standalone service. During the XenDesktop ‘Storm’ site architecture rework (now called FMA, aka NOT IMA) the XenDesktop product team decided to relocate the XML broker service to run as a ‘virtual’ service by piggy-backing on the XenDesktop Broker service. Other than this move to virtualize the XML broker service, the service remains as NFuse-capable as the XML broker service used in MetaFrame.

Because of the NFuse protocol, resource enumeration has remained compatible as MetaFrame evolved into XenDesktop. In other words, the old MetaFrame Web Interface Server would still enumerate published desktops from a XenDesktop 7 DDC, and a StoreFront 2.0 server would enumerate published applications from a MetaFrame XP XML broker service (theoretically at least!), as long as the requests are NFuse compatible.

One of the main reasons the NFuse protocol is so durable is that it’s able to negotiate capabilities. In this example, a StoreFront site contacts a XenDesktop XML broker to determine what capabilities it has, and what resources are published to the authenticated user:

enumeration

In this process, the web front-end server sends an XML message to the configured XML broker to request a list of capabilities. The XML broker then responds with an XML-formatted list of the types of resources it has access to. StoreFront will then request any compatible resources for the authenticated user account. The XML broker then works with the XenDesktop broker and controller services to find out what resources are assigned to the user. The enumerated resources are consumed by StoreFront and presented to the end user. This figure from the NFuse Classic 1.7 documentation is still mostly relevant, just with different companion components:

nfuse
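
If you want to see an NFuse transaction first-hand, here’s a rough sketch along the lines of my XML Broker Health Check post: it posts a RequestCapabilities document straight to the XML service. The /scripts/wpnbr.dll path, protocol version, and DTD reference are typical values and may differ between releases; the broker host name is a placeholder:

# Send a bare NFuse capabilities request to an XML broker and dump the reply.
$xmlBroker = "http://xd-ddc-01/scripts/wpnbr.dll"   # XML service host/path (placeholder)
$request = @"
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE NFuseProtocol SYSTEM "NFuse.dtd">
<NFuseProtocol version="5.4">
  <RequestCapabilities></RequestCapabilities>
</NFuseProtocol>
"@
(Invoke-WebRequest -Uri $xmlBroker -Method Post -Body $request -ContentType "text/xml").Content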

StoreFront & Receiver

With Citrix Web Interface, the results of application enumeration were presented all at once to the authenticated user (optionally sorted into tabs and/or folders). In other words, all resources published to a user would be shown to them, though you could hide resources as needed. StoreFront with Receiver added the functionality that was originally introduced with Citrix Merchandising Server and Dazzle, which is to allow a user to pick their ‘favorite’ resources, providing Self-Service ‘App Store’ functionality and allowing a user’s favorite resources to follow them across multiple client devices and device types.

Prior to Dazzle/Receiver, applications were either enumerated in a web browser using a Web Interface ‘Web’ site, or enumerated directly by the Citrix client agent through a ‘Services’ site. In the past three years the Citrix client has evolved from the ‘Online Plug-in’ to the Receiver for Windows.

clients

The concept of client-side agent enumeration began with the ‘Program Neighborhood’ client (pre Online Plug-in), which would access a Services site (hence the default name PNAgent), and looked like this:

Program-Neighborhood-12

Until XenApp replaced Presentation Server, this was the way users would typically connect to applications, as the web portal experience wasn’t nearly as solid back in the days of the old WI server running in IE 5/6. By the time XenApp was released, Citrix decided to rework the ‘PNAgent’ to display resources in a system tray icon jump menu (which remains a fan favorite: quick, easy, intuitive) in what was first called the XenApp Plug-in, and later renamed to the Online Plug-in when XenDesktop was released:

DRXDBoth

However, around the time the Apple App Store was released, and Windows Vista changed some application UI design considerations, Citrix decided to create a self-service framework by way of Dazzle and Merchandising Server. As a result, they decided to do away with the notification icon ‘jump-menu’ since it didn’t align with Microsoft’s general UI design recommendations for Windows Vista/7. When Receiver was launched, this legacy ‘PNAgent’ functionality was taken out of the standard ‘Receiver’ and moved to the ‘Enterprise’ flavor of the Receiver, which is really just a nice way to say that they’re accommodating ‘Enterprise’ customers who still want/need this legacy functionality.

As of Receiver Enterprise (the black icon), PNAgent-enumerated shortcuts are only available in the Start menu or on the desktop (according to the site/farm settings):

PNA

Citrix’s current preferred method of displaying shortcuts to end-users is using Receiver 3/4 to connect to a StoreFront Store, which uses the ‘Dazzle’ framework to present shortcuts:

receiverwindow

The StoreWeb StoreFront site aims to provide the same look and feel for users that aren’t enumerating via the ‘Receiver’ agent:

receiverweb

Connecting from mobile receivers maintains this consistency of favorite resources:

androidreceiver

I could go on and on about the technical details of enumeration, but am out of time for today. I encourage readers to check out my previous post on the XML broker for a technical example of NFuse transactions.

In my next post I’ll go into more detail about how enumerated resources are brokered to the Receiver.

XenDesktop 7 Session Launch – Part 1, Authentication

The process of enumerating, brokering, and connecting to a XenDesktop resource involves quite a few moving parts, and can be a daunting task to troubleshoot for someone who isn’t familiar with the product. There are several key components involved in the session launch process including authentication, enumeration, registration, ticketing, and display/session handling.

In this post I’d like to briefly explain how the Citrix Receiver authenticates to a XenDesktop 7 application or desktop. To keep it simple, I’m only going to talk about StoreFront (no WI) and XenDesktop (no XenApp/IMA). So just Receiver <> StoreFront <> XenDesktop.

The first step to launching a XenDesktop session is to authenticate to the StoreFront Store that the XenDesktop resource is connected to. In this step, a user connects to the StoreFront server and:

  1. Authenticates to a StoreFront virtual directory via Citrix Receiver:
    • The Receiver is connecting to a StoreFront Store, StoreWeb, StoreDesktopAppliance, or PNAgent IIS virtual directory. IIS allows anonymous authentication since the StoreFront .NET services (Citrix.Storefront.exe & Citrix.StoreFront.PrivilegedService.exe) handle authentication:

storefront-IIS

    • To configure a StoreFront server’s authentication methods, use the Citrix Studio MMC to open the Authentication TreeNode of the Citrix StoreFront deployment. Here you can specify which authentication methods to allow on Stores hosted by that server:

storefront-authmethods

    • There are four authentication methods available as of StoreFront 2.0:
      • User name and password: Similar to Windows basic authentication in IIS, or explicit authentication in Citrix Web Interface. Prompts the user to enter their credentials at logon.
        • In this scenario, the broker passes the user’s credentials to the target ICA server on behalf of the client
      • Domain Pass-through: Similar to Integrated Windows Authentication in IIS
        • Allows Receiver for Windows endpoints to automatically log on using the local session’s logged-on domain account (via NTLM)
        • In this scenario, the client sends credentials (via ssonsvr.exe) directly to the target ICA server
        • For pass-through authentication to work, you must use the /includesson switch when installing Receiver (per CTX133982), which tells the meta-installer to include the ‘SSON’ component (ssonsvr.exe) that is needed to capture the user’s domain credentials at logon. Receiver relies on the SSON component to send the user’s domain credentials to the StoreFront server’s StoreWeb (via browser), Store (via Receiver), or legacy PNAgent (via Online Plugin / Receiver Enterprise) virtual directory.
          • There is currently a bug with XenDesktop 7 published desktops using pass-through authentication to provide ‘FlexCast’ functionality (enumerating and launching published apps from within the published desktop), where ssonsvr.exe won’t run (it crashes at logon) because pnsson.dll isn’t playing nice with the ICA stack at session logon.
          • Citrix has provided a provisional test-fix to customers with an open case, and will soon be releasing a public hotfix. As of this post, pass-through authentication doesn’t work on XenDesktop 7 published desktops running Receiver 3/4 without this fix in place.
      • Smart Card: Allows smart card pass-through
        • Thankfully I don’t work with this method very often, so I’ll refrain from digging in. Just know that it’s required if smart-cards are used in the environment
      • Pass-through from NetScaler Gateway: Allows a NetScaler Gateway virtual server to handle user authentication on behalf of the user
        • Requires Set-BrokerSite -TrustRequestsSentToTheXmlServicePort $true to be set on the DDC/XML broker
        • Use the Configure Delegated Authentication option to specify that the NetScaler send logon credentials directly to the remote Windows session

delegatedauth

    • There are also client-side registry values that control whether or not to allow pass-through authentication, and can even lock down the feature to only work with StoreFront sites in specified Internet Explorer Security Zones (aka Client Selective Trust). The easiest way to adjust this is to use the ADM template in %ProgramFiles%\Citrix\ICA Client\Configuration\icaclient.adm

Image

In my next post on XD7 session launch I’ll talk about resource enumeration, including details about the virtual XML broker and XenDesktop broker services.

XenApp/PVS Global Farm Overview

Since there was a lot of interest in the last Visio I posted, I thought I’d share another.

These diagrams outline high-level overviews of a global XenApp w/PVS deployment, with XenApp zones and PVS sites in each datacenter. Each XenApp zone has two data collectors/XML brokers and PVS-streamed, OU-based worker groups. Each geographic region has a corresponding StoreFront Store (directed by host name):

XenAppGlobal

 

The PVS farm configuration is very similar, consisting of sites in each datacenter to stream XenApp workers for each XenApp zone in that datacenter, with the master database homed in the US datacenter:

PVSGlobal

 

The intent of these overviews is mainly to demonstrate how the XenApp and PVS farms interact in a global zone/site architecture. I’ll share some overview diagrams of XenApp zones and PVS sites in another post. Hope you enjoy!

SiteDiag v1.2 for XD7

I think I’ve gotten SiteDiag working pretty well for XD7 now, and feel comfortable sharing it as a stable release. I also did some basic testing on XD5, and there don’t appear to be any noticeable regressions. As of version 1.2 (10/2/13) I added application icons into the tool using the Get-BrokerIcon cmdlet to convert the Base64 strings to images in the TreeView.

I’ll continue working to build out the functionality of the tool on XD7, so stay tuned for updates as progress is made.

Click here to download the latest stable build.

SiteDiagXD7

XenDesktop 7 Service Instances – What’s New?

Since XenDesktop 7 was built using the same service framework architecture as XenDesktop 5 (aka the ‘FlexCast Management Architecture’), the additional functionality introduced in XD7 was added as services, each with multiple service instances. These services are handled much in the same way as in XenDesktop 5, and XenDesktop 7 sites use version 2 of the Citrix Admin PowerShell snap-ins to return information on registered service instances using the same cmdlets as XD5 (Get-ConfigRegisteredServiceInstance, Register-ConfigServiceInstance, etc.).

In XenDesktop 5, each DDC in a site has 5 services, with 12 total service instances that correspond to the various WCF endpoints used by each service. If the DDC is also running the Citrix License Server, there would be a total of 13 instances. For this reason, it’s a fairly straightforward process to find and register missing service instances.
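
As a sketch of that process (not a prescription), the usual pattern is to ask a service for the instances it exposes and feed them back to the configuration service; the controller name below is a placeholder:

# Re-register the Broker service's instances with the configuration service,
# then re-count what's registered; repeat per service as needed.
Add-PSSnapin Citrix.*
Get-BrokerServiceInstance -AdminAddress xd5-ddc-01 | Register-ConfigServiceInstance
(Get-ConfigRegisteredServiceInstance -AdminAddress xd5-ddc-01).Count   # expect 12 (13 with licensing)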

XenDesktop 7 is quite different in this regard. Since it has optional FMA services, such as StoreFront, the number of service instances in any given site depends on which components are installed, and whether or not SSL is in use.

For example, my single-DDC site running StoreFront 2.0 with SSL encryption has 10 services with 43 total service instances:

XenDesktop 7 Services

If StoreFront wasn’t installed, for example, there would be at least three fewer services (some of the Broker services would likely not be registered). There are also duplicate service instances for SSL-encrypted services, such as the virtual STA service. Here’s a quick PoSH script to tell you what service instances are registered in your site (for XD5 & XD7):

# Load the Citrix snap-ins and count every registered service instance
asnp Citrix.*
$count = 0
Get-ConfigRegisteredServiceInstance -AdminAddress na-xd-01 | %{
    "ServiceType: " + $_.ServiceType + " Address: " + $_.Address; $count++ }
"Total Instances: " + $count

You could take this a step further to see how many instances are in each of the 10 possible service types:

New-Alias grsi Get-ConfigRegisteredServiceInstance
$acct = grsi -AdminAddress na-xd-01 -ServiceType Acct; "$($acct.Count) ADIdentity service instances"
$admin = grsi -ServiceType Admin; "$($admin.Count) Delegated Admin service instances"
$broker = grsi -ServiceType Broker; "$($broker.Count) Broker service instances"
$config = grsi -ServiceType Config; "$($config.Count) Configuration service instances"
$envtest = grsi -ServiceType EnvTest; "$($envtest.Count) Environment Test service instances"
$hyp = grsi -ServiceType Hyp; "$($hyp.Count) Hosting Unit service instances"
$log = grsi -ServiceType Log; "$($log.Count) Configuration Logging service instances"
$monitor = grsi -ServiceType Monitor; "$($monitor.Count) Monitor service instances"
$prov = grsi -ServiceType Prov; "$($prov.Count) Machine Creation service instances"
$sf = grsi -ServiceType Sf; "$($sf.Count) StoreFront service instances"
"$($acct.Count + $admin.Count + $broker.Count + $config.Count + $envtest.Count + $hyp.Count + $log.Count + $monitor.Count + $prov.Count + $sf.Count) Total service instances"

XenDesktop 7 Service Instance Count
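
For a more compact view of the same data, a quick Group-Object pass gives a per-type count in a couple of lines (a sketch, using the same AdminAddress as above):

# Summarize registered service instances per service type.
Add-PSSnapin Citrix.*
Get-ConfigRegisteredServiceInstance -AdminAddress na-xd-01 |
    Group-Object ServiceType | Sort-Object Name |
    Format-Table Name, Count -AutoSize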

Because of this nuance, I’m working on a more intelligent way of enumerating and validating service instance registrations in SiteDiag for XD7. Hopefully these scripts are helpful in illustrating the difference between XD5 & XD7. Also, here’s the latest nightly build of SiteDiag that has the beginnings of the additional logic needed to properly count, and fix, registered service instances in a XenDesktop 7 site.

XenDesktop 7 – Environment Test Service

If you’ve had a chance to review the XenDesktop 7 PowerShell SDK documentation, you might have noticed a few new snap-ins that provide the site interactions for the new services included with XenDesktop 7 (as part of the FlexCast Management Architecture). These new snap-ins are designated as V1 on the cmdlet help site, and include StoreFront, Delegated Admin, Configuration Logging, Environment Tests, and Monitoring.

Out of these new services, the Environment Test Service sounds the most appealing to me, as it provides a framework to run pre-defined tests and test suites against a XenDesktop 7 site. However, I found that the SDK documentation didn’t provide much/any guidance on using this snap-in, so I thought I’d share a quick rundown on the meat of this new service, along with some sample scripts using the main cmdlets.

The most basic function of this service is to run predefined tests against various site components, configurations, and workflows. As of XD7 RTM, there are 201 individual TestIDs, which can be returned by running the Get-EnvTestDefinition cmdlet:

TestId 
------ 
Host_CdfEnabled 
Host_FileBasedLogging 
Host_DatabaseCanBeReached 
Host_DatabaseVersionIsRequiredVersion 
Host_XdusPresentInDatabase 
Host_RecentDatabaseBackup 
Host_SchemaNotModified 
Host_SnapshotIsolationState 
Host_SqlServerVersion 
Host_FirewallPortsOpen 
Host_UrlAclsCorrect 
Host_CheckBootstrapState 
Host_ValidateStoredCsServiceInstances 
Host_RegisteredWithConfigurationService 
Host_CoreServiceConnectivity 
Host_PeersConnectivity 
Host_Host_Connection_HypervisorConnected 
Host_Host_Connection_MaintenanceMode...

The tests are broken down into several functional groups that align with the various broker services, including Host, Configuration, MachineCreation, etc, and are named as such. For example, the test to verify that the site database can be connected to by the Configuration service is called Configuration_DatabaseCanBeReached.

Each test has a description of its function, and a test scope that dictates what type of object(s) can be tested. Tests can be executed against components and objects in the site according to the TestScope and/or TargetObjectType, and are executed by the service synchronously or asynchronously, depending on their InteractionModel. You can view all of the details about a test by passing the TestID to the Get-EnvTestDefinition cmdlet; for example:

PS C:\> Get-EnvTestDefinition -TestId Configuration_DatabaseCanBeReached

Description : Test the connection details can be used to 
 connect successfully to the database.
DisplayName : Test the database can be reached.
InteractionModel : Synchronous
TargetObjectType : 
TestId : Configuration_DatabaseCanBeReached
TestScope : ServiceInstance
TestSuiteIds : {Infrastructure}

TestSuites are groups of tests executed in succession to validate groups of components, as well as their interactions and workflows. The Get-EnvTestSuite cmdlet returns a list of test suite definitions, and can be used to find out what tests a suite is comprised of. To get a list of TestSuiteIDs, for example, you can run Get-EnvTestSuite | Select TestSuiteID, which returns all of the available test suites:

TestSuiteId 
----------- 
Infrastructure 
DesktopGroup 
Catalog 
HypervisorConnection 
HostingUnit 
MachineCreation_ProvisioningScheme_Basic 
MachineCreation_ProvisioningScheme_Collaboration 
MachineCreation_Availability 
MachineCreation_Identity_State 
MachineCreation_VirtualMachine_State 
ADIdentity_IdentityPool_Basic 
ADIdentity_IdentityPool_Provisioning 
ADIdentity_WhatIf 
ADIdentity_Identity_Available 
ADIdentity_Identity_State

Each of these suites can be queried using the same cmdlet, and passing the -TestSuiteID of the suite in question. Let’s take DesktopGroup as an example:

PS C:\> Get-EnvTestSuiteDefinition -TestSuiteId DesktopGroup

TestSuiteId   Tests
-----------   -----
DesktopGroup  Check hypervisor connection, Check connection maintenance mode, Ch...

One thing you’ll notice with the results of this cmdlet is that the list of tests is truncated, which is a result of the default stdout formatting in the PowerShell console. For that reason, my preferred method of looking at objects with large strings (i.e. descriptions) in PowerShell is to view them in a graphical ISE (PowerGUI is my preference) and explore the objects in the ‘Variables’ pane.

For example, if you store the results of  Get-EnvTestSuiteDefinition -TestSuiteId DesktopGroup into a variable ($dgtest) in PowerGUI, each Test object that comprises the test suite can be inspected individually:


The DesktopGroup EnvTestSuite object
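
If you’d rather stay in the console, you can also expand the suite’s Tests collection and format it as a list, which sidesteps the truncation (assuming, per the screenshot above, each entry in Tests is a test definition object):

# Expand the DesktopGroup suite's Tests collection and show each test in full.
$dgtest = Get-EnvTestSuiteDefinition -TestSuiteId DesktopGroup
$dgtest | Select-Object -ExpandProperty Tests | Format-List *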

To start a test task, use the Start-EnvTestTask cmdlet, passing a TestID or, alternatively, a TestSuiteID, and a target object (as needed). For example:

PS C:\> Start-EnvTestTask -TestId Configuration_DatabaseCanBeReached

Active : False
ActiveElapsedTime : 11
CompletedTests : 1
CompletedWorkItems : 11
CurrentOperation : 
DateFinished : 9/16/2013 11:33:31 PM
DateStarted : 9/16/2013 11:33:20 PM
DiscoverRelatedObjects : True
DiscoveredObjects : {}
ExtendedProperties : {}
Host : 
LastUpdateTime : 9/16/2013 11:33:31 PM
Metadata : {}
MetadataMap : {}
Status : Finished
TaskExpectedCompletion : 
TaskId : 03f5480d-68e8-410a-9da4-5e65d96ac393
TaskProgress : 100
TerminatingError : 
TestIds : {Configuration_DatabaseCanBeReached}
TestResults : {Configuration_DatabaseCanBeReached}
TestSuiteIds : {}
TotalPendingTests : 1
TotalPendingWorkItems : 11
Type : EnvironmentTestRun

Once you know what tests there are, what they do, and what types of results to expect, health check scripts can easily be created using this service. Combinations of tests and test suites can, and should, be leveraged as needed to systematically validate XenDesktop 7 site components and functionality.
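
As a starting point, here’s a minimal health-check sketch built on the cmdlets above: it runs the Infrastructure suite and dumps whatever each test returned (the exact result properties vary by test, so I’m just formatting them as a list):

# Run the Infrastructure test suite and report the outcome of each test.
Add-PSSnapin Citrix.*
$run = Start-EnvTestTask -TestSuiteId Infrastructure
"Run {0} finished with status: {1}" -f $run.TaskId, $run.Status
$run.TestResults | Format-List *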

I plan on using these cmdlets to some extent in SiteDiag, and expect to get some good use out of this new service in the field. I’m interested to hear from anyone else who’s started using this snap-in, and if they’ve come up with any useful scripts.

NetScaler Gateway VPX v10.1 with StoreFront v2.0 – Encrypt and Theme!

I just finished up on a XenApp 6.5 upgrade where I replaced a single 2008R2 server running a DMZ’d CSG v3.2 SSL-proxied Citrix Web Interface v5.3 ‘Direct’ site with a NetScaler Gateway 10.1 Access Gateway virtual server and a StoreFront v2.0 Store.

This post is meant to share some tips on setting up and customizing a Citrix Receiver <> NetScaler Gateway <> StoreFront deployment. Before I get into the thick of it, I thought I’d share the following high-level topology of the environment I was working with:

XenApp65_SharedHostedDesktopDelivery

This scenario consists of WAN-connected Citrix Receivers accessing the XenApp farm via a NetScaler Gateway Access Gateway VPN-fronted StoreFront Store. The NetScaler Gateway Access Gateway virtual server provides AD auth via an LDAP authentication policy, and replaces the SSL-proxied ICA & HTTP handling that the Secure Gateway server previously provided (EOL’d since ’06, yet running on Win2008R2??). The NG-AG virtual server also acts as the landing page for web browsers, and as such has its own visual style that can (and SHOULD) be customized. Receiver connections are passed through to the Store virtual directory, and all other connections (web browsers) are directed to the StoreWeb virtual directory.

One major consideration I found in this topology is that if your StoreFront ‘Store’ is not SSL-encrypted, Citrix Receiver for Windows 3.1 and later will not work without tweaking a few client-side registry values (see CTX134341), even though the NetScaler Gateway session is encrypted. That said, a resultant consideration of securing the StoreFront site is that you need to be sure that the NetScaler trusts the StoreFront server’s SSL certificate.

To do this you need to install the StoreFront server’s certificate chain certs (root/intermediate CAs) on the NetScaler (here’s a good Citrix blog on the topic) and make sure the Access Gateway session policy profile’s ‘Web Interface Address’ uses the same name that the StoreFront server’s certificate was issued to, and that the NetScaler can resolve the name via DNS. The other pieces of getting this setup working are pretty easy, thanks in large part to the foolproof NetScaler Gateway setup wizard (eDocs link) and StoreFront’s ‘Add NetScaler Gateway Appliance’ wizard (eDocs). As long as your SSL is working properly, this is a fairly painless install.

Once I got the site up and running, I immediately wanted to customize the NetScaler Gateway VPN web interface to make it look like the StoreWeb site that browser users are redirected to. Out of the box, the NG-AG site is themed with the old (boring) CAG visual style, which mimics the old WI 5.0-5.3 black & blue sites. Since this page is proxying for the StoreFront site, it makes for a very awkward, time-machinish experience to log in to the black-and-blue site and land in StoreFront’s newer green bubble land!

I didn’t have to look hard to find Jeff Sani’s blog article, which I’ve referenced many times before, and which provides step-by-step instructions on applying the StoreFront look and feel to a NetScaler’s Access Gateway. After running through this, I decided to change the logo and background, and referenced Terry D’s blog on customizing a StoreFront site by way of custom CSS. I used WinSCP and PuTTY to make the changes, and pretty quickly had a nice looking landing page to front the StoreFront Store:

CustomLandingPage

I then did the same on the StoreFront server using Notepad++, and was able to give the customer a customized and consistent look and feel by adding the following custom.style.css to the C:\inetpub\wwwroot\Citrix\StoreWeb\contrib folder of the StoreFront server:

body {
  background-image: url("custom.jpg");
  background-color: #262638;
}
#credentialupdate-logonimage, #logonbox-logoimage {
  background-image: url("custom.png");
  width: 180px;
  height: 101px;
  right: 63%;
}
.myapps-name {
  font-weight: bold;
  color: #000;
}

CustomStoreFrontWeb

Well, that’s about all the time I have for today. I hope someone finds this post helpful in producing a functional, and visually consistent, NetScaler Gateway fronted StoreFront deployment!

Exploring ShareFile’s ‘StorageZones’ Services

I was looking for more information on what makes a ShareFile StorageZone ‘tick’, and couldn’t find much that got into the nuts and bolts of this great feature. This post is intended to share some general information about the various StorageZones controller services, including their basic functionality, and some hidden configuration settings.

For the scope of this post, I’m going to focus on the three Windows services that are installed as part of a StorageZone v2.1 Controller. Each service is installed off the root of the IIS site as follows:

  • File Cleanup Service – Citrix\StorageCenter\SCFileCleanSvc\FileDeleteService.exe
  • File Copy Service – Citrix\StorageCenter\SCFileCopySvc\FileCopyService.exe
  • Management Service – Citrix\StorageCenter\s3uploader\S3UploaderService.exe

Each of these directories contains the service’s .NET .config file, which can be modified to enable logging and adjust hidden configuration settings. For example, if you open FileDeleteService.exe.config, you’ll see the following XML by default:

<?xml version="1.0"?>
<configuration>
   <appSettings>
       <add key="ProducerTimer" value="24"/> <!--Time interval in hours-->
       <add key="DeleteTimer" value="24"/> <!--Time interval in hours-->
       <add key="DeleteTimer" value="24"/> <!--Time interval in hours-->
       <add key="Period" value="7"/> <!--No. of days to keep data blob in active storage after deletion-->
       <add key="logFile" value="C:\inetpub\wwwroot\Citrix\StorageCenter\SC\logs\delete_YYYYMM.log"/>
       <add key="enable-extended-logging" value="0"/>
       <add key="BatchSize" value="5000"/></appSettings>
<startup><supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/></startup></configuration>

As you might have guessed, setting enable-extended-logging to 1 will enable verbose logging after the service is restarted, writing to the specified logFile path. This setting is the same for the other services, and can come in handy when troubleshooting issues with a StorageZone.
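
As a quick sketch, you could flip that setting and bounce the service from PowerShell; the install path below follows the default IIS root noted above, and the Windows service display name is an assumption to verify with Get-Service on your controller:

# Enable extended logging for the File Cleanup Service, then restart it.
$configPath = "C:\inetpub\wwwroot\Citrix\StorageCenter\SCFileCleanSvc\FileDeleteService.exe.config"
[xml]$config = Get-Content $configPath
($config.configuration.appSettings.add | Where-Object { $_.key -eq "enable-extended-logging" }).value = "1"
$config.Save($configPath)
Get-Service -DisplayName "*File Cleanup*" | Restart-Service   # display name is an assumption; confirm with Get-Service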

In order to really understand what these services were doing, I decided to poke through the source code by decompiling the services’ assemblies using a free utility called DotPeek. Here’s a summary of what I found for each service’s functionality within a StorageZone Controller.

File Cleanup Service (FileDeleteService)

The name says it all here, as this service’s sole responsibility is managing data deletion from the StorageZone storage repository. Since all of the data stored by ShareFile is in BLOB format, deleting a file through the ShareFile front-end doesn’t actually delete it from storage; it simply ‘de-references’ the data and marks it as ‘expired’.

This ‘expired’ data will remain in the storage repository until it’s ‘cleaned up’ by the File Cleanup Service. This is why if you look at a folder’s recycle bin, you’ll see the files are still listed and available for recovery until the configurable cleanup period lapses (7 days by default).

Citrix recommends configuring this cleanup period to match the backup schedule of your storage device so that data is removed shortly before or after it’s backed up. This design also allows data to be recovered even if it’s not in the recycle bin, by using the “Recover Files” function in the StorageZone section of ShareFile’s Admin page.

Here are the .config extended settings for this service, along with their default values:

  • ProducerTimer = 24 – time interval in hours
  • DeleteTimer = 24 – time interval in hours
  • Period = 7 – number of days to keep data in active storage after deletion

File Copy Service (SCFileCopySvc)

This service is what allows the StorageZones controller to communicate with ShareFile’s cloud infrastructure (by way of the ShareFile API), and allows users to upload and download files directly to and from a customer’s on-premise storage.

When a file is uploaded, ShareFile’s servers connect to the controller through this service to initiate an HTTP(S) POST request, allowing the data to be stored directly in the StorageZone. The service also converts files to and from ShareFile’s proprietary format, converting files to BLOB data for uploads, and converting BLOB data back to the original file for downloads.

The service also has a configurable timer value (the default is 10 seconds: key="CopyTimer" value="10000") that controls how often retries are attempted for jobs that previously failed due to connectivity issues.

Management Service (S3Uploader)

Last but not least, the poorly named Management Service, which only really ‘manages’ transferring files to and from Amazon’s S3 cloud storage service. This service uses Amazon’s AWS SDK for .NET to take care of the data transfer, and is what allows you to migrate data between ShareFile’s cloud storage and the StorageZone.

There are a couple of configurable settings for this service as well; here they are with their default values:

  • httpMethod = https – transport method; secure or non-secure
  • HeartBeat-Interval = 5 – interval in minutes
  • Recovery-Interval = 3600 – interval in seconds

Well, I hope this post is useful for anyone who is using, or planning on using, ShareFile’s StorageZones feature. Feel free to share any other insights or thoughts in the comments!