

Random Streams of Consciousness during “USE IT OR LOSE IT” vacation: Remote PC is ^&%$!* AWESOME and why isn’t everybody talking about it?

So I am sitting in the coffee shop at my hotel on the Oregon coast, using some of my “use it or lose it” vacation before the year ends. Last night, I literally had the greatest pint of beer in my entire life (India Pelican Ale from the Pelican Pub and Brewery in Pacific City, Oregon), and I have just noticed that the same bar I drank it at opens for breakfast at 8 AM. I am wondering if I have a drinking problem, because I have now rationalized that it is 11 AM (early lunch) on the East Coast and I could easily blame drinking this early on jet lag. Like a lot of IT workaholics, I am really trying to get better at this whole “vacation” thing. At any rate, I thought I would sit down and read Jarian Gibson’s post on Remote PC and try to “get hip” to it, as I myself am pretty excited about it.

For the record, I am NOT a VDI guy. That in and of itself is no longer a badge of shame, and it has been nice to see the virtualization community become more tolerant of those who are not jumping for joy over VDI. That said, I think VDI is totally cool, but it is very hard to justify paying more for desktop delivery, and trying to sell OPEX savings to CIOs who worry about the next quarterly stock call is a tough row to hoe in the world of publicly held companies. Then, in May, I read Jarian Gibson’s blog about Remote PC, to which I immediately asked “Can I haz it?”

Now I am excited; this works better than traditional VDI for SO MANY reasons. Let’s take 1000 flex/teleworkers as an example.

The 1000 Teleworker Scenario:
Say you have to set up a telework solution for 1000 remote users. Typically this involves procuring 1000 laptops, sending the users home with them, and then building out the back-end infrastructure to support either XenApp or XenDesktop.

Sending users home with a Laptop and providing VDI Access:
So I am doing some brief estimating, but assuming a laptop costs around $1,000, supplying 1000 end users with one puts a cool 1 million dollars into the project right out of the gate.

Project Cost so far: $1,000,000

Supporting the back end infrastructure:

From a quick, rough estimate of VDI memory/core requirements, I would say you need at least 20 servers to accommodate 1000 XenDesktop users. At around $10K per server, you are looking at another $200,000 in back-end hardware costs (not to mention licensing).

Project Cost so far: $1,200,000

So, in addition to the licensing debacle you get to go through with Microsoft (one I have since thrown my hands up in disgust over) and the setup of the infrastructure, you are 1.2 million into this deployment. We could switch to XenApp (now we’re talkin’!) to save a little more. If you use VMware (I don’t want to get involved in the hypervisor holy war), then you are going to have more cost as well.

So with XenApp, I think you should be able to get by with 9 servers (30 users per 8GB VM w/2 vCPUs). At 9 servers you are looking at $90,000, putting you at around $1.09 million for your project. Nice savings, but you are still stuck with building out the back-end infrastructure.

Remote PC Scenario:

With the Remote PC scenario, we get a chance to actually take advantage of BYOD (bear with me here) and of the cheap PC models that are out there. We can replace the initial investment in a $1,000 laptop with a $400-$600 desktop (bulk PC purchases get this kind of leverage). This presents an opportunity to reduce that cost from $1 million to $400K-$600K. (Let’s use $500K as a baseline.)

Now you’re talking the “language of love” to your CIO: CAPEX. Not only have you reduced the initial procurement costs, but you do not need to build out the same amount of back-end infrastructure. In the Remote PC scenario, you have your DDCs brokering the connections, but the XenApp/XenDesktop farms are completely gone, or maybe reduced to one or two XenApp servers for apps like ArcGIS, SAS and CAD.
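Just to make the napkin math explicit, here is a quick PowerShell sketch of the three scenarios using the rough estimates above (these are my ballpark figures, not quotes):

```powershell
# Rough cost model for 1000 teleworkers, using the estimates from this post.
$users = 1000

# Scenario 1: $1,000 laptops plus a XenDesktop back end (20 hosts at $10K each)
$vdiTotal = ($users * 1000) + (20 * 10000)        # $1,200,000

# Scenario 2: same laptops plus a XenApp back end (9 hosts at $10K each)
$xenAppTotal = ($users * 1000) + (9 * 10000)      # $1,090,000

# Scenario 3: Remote PC - $400-$600 bulk desktops ($500 baseline);
# the DDCs broker connections, so the farm hardware mostly goes away
$remotePcTotal = $users * 500                     # $500,000

"VDI: {0:N0}  XenApp: {1:N0}  Remote PC: {2:N0}" -f $vdiTotal, $xenAppTotal, $remotePcTotal
```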

I have spent hours extrapolating CPU cores and RAM to try to come up with a user density. In all likelihood, you have several thousand cores and several terabytes of RAM sitting at desks and in cubicles that can now be tapped for remote access using Remote PC.
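If you want to actually put a number on that, a quick WMI sweep will do it. A minimal sketch, assuming you have WMI/RPC access to the PCs and a hypothetical desktops.txt list of machine names:

```powershell
# Tally the CPU cores and RAM already sitting at desks and in cubicles.
$totalCores = 0; $totalRamGB = 0; $reached = 0
foreach ($pc in (Get-Content .\desktops.txt)) {
    try {
        $cpus = Get-WmiObject Win32_Processor -ComputerName $pc -ErrorAction Stop
        $sys  = Get-WmiObject Win32_ComputerSystem -ComputerName $pc -ErrorAction Stop
        $totalCores += ($cpus | Measure-Object NumberOfCores -Sum).Sum
        $totalRamGB += [math]::Round($sys.TotalPhysicalMemory / 1GB)
        $reached++
    } catch { Write-Warning "Could not reach $pc" }
}
"{0} PCs reachable: {1} cores and {2} GB of RAM ready for Remote PC" -f $reached, $totalCores, $totalRamGB
```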


Why this would work:

While working to set up the teleworking solution at a previous employer, we noted a few things. First, after making a seven-figure investment in laptops, we found that only 20% of them (and that’s a generous number) actually connected to us remotely. The remaining users insisted on using their own equipment. Take my case, for example: at my desk at home, I have the “Crackpot Command Center” going, with four monitors and a ball-busting six-core 16GB system (as any true geek would). So, when I want to connect to work, am I supposed to unplug everything and connect my keyboard, mouse and ONE MONITOR (seriously?) to my laptop? Maybe two monitors if I have a docking station? No freakin’ way!

Even non-geeks already have their setup at home, and I doubt they have a data-switch box to flip back and forth. So a teleworker can either work from the kitchen table OR UNPLUG their monitor and plug it into the docking station or laptop? The fact is, this is just not likely; the same end user would rather just use their own equipment. This is something I witnessed first-hand, to the complete shock of management.

In addition to the BYOD, or UYOD (use your own device), paradigm, you also maintain support continuity. The first time we discussed VDI with our server group, my team looked at me like I was crazy. Desktop management is 20 years in the making, and there are old, established methods of supporting it. A complete forklift of the status quo is much more difficult than just provisioning desktops on the fly.

One of the issues with VDI was the inability to get people to understand that your 5-person Citrix team cannot support 10,000 desktops. Even more, they did not put 5 to 10 years into their IT careers to go back to supporting desktops. I personally am not overly excited to deal with desktops after 16+ years in this industry, and neither are most of the server admins I work with. The inability to integrate XenDesktop/View/VDI in general with the incumbent support apparatus at an organization is a significant and, in my opinion, often overlooked barrier to adoption. Your Citrix team likely is not THAT excited about doing it, and the desktop team is completely intimidated by it. We go from imaging systems with the “Corporate Image” to setting up PVS, configuring PXE, DHCP scopes and DHCP failover, logging into the hypervisor…etc. Folks, it’s a desktop; the level of complexity of a large-scale VDI deployment is far more advanced and much less forgiving than imaging laptops as they come in the door. Advances in SCCM integration for XenDesktop were a welcome and timely feature, but ultimately Remote PC delivers continuity of support because it is little more than an agent installed on the existing PC. The same people who support the PC today can continue to do so, server admins are not being asked to become desktop admins, and the only thing that changes is that you extend your infrastructure into the cloud by integrating the DDC and the Access Gateway, giving users the same consistent experience regardless of where they work from.
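That continuity is easy to demonstrate: the desktop team can health-check the Remote PC agent with the same tooling they use for any Windows service. A minimal sketch, assuming the agent’s service name is BrokerAgent (the Citrix Desktop Service; verify for your version) and a hypothetical remotepc-list.txt of machine names:

```powershell
# Health-check the Remote PC agent like any other Windows service.
# 'BrokerAgent' is assumed to be the Citrix Desktop Service name - confirm on your build.
foreach ($pc in (Get-Content .\remotepc-list.txt)) {
    $svc = Get-Service -ComputerName $pc -Name 'BrokerAgent' -ErrorAction SilentlyContinue
    if ($svc) { "{0}: {1}" -f $pc, $svc.Status }
    else      { "{0}: unreachable or agent missing" -f $pc }
}
```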

You know what, I could BE a VDI guy if:

• I don’t have to put Windows 7 images on my million-dollar SAN (I LOVE Atlantis but it is not safe to assume your Citrix team can put the squeeze on your storage team)
• I don’t have to strong arm Server Admins to do Desktop Support
• I don’t have to buy a 2nd Windows License (or deal with Licensing)
• It can be made consistent enough that the incumbent Desktop team can support it

Holy crap! I’m out of excuses…I think I could become a VDI guy…

Hey CCS! I bet you can even install the Edgesight agent?! (They’ll get the joke.) What’s not to like here? Yes, VMware and HP/Dell/Cisco might be a little bent for a while since you won’t need as much hardware and hypervisor software, and Microsoft might find themselves chagrined that they cannot gouge you for more licensing costs, but in the end you get to simply extend your enterprise into the cloud without drastically changing anyone’s role. This also allows organizations to wade into VDI instead of standing at the end of the high dive at the public pool while Citrix, VMware and Gartner chant “JUMP, JUMP, JUMP!”

Isn’t that what we wanted when all this started?

Thanks for reading

John


Preparing for life without Edgesight with ExtraHop

So, the rumors have been swirling, and I think we have all come to the quiet realization that Edgesight is coming to an end. At least the Edgesight we know and love/hate.

For those of us who have continued this labor of love, trying to squeeze every possible metric we could out of Edgesight, we are likely going to have to come to grips with the fact that the next generation of Edgesight will not have the same level of metrics we have today. While we all await the next version of HDX Edgesight, we can be almost certain that the data model, and all of the custom queries we have written over the last 3 years, will not be the same.

Let’s be honest, Edgesight has been a nice concept, but there have been extensive problems with the agent, both from a CPU standpoint (the Firebird service taking up 90% CPU) and in keeping versions consistent. The real-time monitoring requires elevated permissions for the person looking into the server, forcing you to grant your service desk higher permissions than many engineers are comfortable with. I am, for the most part, a “tools” hater. In the last 15 years I have watched millions of dollars spent on any number of tools, all of which told me they would be the last tool I would ever need, and all of them, in my opinion, were for the most part underwhelming. I would say that Edgesight has been tolerable to me, and it has done a great job of collecting metrics, but like most tools I have worked with it is agent-based, and it cannot log in real time. The console was so unusable that I literally have not logged into it for the last four years. (In case you were wondering why I don’t answer emails with questions about the console.)

For me, depending on an agent to tell you there is an issue is a lot like telling someone to “yell for help if you start drowning.” If a person is under water, it’s a little tough for them to yell for help. With agents, if there is an issue with the computer, whatever it is (CPU, disk I/O, memory) will likely impact the agent as well. The next best thing, which is what I believe Desktop Director is using, is to interrogate a system via WMI. Thanks to folks like Brandon Shell, Mark Schill and the people at Citrix who built the PowerShell SDK, this has given rise to some very useful scripting that delivers the real-time logs we have desperately wanted. That works great for looking at a specific XenApp server, but in the Citrix world, where we are constantly “proving the negative,” it does not provide the holistic view that Edgesight’s downstream server metrics provided.
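To give you a flavor of that scripting, here is a minimal sketch using the XenApp 6.x PowerShell SDK (XA01 is a placeholder server name):

```powershell
# Real-time farm data straight from the XenApp 6.x PowerShell SDK
Add-PSSnapin Citrix.XenApp.Commands -ErrorAction Stop

# Load on every server in the farm right now (0-10000 scale)
Get-XAServerLoad | Sort-Object Load -Descending | Format-Table ServerName, Load -AutoSize

# Who is logged on to a specific server at this moment (XA01 is a placeholder)
Get-XASession -ServerName XA01 |
    Where-Object { $_.State -eq 'Active' } |
    Format-Table AccountName, LogOnTime -AutoSize
```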

Proving the negative:

As some of you are painfully aware, Citrix is not just a Terminal Services delivery solution. In our world, XenApp is a web client, a database client, a printing client and a CIFS/SMB client. Poor performance from any of these protocols will result in a ticket resting in your queue regardless of the downstream server’s performance. Edgesight did a great job of providing this metric, letting you know if you had a 40-second network delay getting to a DFS share or a 5000ms delay waiting for a server to respond. It wasn’t real-time, but it was better than anything I had used until then.

While I loved the data Edgesight provided, the agent was problematic to work with; I had to wait until the next day to actually look at the data; unless you ran your own queries and did your own BI integration, you had yet another console to go to; and you needed to give the service desk higher credentials to use the real-time console.

Hey! Wouldn’t it be great if there were a solution that gave me the metrics I need for a holistic view of my environment? Even better, if it were agentless, I wouldn’t have to worry about which .NET Framework version I had, changes in my OS, the next security patch that takes away kernel-level access, or just all-around agent bloat from the other two dozen agents I already have on my XenApp server. Not to mention that the decoupling of GUIDs and images, thanks to PVS, has caused some agents to really struggle to function in this new world of provisioned server images.

It’s early in my implementation, but I think I have found one…ExtraHop.

ExtraHop is the brainchild of ADC pioneer Jesse Rothstein, one of the original developers of the modern Application Delivery Controller. ExtraHop sits on the wire, grabs pertinent data and makes it available to your engineers and, if you want, your operations staff. Unlike Wireshark, a great tool for troubleshooting, it does not force you, figuratively, to drink water from a fire hose. They have formed relationships with several vendors, gained insight into their packets, and are able to discriminate between the packets that are useful to you and the ones that are not. I am now able to see, in real time, without worrying about an agent, ICA launch times and the authentication time when a user launches an application. I can also see client latency and virtual channel bytes in and bytes out for printer, audio, mouse, clipboard, etc.

(The Client-Name, Login time and overall Load time as well as the Latency of my Citrix Session)

In addition to the Citrix monitoring, it helps us with “proving the negative” by providing detailed data about database, HTTP and CIFS connections. This means you can see, in real time, performance metrics of the application servers that XenApp is connecting to. If there is a specific URI that is taking 300 seconds to process, you will see it when it happens, without waiting until the next day for the data or having to go to edgesightunderthehood.com to see if John, David or Alain has written a custom query.

If a conf file has an improper DNS entry, it will show up as a DNS query failure. If your SQL Server is getting hammered and is sending RTOs, you will see it in real-time/near-time and save yourself hours of troubleshooting.
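You can spot-check the DNS side of that yourself. A minimal sketch that tries to resolve every hostname mentioned in a conf file (the file path and hostname pattern are placeholders for your environment):

```powershell
# Resolve every hostname found in a conf file; stale entries fail fast here.
# .\app.conf and the corp.local pattern are placeholders.
$hosts = Select-String -Path .\app.conf -Pattern '[\w.-]+\.corp\.local' -AllMatches |
    ForEach-Object { $_.Matches } | ForEach-Object { $_.Value } | Sort-Object -Unique
foreach ($h in $hosts) {
    try   { [void][System.Net.Dns]::GetHostEntry($h); "$h resolves" }
    catch { Write-Warning "$h does NOT resolve" }
}
```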

(Below, you see the different metrics you can interrogate a XenApp server for.)


ExtraHop Viewpoints:
Another advantage of ExtraHop is that you can look at metrics from the point of view of the downstream application servers as well. If you publish an IE application that connects to a web server backed by a downstream database server, you can go to that web server and look at the performance of both it and the database server. If you have been a Citrix engineer for more than three years, you should already be used to doing the other teams’ troubleshooting for them, but this will make it even faster. You get a true, holistic view of your entire environment, even outside of XenApp, where you can find bottlenecks, flapping interfaces and tables that need indexing. If your clients are on an internal network, depending on your topology, you can actually look at THEIR performance on their workstations and tell whether the switch in the MDF is saturated.

Things I have noted so far looking at ExtraHop data:

  • SRV Record Lookup failures
  • Poorly written Database Queries
  • Excessive Retransmissions
  • Long login times (thus long load times)
  • Slow CIFS/SMB Traffic
  • Inappropriate User Behavior

GEOCODING Packets:
Another feature I like is the geocoding of packets. This is very useful if you want to bind a geomap to your XenApp servers to see if any malware is making connections to China or Russia, etc. (I have an ESUTH post on monitoring malware with Edgesight.) Again, this gives me a real-time look at all of my TCP connections through my firewall, or I can bind it on a per-XenApp, web server or even PC node. The specific image below is of my ASA 5505 and took less than 15 seconds to set up (not kidding).
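Even without ExtraHop’s geomap, you can do a poor man’s version of this check from the box itself. A sketch using Get-NetTCPConnection (Windows 8/Server 2012 and later), listing the non-RFC 1918 peers a server is currently talking to:

```powershell
# List external peers this box currently talks to, busiest first.
Get-NetTCPConnection -State Established |
    Where-Object { $_.RemoteAddress -notmatch '^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.|127\.)' } |
    Group-Object RemoteAddress |
    Sort-Object Count -Descending |
    Format-Table Name, Count -AutoSize
```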

On the wire (ExtraHop) vs. On the System (Agent):
I know most of us are “systems” guys and not so much network guys. Because there is no agent on the system and it works on the wire, you have to approach it a little differently, and then you can see how you can live without an agent. Just about everything that happens in IT has to cross the wire, and you already have incumbent tools to monitor CPU, memory, disk and Windows events. The wire is the last “blind spot” I have not had a great deal of visibility into, from a tools perspective, until I started using ExtraHop. Yes, there was Wireshark, but archiving data and looking at specific streams is not quite as easy. Yes, you can filter and you can “Follow TCP Stream” in Wireshark, but it is going to give you very raw data. I even edited a tcpdump-based PowerShell script to write the data to SQL Server, thinking I could archive the data that way; I had 20GB of data inside of 30 minutes. With ExtraHop, you can trigger wire captures based on specific metrics and events it sees in the flow, and all of the sifting and stirring is done by ExtraHop, leaving you to collect the gold nuggets.
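For what it’s worth, here is roughly what that dead end looked like, sketched with a hypothetical server and table; one row per packet is exactly how you end up with 20GB in 30 minutes:

```powershell
# Write one captured flow record to SQL Server (server/table names are hypothetical).
# At one row per packet this produced ~20GB in 30 minutes; hence the dead end.
$conn = New-Object System.Data.SqlClient.SqlConnection(
    'Server=SQL01;Database=NetStats;Integrated Security=True')
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = 'INSERT INTO Packets (Ts, Src, Dst, Bytes) VALUES (@ts, @src, @dst, @bytes)'
[void]$cmd.Parameters.AddWithValue('@ts',    (Get-Date))
[void]$cmd.Parameters.AddWithValue('@src',   '10.0.0.5')
[void]$cmd.Parameters.AddWithValue('@dst',   '10.0.0.9')
[void]$cmd.Parameters.AddWithValue('@bytes', 1460)
[void]$cmd.ExecuteNonQuery()
$conn.Close()
```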

Because it is agentless, you don’t have questions like “Will ExtraHop support the next edition of XenApp?”, “Will ExtraHop support Windows Server 2012?”, “What version of the .NET Framework do I need to run ExtraHop?” or “I am on server version X but my agents are on version Y.”

The only question you have to answer to determine whether your next generation of hardware/software will be compatible with ExtraHop is “Will it have an IP address?” If your product is going to have an IP address, you can use ExtraHop with it. Yes, you have to use RFC-compliant protocols, and ExtraHop has to continue to develop relationships with vendors for visibility, but in terms of deploying and maintaining it, you have a much simpler endeavor than with other vendors. The simplicity of monitoring on the wire is going to put an end to some of the more memorable headaches of my career revolving around agent compatibility.

Splunk/Syslog Integration:
So, I recently told my work colleagues that the next monitoring vendor who shows up saying I have to add yet another console is going to get a “no thanks.” While the ExtraHop console is actually quite good and lets you logically collate metrics, applications and devices the way you like, it also has extensive Splunk integration. If there are specific metrics you want sent to an external monitor, you can send them to your syslog server and integrate them into your existing syslog strategy, be it enVision, Kiwi Syslog Server or any other SIEM product. They have a JavaScript-based trigger solution that allows you to tap into custom flows and cherry-pick the metrics that are relevant to you. There is also a very nice and extensive Splunk app for ExtraHop.
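To show how little it takes to ride an existing syslog strategy, here is a minimal sketch that ships one metric line over UDP 514. The host name and message format are my own placeholders, not ExtraHop’s trigger API:

```powershell
# Ship a single metric line to a syslog server over UDP 514.
# $syslogHost and the message body are placeholders.
$syslogHost = 'syslog.corp.local'
$msg = "<134>$(Get-Date -Format 'MMM dd HH:mm:ss') XA01 ica: user=jsmith logon_ms=8200"
$udp = New-Object System.Net.Sockets.UdpClient($syslogHost, 514)
$bytes = [System.Text.Encoding]::ASCII.GetBytes($msg)
[void]$udp.Send($bytes, $bytes.Length)
$udp.Close()
```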

I am currently logging (in real time) the following with ExtraHop:

  • DNS failures (few people realize how badly poor DNS can wreck n-tiered environments)
  • ICA OPEN Events (to get logon times and authentication times)
  • HTTP User Agent Data
  • HTTP Performance Data

So if this works by monitoring the wire, isn’t it the Network team’s tool?
The truth is, it’s everybody’s tool; the only thing you need the network team to do is span ports for you (and then log in and check out their own important metrics). The DBAs can log in and check the performance of their queries; the network engineers can check jitter, TCP retransmissions, RTOs and throughput; the Citrix guy can check client latency, STA ticket delivery times, ICA channel throughput and logon/launch times; the security team can look for TCP connections to China and Russia and catch people RDPing to their home networks; and the web team can check which user agents are the most popular to determine whether they need to spend more time accommodating tablets. Everybody has something they need on the wire.

I sometimes fear that we tend to select our tools based on what technical pundits tell us to. In our world, from a vendor standpoint, we like to put things in boxes (a great irony, given everyone’s “think outside the box” buzz statement). We depend on thought leaders to put products in boxes and tell us which ones are leaders, visionaries, etc. I don’t blame them for providing product evaluations that way; we have demanded it. For me, ExtraHop is a great APM tool, but it is also a great network monitoring tool, and it has value to every branch of my IT department. This is not a product whose value can be judged by finding its bubble on a Gartner scatter plot.

Conclusion:
I have not even scratched the surface of what this product can do. The trigger engine gives you the ability to write nearly any rule you want to log and report any metric you want. Yes, there are likely things you can get with an agent that you cannot get without one, but in the last few years those agents have become a lot like a ball and chain. You basically install the appliance or import the VM, span the ports and watch the metrics come in. I have had to change my way of thinking about metrics gathering, from system-specific to siphoning data off the wire, but once you wrap your head around how it gets the data, you really get a grasp of how much more flexibility you have with this product than with agent-based solutions. The Splunk integration was the icing on the cake.

I hope to record a few videos showing how I do specific tasks, but please check out the links below, as they include several very good live demos.

To download a trial version: (you have to register first)
http://www.extrahop.com/discovery/

Numerous webinars:
http://www.extrahop.com/resources/

Youtube Channel:
http://www.youtube.com/user/ExtraHopNetworks?feature=watch

Thanks for reading and happy holidays!

John


The Evolution of the Remote Campus: HR 1722

In December of 2010, President Obama signed HR 1722, the Telework Enhancement Act of 2010. Basically, this means every federal agency now has less than six months to come up with a telework strategy for nearly 2 million federal employees. Recent storms in DC have rattled sabers over the last two years about developing a telework strategy for business continuity. However, in an era of wage freezes, cuts and layoffs, telework eligibility could mean the difference between key personnel staying or trying their luck in the private sector. One day a week at home in the DC area could easily be the equivalent of $1,000 or more back in an employee’s pocket.

Threaded into the legislation were requirements about reporting on participation, providing for accountability, and training employees on telework. I want to cover some of the concerns that come with this legislation and dispel the idea that IT organizations are somehow going to flip a switch and become teleworking hubs overnight. My agency recently had snow storms that all but shut down the city, yet well over half of the affected users were able to work at home as if it were business as usual. This did not happen with the flip of a switch; it took a few years of careful planning and painful lessons to get in a position to have this kind of success during the recent snow event.

Our solution is Citrix from stem to stern: a user connects to an AGEE and runs a virtual desktop via either XenApp or XenDesktop. We use Edgesight to monitor and alert on key metrics as well as to provide reporting and accountability.

There are a large number of resources on how to set up XenApp and XenDesktop, including how to work with profiles and how to size and scale your systems, and I am not going to reinvent the wheel here. But I do want to go over some concerns that can be forgotten as you plan a transition to having 10-20 percent of your workforce connecting remotely. Also, most remote access throughout the federal government is either VPN or Citrix; I want to contrast the benefits and risks of each technology and point out why I think thin computing may be the best answer for a large-scale remote access solution.

Hopefully your agency has Citrix expertise on hand; if not, please do not be afraid to reach out to Citrix partners who can work with your incumbent IT staff, or to systems integrators such as Perot, Lockheed, IBM, EDS, etc. These guys are fiends at implementing Citrix XenApp and XenDesktop and will help train and transition your staff.

Bandwidth:
Prior to my latest non-fiber provider, I had used both AT&T U-verse and FiOS. Both of these vendors provided 14+ Mb download speeds. My current provider gives me about a 10Mb download. This is great for surfing the web, delivering rich content on websites and watching movies on Netflix. For remote access solutions, though, these new high-speed broadband connections can sap your agency’s bandwidth post-haste. You have to ask yourself: is my agency ready to become an ASP? I am currently setting up a Citrix SSL VPN for my agency, and as part of the testing I went to my local CIFS share and downloaded a 100mb file; my speed actually got up to 5mb per second! I was thrilled to see how fast the file came down. Now, bring on 1000-3000 of my friends, all of us using VPN, and what we have is a meltdown as my agency’s bandwidth rapidly dwindles. While I was able to get up to 5mb down on my VPN connection, my equally productive Citrix ICA session hovers between 20K and 60K. Will my YouTube experience be the same? No, but it is good enough, and I am consuming at least 125 times less bandwidth.

The table and subsequent chart below were taken from this website, showing the number of government employees at a number of DC-area agencies. According to Citrix Online, in an article here, 61% of all government employees are in a “telework eligible” position. So, for example, in the table below you see that the Department of Veterans Affairs has 8,000 DC-area employees.

If 61% of the VA employees are telework eligible and they work at home one day a week, then 8,000 employees times 0.61, divided by 5, means 976 employees would be teleworking per day.

Agency                           Employees (thousands)   Metro DC area employees (thousands)
Executive departments            1,664                   238
Defense, total                   652                     68
    Army                         244                     20
    Navy                         175                     25
    Air Force                    149                     6
    Other                        84                      17
Veterans Affairs                 280                     8
Homeland Security                171                     23
Justice                          108                     24
Treasury                         88                      12
Agriculture                      82                      8
Interior                         67                      7
Health and Human Services        64                      30
Transportation                   55                      9
Commerce                         39                      20
Labor                            16                      6
Energy                           15                      5
State                            15                      12
Housing and Urban Development    9                       3
Education                        4                       3

To calculate the bandwidth, I used 1Mb per user as the reference for VPN; I feel this is pretty low, but I think you would have to earmark at least 1Mb per person if you were to scale out a VPN solution. I used 60Kb per user for ICA, which is generally pretty accurate for a normal ICA session without heavy graphics. With that, you can see the difference between providing remote access via full VPN and via ICA. In the case of the VA, around 1Gb would be needed to support 976 users via VPN, while around 60Mb would support the same number of users via ICA. From a bandwidth perspective, that is a huge savings.
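To spell out the arithmetic behind the VA example (the same math that produced the table below):

```powershell
# VA: 8,000 DC-area employees, 61% telework eligible, one telework day a week
$teleworkers = 8000 * 0.61 / 5          # 976 users on any given day

# Per-user figures used in the table: 1Mb for VPN, 60Kb (0.06Mb) for ICA
$vpnGb = $teleworkers * 1    / 1000     # ~0.98 Gb of VPN bandwidth
$icaGb = $teleworkers * 0.06 / 1000     # ~0.06 Gb (about 60Mb) for ICA
"{0} teleworkers: VPN {1:N2} Gb vs ICA {2:N2} Gb" -f $teleworkers, $vpnGb, $icaGb
```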

Agency                           Empl (thousands)   Metro DC (thousands)   Teleworkers/day   VPN BW (Gb)   ICA BW (Gb)
Army                             244                20                     2,440             2.50          0.14
Navy                             175                25                     3,050             3.12          0.18
Air Force                        149                6                      732               0.75          0.04
Other                            84                 17                     2,074             2.12          0.12
Veterans Affairs                 280                8                      976               1.00          0.06
Homeland Security                171                23                     2,806             2.87          0.16
Justice                          108                24                     2,928             3.00          0.17
Treasury                         88                 12                     1,464             1.50          0.09
Agriculture                      82                 8                      976               1.00          0.06
Interior                         67                 7                      854               0.87          0.05
Health and Human Services        64                 30                     3,660             3.75          0.21
Transportation                   55                 9                      1,098             1.12          0.06
Commerce                         39                 20                     2,440             2.50          0.14
Labor                            16                 6                      732               0.75          0.04
Energy                           15                 5                      610               0.62          0.04
State                            15                 12                     1,464             1.50          0.09
Housing and Urban Development    9                  3                      366               0.37          0.02
Education                        4                  3                      366               0.37          0.02

Bandwidth chart showing bandwidth requirements for VPN at 1Mb vs. ICA at 60Kb.

I am not trying to scare anyone with the bandwidth comparisons; rather, I am trying to drive home the paradigm shift that must take place in terms of what you deliver externally. Your agency must be ready to transition from delivering just web content, and maybe remote access to a few hundred users, to becoming a service provider to several hundred remote users. Do you have the bandwidth to support 20% of your eligible workforce working remotely? 60Kb looks a lot better than 1Mb, plus the performance of client/server applications is going to be considerably better because the transactions occur on the switched network.

And finally, I want to quickly touch on your switched infrastructure. While you may have a campus of 2,500 users, they are likely distributed across as many as 10-20 switches, and per-person bandwidth is more than adequate. And while the ICA bandwidth from the XenApp or XenDesktop machine to the end user may only be 60K, the traffic from the XenApp/XenDesktop system to downstream applications is full SMB, TCP, SSL, HTTP, RTSP, etc. If you are going from supporting 2,500 users across 20 switches to supporting 2,500 users on two to four switches, you need to make sure those switches can handle the sudden influx of usage. Treat your “remote campus” just like any other campus you have; you will need bandwidth similar to that of a core switch.

Security:
Another big challenge for a large-scale remote access solution is security. The current status quo is that most VPN users are IT staff and a few other select users the agency allows to have VPN access. Even with today’s endpoint analysis, ensuring a computer is a government asset and has antivirus and even encryption software is no guarantee that it will not have some sort of malware. Cyveillance.com states that AV vendors detect, on average, less than 19% of malware attacks. Zero-day malware will almost certainly go undetected on a government-issued workstation if it gets on there, and the VPN tunnel becomes a definite INFOSEC concern. This is another good reason to use ICA, as it differs from VPN in many ways beyond its lower bandwidth usage.

The ICA protocol sends screen refreshes over the wire on port 1494 or port 2598. Using the FIPS-compliant AGEE MPX 9700 series, you can drastically reduce your attack surface by forcing SSL to the appliance and only allowing ICA protocols to traverse the network. This means no information ever leaves the internal network, only screen refreshes. Agencies can use SmartAccess policies to determine whether users can print, save data locally or paste text onto their own systems. This, in effect, creates a secure kiosk that keeps data from leaving the network unless it is explicitly allowed. Is there still a role for VPN? Absolutely. For sysadmins, network and INFOSEC staff, there will always be a need for VPN, but for the general populace, Citrix with ICA can deliver a full desktop and run applications on the switched network, providing a considerably higher level of security along with better overall performance.

NOTE: During the snow event, our NetScaler MPX 9700 had over 5,000 connections on it, and the impact on CPU and memory was less than 5%. The device is new, and I believe this was the first real test of the FIPS multicore models from Citrix NetScaler. I would say this is a pretty stout machine!

Support:
Okay, so you have your secure remote access solution; now you have to figure out how to support it. At my agency, the “remote access” campus is the 2nd largest, at nearly 4,000 users a day and over 10,000 users a month. Most campuses have at least 5-10 Level II engineers supporting desktop-related issues as well as general user questions. Most Citrix teams I have seen are made up of 3-6 engineers, which begs the question: can you support 10,000 users with 3-6 engineers and still get anything done? Keeping your Level III staff out of the desktop support business is going to take careful planning, and I think it is a step that is often overlooked in the VDI/virtualization realm. For starters, most of my colleagues have not been desktop technicians for 7-10 years. We needed a way to ensure that end users could continue to call the service desk as they always have and get the help they need, avoiding a “blind spot” in our support strategy. One of my “soapbox” issues with VDI deployments is the lack of consideration given to desktop support during the implementation. I often wonder whether the fact that VDI is so dominated by architects and engineers, without being sold to the desktop staff, is the reason it has not skyrocketed after being called the next big thing by Gartner and other IT pundits. Architects, engineers and sysadmins may not be the only relevant audience in the VDI discussion; in fact, they may not even be the MOST relevant audience in the discussion.

(Stop ranting and move on, John.) Okay, for our deployment we realized two things: first, the users were remote, so there WAS no desktop support person to come help them; and second, we needed a better and more skilled Level I service desk to support the influx of remote users. We engaged in what was, at the time, a unique training regimen for the service desk staff. Basically, if a remote user cannot get connected by the person who answers the phone, they won’t be able to work, or the call will get escalated to your Level III engineers. This causes considerable dissatisfaction among end users, as well as among engineers who get overwhelmed with escalations. We have a 90% first-call resolution rate as a result of extensive training of our call center, and the rate at which the end user can be helped by the first person they talk to on the phone is directly proportional to the success of your remote access endeavor. Our training focused on a number of routine tasks (client installation, routine connectivity issues and credential-related issues such as password resets), but it also focused on what the common calls were. To accomplish this, we integrated business intelligence (SSRS) to provide a visual representation of our service desk call data. Keep in mind, regardless of how talented your team is and how well engineered your solution is, the people answering the phone are the “virtual face” of your system, and they need to believe in it just as much as you do.

Monitoring the Level I calls concerning Citrix was a huge step in the QA of our system and another major reason for our growth. By monitoring our calls, we were able to build focused training strategies and maintain situational awareness of our system. What we noticed was that 1-2 percent of all users would call the service desk with any number of standard issues regardless of how stable the system was. That means that if you suddenly have 1,500 teleworkers each day, you will receive an additional 15-30 service desk calls that day. Keep this in mind, as some call centers are already staffed pretty lean; 30 calls a day is likely another body’s worth of work. Another benefit of monitoring our Level I calls was the ability to check, after a change, that we did not see a spike in calls. The basic rule was to assign a “pit boss” each day to monitor our call dashboard and ensure everything was running smoothly. The standard rule is to look at a call and ask yourself, “could we make a system change to prevent this call?” If yes, take it into consideration; if not, don’t worry about it. As I said, 1-2% will always call no matter what (passwords, user errors, etc.). By monitoring the calls, we were able to grow by over 50% over the next two years while reducing our call volume by nearly the same percentage.

Other important tools we use are Edgesight, to look at historical data about a user’s latency and which systems they logged into; GoToAssist, so the service desk can support end users out in the field in the same manner as a desktop technician; several custom PowerShell scripts to pull key metrics from XenApp; and SQL Server Reporting Services, part of Edgesight, to create custom dashboards and integrate other data sources into a holistic view of the entire environment.

Conclusion:
There are telework think tanks and pundits all over the internet right now, and I know the amount of information is pretty overwhelming. I am trying to supplement some of that information with the real-world experience of moving from a fledgling Citrix farm to the 2nd-largest campus at a large federal agency. As I stated, treat your telework environment as a campus. Find out what support your population has at the desktop and make sure you can get as close to that as possible remotely. Again, the person answering the phone HAS to be able to get users back online, or things will go downhill from there. Watch your support calls and take an active interest in your system’s impact on your call centers and service desk. Work with them, sell them on the system, and be supportive of their concerns. Right now, if we make a mistake, there will be 100 calls to the service desk in less than 30 minutes. Understand the impact of 100 service desk calls in 30 minutes, and understand that when remote access is down, a whole campus is down.

Thanks for reading.

John