

Project Poindexter:VPN Logs

Total Information Awareness with your Netscaler/AGEE

Harvesting VPN Logs with the Netscaler:
When I first heard about Total Information Awareness I was a little concerned. Like a lot of my current team, I am one of those libertarians who really isn't keen on having his personal life correlated and analyzed by a program overseen by unelected officials. That said, as an individual responsible for the security and integrity of information systems, and as a person whose own personally identifiable information is in the databases of my bank, doctor, and employer, I do believe I am entitled to know what is going on, and I would like to think the stewards of my information are equally informed about what is happening with my data. For this reason, I decided to look into how I could better monitor activity on my Netscaler, and I wanted to provide an accompanying guide to my SCIFNET post/video showing how you can compartmentalize sensitive data using the VPX or a regular MPX-class Netscaler.

Most engineers are fully aware that the Netscaler platform is capable of sending information to a syslog server. This in and of itself is not that significant as many network/Unix based appliances can syslog. What I want to discuss in this post is how to use a very cheap syslog server to set up a fully functional log consolidation system that includes parsing specific records and writing them to a relational database.

I find a certain amount of frustration with today's six-figure event correlation systems; if you can only respond to a breach by doing "Find and Next" on a 90GB ASCII file, that is not an agile way to respond, and it is not where you need to be to react to an INFOSEC-related incident. As with Admiral Poindexter's vision, proper analysis of events can be an instrumental tool in the defense of your information systems.

Below is an example of a typical VPN log from your Netscaler/AGEE appliance:
06/15/2010:05:59:38 ns PPE-0 : SSLVPN HTTPREQUEST 94167 : Context wireless@192.168.1.50 – SessionId: 5- http://www.veoh.com User wireless : Group(s) SCIF-NET USERS : Vserver 192.168.1.100:443 – 06/15/2010:05:59:38 GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270-B8A183C27464&VEOH_GUIDE_AUTH=am1zYXpib3k6MTI3ODAyODkyMTM1NzpyZWdp – –

Using KIWI Syslog Server's parsing capability, I will parse this data and write it into a SQL Server database, allowing for very easy queries and, eventually, dashboards showing accountability and key data.

I have had engineers ask me how to get things like the client IP address and what users have accessed. I will provide a parsing script that pulls the following from the example above:

Context: wireless@192.168.1.50
Destination: http://www.veoh.com
Payload: GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270
*I have also included "Assigned_IP" in case any of you assign IP addresses instead of NATing. If you can capture the destination a user was going to, the need to account for every IP address becomes less important, but some folks insist on not NATing their users; if so, the parse script will grab their IPs as well.
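
If it helps to picture the landing zone, here is a minimal sketch of what a parsed table could look like. The table and column names are just my own illustration; in practice, KIWI's "Create Table" action (covered in the setup steps below) generates the schema for you from the parse script:

CREATE TABLE dbo.VPN_HTTP_Requests (
    MsgDateTime datetime NOT NULL,  -- timestamp of the syslog record
    Appliance   varchar(64),        -- the Netscaler that sent the log
    Context     varchar(255),       -- user@client-ip, e.g. wireless@192.168.1.50
    Destination varchar(255),       -- the host the user accessed
    Payload     varchar(2048),      -- the HTTP verb and URI
    Assigned_IP varchar(64),        -- only populated if you assign IPs instead of NATing
    MsgText     varchar(4000)       -- the complete, unparsed syslog message
)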

And just to show you that I do have the data, the screen print of the SQL query below demonstrates it:

Uh, John…who cares?
Well, most of the time you really shouldn't need to do a lot of tracking of where your users are going, but in some higher-security environments, being able to account for where users have gone can be very important. Say you hosted http://www.veoh.com (a site I hate, but for the purposes of this lab their malware…err…client was installed on the laptop I was testing with) and someone said that the system had been compromised. You could immediately obtain every user ID and IP address that accessed that site, along with the payload they ran against it; you would see the XSS or SQL injection string immediately. You would also note a system that had malware and was trying to get in over one of the SMB "whipping boys" (445, 135-139).
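
Assuming a parsed table along the lines of the sketch above (illustrative names, remember), the veoh.com question becomes a one-liner:

SELECT Context, Payload, MsgDateTime
FROM dbo.VPN_HTTP_Requests
WHERE Destination LIKE '%veoh.com%'
ORDER BY MsgDateTime DESC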

Parsing data vs. just throwing it all into a flat file and waiting for an auditor to ask for it?
As I stated previously, having your data in a relational database gives you a number of advantages beyond pretty tables and, eventually, dashboards; you also open the door to the following:

  • Geospatial analysis of incoming IP addresses (by cross-referencing context with free geospatial IP-to-location data from iptolocation.com or similar sources).
  • An actual count of the number of concurrent users on a system within a block of time, including historical reporting and trending.
  • The number of times a "Deny" policy has been tripped and who tripped it, if you are compartmentalizing your data and want to know who tried to access something they are not allowed to (see the sample query after this list).
  • Your sensitive data is on WikiLeaks and you want to know every user who accessed the resource the data resides on, when, and what ports they used.
  • And lastly, finding out who is going to "\\webserver\c$" on your web server instead of "http://webserver".
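
As a sample of the "Deny" bullet above, counting policy trips per user could look something like this. I am assuming the illustrative table and column names from earlier, with denies landing in their own table as described below:

SELECT Context, Policy, COUNT(*) AS Trips
FROM dbo.VPN_Denied_Events
GROUP BY Context, Policy
ORDER BY Trips DESC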

So what do I log?
Well, I log basically everything, but for VPN I log three different events into two different tables: I log all HTTP-based traffic and normal UDP/TCP-based connections, and I also have a separate table for all of my "DENIED_BY_POLICY" events.

Here is an example of an HTTPREQUEST log:
06/15/2010:11:59:58 ns PPE-0 : SSLVPN HTTPREQUEST 110352 : Context wireless@192.168.1.50 – SessionId: 5- http://www.veoh.com User wireless : Group(s) SCIF-NET USERS : Vserver 192.168.1.100:443 – 06/15/2010:11:59:58 GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270-B8A183C27464&VEOH_GUIDE_AUTH=am1zYXpib3k6MTI3ODAyODkyMTM1NzpyZWdp – –

Here is an example of TCP/UDPFlow statistics:
06/15/2010:12:18:16 ns PPE-0 : SSLVPN UDPFLOWSTAT 111065 : Context wireless@192.168.1.50 – SessionId: 5- User wireless – Client_ip 192.168.1.50 – Nat_ip 192.168.1.85 – Vserver 192.168.1.100:443 – Source 127.100.0.5:53052 – Destination 239.255.255.250:1900 – Start_time “06/15/2010:12:15:32 ” – End_time “06/15/2010:12:18:16 ” – Duration 00:02:44 – Total_bytes_send 1729 – Total_bytes_recv 0 – Access Allowed – Group(s) “SCIF-NET USERS”

Here is an example of a DENIED_BY_POLICY event: (Over HTTP)
06/15/2010:10:17:14 ns PPE-0 : SSLVPN HTTP_RESOURCEACCESS_DENIED 106151 : Context wireless@192.168.1.50 – SessionId: 5- User wireless – Vserver 192.168.1.100:443 – Total_bytes_send 420 – Remote_host pt.veoh.com – Denied_url POST /tracker/update.jsp – Denied_by_policy “Problem-Site” – Group(s) “SCIF-NET USERS”

Let’s talk a little about the “DENIED_BY_POLICY” logs

Here is a scenario: I have a problem website that I do not want any of my users to go to, so I create a policy called "Problem-Site" denying access to the IP of the problem site.

For the log above, I parse the following:
Context: wireless@192.168.1.50
Destination: pt.veoh.com
Policy: Problem-Site
Payload: POST /tracker/update.jsp

I also log non-HTTP denies as well; these appear like the following:
06/14/2010:21:08:03 ns PPE-0 : SSLVPN NONHTTP_RESOURCEACCESS_DENIED 69761 : Context wireless@192.168.1.50 – SessionId: 5- User wireless – Client_ip 192.168.1.50 – Nat_ip “Mapped Ip” – Vserver 192.168.1.100:443 – Source 192.168.1.50:50343 – Destination 10.10.10.30:139 – Total_bytes_send 291 – Total_bytes_recv 0 – Denied_by_policy “TOP-SECRET-DENY” – Group(s) “SCIF-NET USERS”

Here is a scenario: you read a story on wired.com about some kid who tried to give a bunch of sensitive data to a hacker, or even WikiLeaks, and you are concerned about your own data being accessed without authorization. You want to monitor and note all attempts to gain unauthorized access, or, since the events are in SQL Server with Reporting Services, create a dashboard that goes RED when a particular policy is tripped.
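
The Reporting Services dataset behind that kind of dashboard can be as simple as counting recent trips of the policy you care about (again, illustrative table and column names); the report expression then turns the cell red when RecentTrips is greater than zero:

SELECT COUNT(*) AS RecentTrips
FROM dbo.VPN_Denied_Events
WHERE Policy = 'TOP-SECRET-DENY'
AND MsgDateTime > DATEADD(mi, -5, GETDATE())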

Another scenario would be to monitor successes and note the "Context". If most users who access data governed by the "TOP-SECRET-ALLOW" policy come from a specific network ID, say 10.105.28.0/24, and you start seeing access from 10.111.13.68, you can check whether a user ID has been compromised. You can also query how often a user accesses data, and from which IP addresses; if someone's account is compromised, it will likely show up as coming from another IP, since the attacker is probably not sitting at the user's terminal. A sketch of such a query follows.
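
Since my parse stores Context as user@client-ip, profiling which IPs each user normally comes from is just a matter of splitting that column. A sketch against the illustrative table from earlier:

SELECT LEFT(Context, CHARINDEX('@', Context) - 1) AS [User],
       SUBSTRING(Context, CHARINDEX('@', Context) + 1, 64) AS Client_IP,
       COUNT(*) AS Hits
FROM dbo.VPN_HTTP_Requests
WHERE Context LIKE '%@%'  -- guard against rows without a user@ip context
GROUP BY LEFT(Context, CHARINDEX('@', Context) - 1),
         SUBSTRING(Context, CHARINDEX('@', Context) + 1, 64)
ORDER BY [User], Hits DESC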

In the log above I parse the following:
Context: wireless@192.168.1.50
Destination: 10.10.10.30:139 (note the :139 indicating an attempt to use SMB)
Policy: TOP-SECRET-DENY
Payload: (Blank if not HTTP)

Below is an example of a Reporting Services dashboard that refreshes every minute. (Note: I have a particular policy that turns red in this dashboard to alert me of an important breach attempt.)

Time   Appliance      Context                 Destination       Policy           Payload
12:37  192.168.1.75   wireless@192.168.1.50   10.10.10.30:3389  TOP-SECRET-DENY
12:37  192.168.1.75   wireless@192.168.1.50   10.10.10.30:3389  TOP-SECRET-DENY
12:37  192.168.1.75   wireless@192.168.1.50   10.10.10.30:3389  TOP-SECRET-DENY
12:37  192.168.1.75   wireless@192.168.1.50   10.10.10.30:3389  TOP-SECRET-DENY
12:37  192.168.1.75   wireless@192.168.1.50   pt.veoh.com       Problem-Site     POST /tracker/update.jsp
12:37  192.168.1.75   wireless@192.168.1.50   10.10.10.30:139   TOP-SECRET-DENY

 

What You need:

  • You need an existing SQL Server environment, and Reporting Services if you want dashboards (if you have EdgeSight, you should already have this).
  • You need to be able to set up an ODBC connection; remember, if it is a 64-bit server/workstation you need to use the ODBC tool in %SystemRoot%\SysWOW64.
  • You need to be able to set up a database connection in Reporting Services.
  • You need $245 for a full version of KIWI. If you can buy a Netscaler, you can afford a full version of KIWI, and I will cover several solutions that will make this the best $245 you have ever spent.

How to set it up:
Once you browbeat your cheap boss into spending the $245 on KIWI, perform the following steps:

Go to http://www.ctxsupport.com/forums/showthread.php?36-Parsing-Scripts-for-VPN-Data-Mining-on-AGEE and download all of the files. (Follow the instructions in the post)

Create a database called Syslog with a username and password that has DBO privileges, then create an ODBC data source named syslogd on the server hosting KIWI, pointing at the Syslog database.
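
If you would rather script that than click through Management Studio, something like the following works (the login name and password are placeholders, obviously):

CREATE DATABASE Syslog
GO
CREATE LOGIN syslog_svc WITH PASSWORD = 'ChangeMe!123'
GO
USE Syslog
GO
CREATE USER syslog_svc FOR LOGIN syslog_svc
GO
EXEC sp_addrolemember 'db_owner', 'syslog_svc'
GO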

After renaming Netscaler.txt to Netscaler.ini, go to KIWI and import the INI file.

On each rule, go to the "Write to SQL" action and click "Create Table".

On each rule, go to the "Parse Data" action and click "Browse" to load the parsing script that goes with that rule. (Check all checkboxes under "Read and Write".)

Conclusion:
Once this is done you will be able to collect a ton of very useful information, and it beats the hell out of a 90GB ASCII file or writing everything into a single event correlation system without the ability to query specific columns. All of the parsing scripts write the entire log to the msgtext column, so you still have the original log if there are ever any questions. Being able to parse key information into a specific column gives you a considerably higher level of agility when searching for information about a particular user, IP address, destination, or security policy.

If there is a worm that is sending a particular payload over HTTP, you are one query away from finding every infected IP address. If an auditor asks you how many users have accessed a sensitive server, you are a query away from providing that information. I will supplement this post with a video of the entire setup from start to finish on citrix.utipu.com within the next two weeks (hopefully).
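
To make that concrete, the worm scenario is just a LIKE against the parsed payload column (illustrative names again; substitute the worm's known request string for cmd.exe):

SELECT DISTINCT Context
FROM dbo.VPN_HTTP_Requests
WHERE Payload LIKE '%cmd.exe%'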

Also, I tried this in a home-based lab (I cannot use my logs from work), so please, if you have any issues getting it to work, let me know so I can write better instructions. And keep in mind, I have not looked at this with ICAPROXY logs; I am hoping to do that ASAP, so there may be a supplement to this post that includes a different script and maybe a different table for ICAPROXY logs. I am waiting on an enhancement request before I tackle them (they will come across as "SSLVPN", but the log does look different than standard VPN logs).

And most importantly, I am not a developer; I am a poor man's DBA and a marginal scripter at best, so if you can write a better parsing script, please let me know!!

Thanks for reading

John Smith


EdgeSight Under the Hood: Part 2 (Will be moved to Edgesightunderthehood.com)

Okay, so in this blog posting I want to continue covering a few more views in EdgeSight that I like to run ad hoc queries against. Today's view is called vw_es_archive_application_network_performance. This view provides information on network delay, server delay, XenApp server, process name, and the downstream hosts that your XenApp servers communicate with. I have used this view to check delays of executables such as winlogon.exe, for example to check the delay between this process and our domain controllers. I will cover checking delays by process name, XenApp server, and downstream host.

The first part will demonstrate how to find the network and server delay of specific downstream hosts, as well as how to measure the average XenApp server delay. Then, in the second part, I want to answer one of the questions from the first posting.

Downstream Delay:
I actually got to present on EdgeSight during Synergy 2008, and one of the key points that I tried to drive home is how EdgeSight helps you with the never-ending B.S. witch hunts that always seem to occur when someone's application is "running slow on Citrix". I would say that less than 30% of what I actually investigate ends up being an actual XenApp issue. I will go over a few ad hoc queries that will give you the average delay of your downstream hosts, and the average delay experienced by each XenApp server, allowing you to see if you have a specific XenApp box that may be having some issues.

The first ad hoc query has to do with downstream hosts; it returns the downstream host and the network/server delay. I have set this query to filter out any downstream host that does not have at least 100 records and a server delay of at least 300 milliseconds. You can edit or remove the "Having" clause to suit your environment.

select distinct hostname, sum(network_delay_sum)/sum(record_count) as "Network Delay", sum(server_delay_sum)/sum(record_count) as "Server Delay"
from vw_es_archive_application_network_performance
group by hostname
having sum(record_count) > 100
and sum(server_delay_sum)/sum(record_count) > 300
order by sum(server_delay_sum)/sum(record_count) desc

 

In English: “Give me the Network and Server delay of every downstream host that has at least 100 records (packets?) and a server latency of at least 300ms” 

XenApp Server Delay:
It is a good idea to monitor your XenApp server delay; this will tell you if there is a particular XenApp server that is having a layer 1 or layer 2 issue. This is a quick query that will show you the average delay of your XenApp servers.

select distinct machine_name, sum(network_delay_sum)/sum(record_count) as "Network Delay", sum(server_delay_sum)/sum(record_count) as "Server Delay"
from vw_es_archive_application_network_performance
group by machine_name
order by sum(server_delay_sum)/sum(record_count) desc

 

Note: You will also see "EdgeSight for Endpoints" client data in this view as well.

 

Executable Delay:
This query shows the delay associated with individual executables. You may check outlook.exe to see if you have a delay in a downstream Exchange server or, in my case, check winlogon.exe for delays to domain controllers.

select distinct exe_name, sum(network_delay_sum)/sum(record_count) as "Network Delay", sum(server_delay_sum)/sum(record_count) as "Server Delay"
from vw_es_archive_application_network_performance
group by exe_name
order by sum(server_delay_sum)/sum(record_count) desc

Session Statistics:
Last week I got a question about session counts and I wanted to answer it in this post. Here was the question:

 “I’m looking for a custom report showing the application usage (Published Apps, not processes) on a hourly, daily and monthly base and a custom report showing the concurrent sessions on a hourly, daily and monthly base.”  

The view I used for this was vw_ctrx_archive_client_start_perf:

declare @today datetime
declare @app varchar(50)
set @today = convert(varchar,getdate(),111)
set @app = 'Outlook'
select convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00' as "Time", count(distinct sessid)
from vw_ctrx_archive_client_start_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) = @today-1
and published_application like '%'+@app+'%'
group by convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00'
order by convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00'

In English: "Give me the hourly session count for a specific application." Substitute whichever app you want to see for Outlook in the @app variable. Note that this is an hourly report, so the time format is set to 108.

 Daily Application Usage:
In the same view I change the query above just a little to accommodate a query by day.

declare @today datetime
declare @app varchar(50)
set @today = convert(varchar,getdate(),111)
set @app = 'Outlook'
select convert(varchar(10),dateadd(hh,-4,time_stamp), 111) as "Date", count(distinct sessid)
from vw_ctrx_archive_client_start_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
and published_application like '%'+@app+'%'
group by convert(varchar(10),dateadd(hh,-4,time_stamp), 111)
order by convert(varchar(10),dateadd(hh,-4,time_stamp), 111)

 Monthly Application Usage:
Depending on how long you have your retention set (the minimum is 30 days), this query may or may not work for you, but it gives the number of unique sessions per application for the month.

declare @today datetime
declare @app varchar(50)
set @today = convert(varchar,getdate(),111)
set @app = 'Outlook'
select convert(varchar(7),dateadd(hh,-4,time_stamp), 111) as "Date", count(distinct sessid)
from vw_ctrx_archive_client_start_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
and published_application like '%'+@app+'%'
group by convert(varchar(7),dateadd(hh,-4,time_stamp), 111)
order by convert(varchar(7),dateadd(hh,-4,time_stamp), 111)

Application Matrix:
SQL Server Reporting Services will let you create a matrix. These two queries, one daily and one monthly, will let you lay the data out as follows:

Application         Date1   Date2   Date3   Date4   Date5
Outlook             Count1  Count2  Count3  Count4  Count5
Word                Count1  Count2  Count3  Count4  Count5
Oracle Financials   Count1  Count2  Count3  Count4  Count5
Statistical APP     Count1  Count2  Count3  Count4  Count5
Custom APP-A        Count1  Count2  Count3  Count4  Count5

 

This has been the report method that has made my management the happiest, so I use the matrix tool in SSRS as often as possible. Remember, if you have EdgeSight, you have SSRS, and setting up reports is no harder than an Access database.

Here are the queries

 

First The Daily Matrix:

declare @today datetime
set @today = convert(varchar,getdate(),111)
select convert(varchar(10),dateadd(hh,-4,time_stamp), 111) as "Date", published_application, count(distinct sessid)
from vw_ctrx_archive_client_start_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
group by convert(varchar(10),dateadd(hh,-4,time_stamp), 111), published_application
order by convert(varchar(10),dateadd(hh,-4,time_stamp), 111), count(distinct sessid) desc

Then the Monthly Matrix:
declare @today datetime
set @today = convert(varchar,getdate(),111)
select convert(varchar(7),dateadd(hh,-4,time_stamp), 111) as "Date", published_application, count(distinct sessid)
from vw_ctrx_archive_client_start_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
group by convert(varchar(7),dateadd(hh,-4,time_stamp), 111), published_application
order by convert(varchar(7),dateadd(hh,-4,time_stamp), 111), count(distinct sessid) desc

 Concurrent Session Statistics:
A colleague of mine, Alain Assaf, set up a system that gives you this info every five minutes and is almost real time; go to wagthereal.wordpress.com to see it. Keep in mind that EdgeSight data is not real time, so if you set up a private dashboard for it, you may have to wait for it to refresh.

The vw_ctrx_archive_client_start_perf view appears to give us only start times of specific published applications. Perhaps the most used view in any of my reports is vw_ctrx_archive_ica_roundtrip_perf. For this set of queries I will count concurrent sessions; I will also go into ICA delays for clients in my last post on EdgeSight Under the Hood.

I will try to answer the user's question on concurrent sessions with three pretty basic queries for hourly, daily, and monthly usage:

Hourly Users:
declare @today datetime
set @today = convert(varchar,getdate(),111)
select convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00' as "Time", count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) = @today-3
group by convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00'
order by convert(varchar(2),dateadd(hh,-4,time_stamp), 108)+':00'

 

Daily Users:
declare @today datetime
set @today = convert(varchar,getdate(),111)
select convert(varchar(10),dateadd(hh,-4,time_stamp), 111) as "Date", count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
group by convert(varchar(10),dateadd(hh,-4,time_stamp), 111)
order by convert(varchar(10),dateadd(hh,-4,time_stamp), 111)

 Monthly Users:

declare @today datetime
set @today = convert(varchar,getdate(),111)
select convert(varchar(7),dateadd(hh,-4,time_stamp), 111) as "Date", count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp), 111) > @today-30
group by convert(varchar(7),dateadd(hh,-4,time_stamp), 111)
order by convert(varchar(7),dateadd(hh,-4,time_stamp), 111)

 Conclusion:  
For the most part, I have vetted all of these queries, but you may get varying results; if so, check for payload errors, licensing, etc. I would really like to see some better documentation on the data model. Most of these were basically done by running the query and checking it against the EdgeSight canned reports to see if my SWAG about how they did their calculations was correct. All of the queries I ran here checked out and looked to be accurate, but if you are going to present any of these numbers to the brass in your organization, vet them first.

My next post will deal with ICA latency and delay issues for individual users and servers.

Thanks for reading!

John

       

Digital Epidemiology: EdgeSight Under the Hood (Will be moved to EdgesightUndertheHood.com)

Okay, so no flat files, parsing, or KIWI syslogging today. Today I want to talk about EdgeSight 5.x. If any of you have attempted to reverse engineer EdgeSight, you have probably noticed that the tables are a lost cause. All of the key data that you will want to harvest is located in the "views". I want to do a few blog posts on each of my favorite views and how you can pull statistics from them instantly via query analyzer. I will start by saying Citrix has created an outstanding front end delivered via the web interface. I am in no way knocking that interface; there are just times when the canned reports don't do it for you. Until the engineers at Citrix get their hands on a crystal ball, there will always be a use for good old-fashioned ad hoc queries. I am going to go over a few key queries against the vw_ctrx_archive_ica_roundtrip_perf view in your EdgeSight database and show how you can open query analyzer and gather these statistics post haste, or, if you are adept with Reporting Services, set up reports for yourself. I have pitched to the Synergy 2010 group that they let me host a breakout covering how to integrate some of what I do with SQL Server Reporting Services; I think I can cover a lot in a 90-minute session and let engineers take something away that they can use in their own environments. So, as I stated, the view of the day is "vw_ctrx_archive_ica_roundtrip_perf", so open SQL Server Management Studio and log into the SQL Server hosting your database with an account that has "datareader" privileges. If your admin account does not work, your EdgeSight service account will likely suffice, if your organization allows service accounts to be used in that manner.

@Today:
The @Today variable is for the current day. That means that if you want to check between yesterday and the day before, you would change "convert(varchar(10),dateadd(hh,-4,time_stamp),111) > @today-2" to "convert(varchar(10),dateadd(hh,-4,time_stamp),111) between @today-2 and @today-1".
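
For example, taking the sessions-by-server query below and restricting it to yesterday and the day before:

declare @today datetime
set @today = convert(varchar,getdate(),111)
select machine_name, count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp),111) between @today-2 and @today-1
group by machine_name
order by count(distinct [user]) desc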

Find the number of ICA sessions by server by time of day
About this query: 
In this query we declare three variables, two of which you must edit. The @begin and @end variables hold the hours of the day that you want to search. So, if you wanted to know the number of unique users for each server between 8AM and 2PM, you would enter '08' for @begin and '14' for @end.

declare @begin varchar(2)
declare @end varchar(2)
declare @today datetime
set @today = convert(varchar,getdate(),111)
set @begin = '14'
set @end = '23'
select machine_name, count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(2),dateadd(hh,-4,time_stamp),108) between @begin and @end
and convert(varchar(10),dateadd(hh,-4,time_stamp),111) > @today-2
group by machine_name
order by count(distinct [user]) desc
         

Find ICA latency by user by day
About this query:
This query will show you the ICA latency for each user, sorted with the worst latency first. If you wanted to check sessions on a specific server, you would add the following above the "group by" line: and machine_name like '%NetbiosNameOfXenAppServer%'
        

declare @today datetime
set @today = convert(varchar,getdate(),111)
select [user], sum(network_latency_sum)/sum(network_latency_cnt) as "Latency"
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp),111) > @today-1
group by [user]
order by sum(network_latency_sum)/sum(network_latency_cnt) desc

        

ICA Latency by Server: 
About this query:
This query will show you the latency by server for a given day.  This can be handy if you want to keep tabs on server health.  If you note high latency for a particular server for a specific day you may need to look and see if there was a user connection that skewed the results or if all sessions on that server had issues. 
         

declare @today datetime
set @today = convert(varchar,getdate(),111)
select machine_name, sum(network_latency_sum)/sum(network_latency_cnt) as "Latency"
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp),111) > @today-1
group by machine_name
order by sum(network_latency_sum)/sum(network_latency_cnt) desc
         

Find total sessions by server by farm:
About this Query:
If you have more than one farm, you can specify the farm name in this query to get the number of connections per server by farm. Users in very large environments with multiple farms may find it handy to query by farm name.
          

declare @today datetime
set @today = convert(varchar,getdate(),111)
select machine_name, count(distinct [user])
from vw_ctrx_archive_ica_roundtrip_perf
where convert(varchar(10),dateadd(hh,-4,time_stamp),111) = @today-3
and xen_farm_name like '%FarmName%'
group by machine_name
order by count(distinct [user]) desc
       

Conclusion:
There are at least four views that I like to work with directly, and I integrate all of my queries, including the variables, into SQL Server Reporting Services, letting me customize reports for my specific needs. The eventual goal is to provide our operations and support team with a proactive list of users with high latency, so that we can call them and let them know we noticed they were having issues. My next post will cover how to look at problematic downstream hosts that cause you to get a bunch of calls saying it's Citrix's fault!! I apologize for the lack of examples; I am limited in what I can show from my environment. As I stated, I am hoping to show all of this integration, including custom SQL reports, at Synergy 2010.

If there is a specific query that you want, post it as a comment and I will reply with the SQL query that gets you as close as I can.

Thanks for reading!        

John