Category Archives: Netscaler

ICASTART, ICAEND "ICA-LIKE!!!"

In 2008 I had a conversation with Jay Tomlin, asking him if he would put in an enhancement request for ICA logging on the AGEE. Basically, we wanted the ability to see the external IP addresses of our customers coming through the Access Gateway. As you are likely aware, what you get in the logs are the IP addresses bound to the workstation, not the external IP address the user is actually coming through. In the last ten years it has become increasingly rare for an end user to plug their computer directly into the internet; more often they sit behind a Netgear, Cisco/Linksys or Buffalo device doing NAT. This makes reporting on where your users are coming from somewhat challenging.

Somewhere between 9.2 and 9.3 the requested enhancement was added, and it included other very nice metrics as well. The two syslog events I want to talk about are ICASTART and ICAEND.

ICASTART:
The ICASTART event contains some good information in addition to the external IP. Below you see a sample of the ICASTART log.

12/09/2012:14:40:46 GMT ns 0-PPE-0 : SSLVPN ICASTART 540963 0 : Source 192.168.1.98:62362 - Destination 192.168.1.82:2598 - username:domainname mhayes:Xentrifuge - applicationName Desktop - startTime "12/09/2012:14:40:46 GMT" - connectionId 81d1

As you can see, if you are a log monger this is a VERY nice log!! (Few can appreciate this.) With the exception of the credentials, everything is very easy to parse and place into those nice SQL columns I like. If you have Splunk, parsing is even easier, and you don't have to worry about how the columns line up.

ICAEND:
The ICAEND event actually has quite a bit more information, and were it not for the need to report ICA sessions in real time, it would be the only log you need. Below is the ICAEND log.

12/09/2012:14:41:12 GMT ns 0-PPE-0 : SSLVPN ICAEND_CONNSTAT 541032 0 : Source 192.168.1.98:62362 - Destination 192.168.1.82:2598 - username:domainname mhayes:Xentrifuge - startTime "12/09/2012:14:40:46 GMT" - endTime "12/09/2012:14:41:12 GMT" - Duration 00:00:26 - Total_bytes_send 9363 - Total_bytes_recv 587588 - Total_compressedbytes_send 0 - Total_compressedbytes_recv 0 - Compression_ratio_send 0.00% - Compression_ratio_recv 0.00% - connectionId 81d16

Again, another gorgeous log that is very easy to parse and turn into useful information.

Logging the Data:
So, this was going to be my inaugural Splunk blog, but I didn't get off my ass, my eval of Splunk expired, and now I have to wait 30 days to use it again (file that under "phuck"). So today we will be going over logging the data with the standard KIWI/SQL method (basically a poor man's Splunk).

So the way we log the data, if you haven't been doing this already, is to configure the Netscaler to send logs to the KIWI syslog server and use the custom data source within KIWI to configure a SQL logging rule. We then create the table, parse the data with a parsing script and voila, instant business intelligence.
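If you would rather script the table than click KIWI's "Create Table" button, here is a rough T-SQL sketch of what the AGEE_ICA table ends up looking like. The column names mirror the VarCustom01-10 mappings in the parse script later in this post; the column sizes are my guesses, so adjust to taste.

create table dbo.AGEE_ICA (
    MsgDateTime   datetime,        -- the "MsgDateTime" checkbox in the custom DB format
    UserName      varchar(64),     -- VarCustom01
    Application   varchar(128),    -- VarCustom02
    SourceIP      varchar(15),     -- VarCustom03
    DestinationIP varchar(15),     -- VarCustom04
    StartTime     varchar(32),     -- VarCustom05
    EndTime       varchar(32),     -- VarCustom06
    Duration      varchar(16),     -- VarCustom07
    SentBytes     varchar(20),     -- VarCustom08
    RecBytes      varchar(20),     -- VarCustom09
    ConnectionID  varchar(32),     -- VarCustom10
    MsgText       varchar(2048)    -- the raw, unparsed log for forensics
)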

Creating the custom KIWI Rule:

First, create the rule "ICA-START/END" with a descriptive filter configured as you see below.

Next you will optionally configure a Display action but more importantly you will configure the Script that parses the data.

Paste the following text (Below) into a file named Script_Parse_AGEE-ICA.txt and save it in the scripts directory of your KIWI install.

Function Main()

Main = "OK"

Dim MyMsg
Dim UserName
Dim Application
Dim SourceIP
Dim DestinationIP
Dim StartTime
Dim EndTime
Dim Duration
Dim SentBytes
Dim RecBytes
Dim ConnectionID

With Fields

' Initialize every column so one event type cannot inherit stale values from another
UserName = ""
Application = ""
SourceIP = ""
DestinationIP = ""
StartTime = ""
EndTime = ""
Duration = ""
SentBytes = ""
RecBytes = ""
ConnectionID = ""

MyMsg = .VarCleanMessageText

' ICAEND_CONNSTAT carries the full session statistics
If ( Instr( MyMsg, "ICAEND_CONNSTAT" ) ) Then
SrcBeg = Instr( MyMsg, "Source") + 6
SrcEnd = Instr( SrcBeg, MyMsg, ":")
SourceIP = Mid( MyMsg, SrcBeg, SrcEnd - SrcBeg)

DstBeg = Instr( MyMsg, "Destination") + 11
DstEnd = Instr( DstBeg, MyMsg, ":")
DestinationIP = Mid( MyMsg, DstBeg, DstEnd - DstBeg)

UserBeg = Instr( MyMsg, "domainname") + 10
UserEnd = Instr( UserBeg, MyMsg, "-")
UserName = Mid( MyMsg, UserBeg, UserEnd - UserBeg)

StartBeg = Instr( MyMsg, "startTime ") + 11
StartEnd = Instr( StartBeg, MyMsg, " ")
StartTime = Mid( MyMsg, StartBeg, StartEnd - StartBeg)

EndBeg = Instr( MyMsg, "endTime ") + 9
EndEnd = Instr( EndBeg, MyMsg, " ")
EndTime = Mid( MyMsg, EndBeg, EndEnd - EndBeg)

DurBeg = Instr( MyMsg, "Duration ") + 9
DurEnd = Instr( DurBeg, MyMsg, " ")
Duration = Mid( MyMsg, DurBeg, DurEnd - DurBeg)

SentBeg = Instr( MyMsg, "Total_bytes_send ") + 17
SentEnd = Instr( SentBeg, MyMsg, " ")
SentBytes = Mid( MyMsg, SentBeg, SentEnd - SentBeg)

RecBeg = Instr( MyMsg, "Total_bytes_recv ") + 17
RecEnd = Instr( RecBeg, MyMsg, " ")
RecBytes = Mid( MyMsg, RecBeg, RecEnd - RecBeg)

ConBeg = Instr( MyMsg, "connectionId") + 12
ConnectionID = Mid( MyMsg, ConBeg)

' ICAEND has no applicationName field
Application = "NA"

End If

' ICASTART carries the application name but no session statistics
If ( Instr( MyMsg, "ICASTART" ) ) Then
SrcBeg = Instr( MyMsg, "Source") + 6
SrcEnd = Instr( SrcBeg, MyMsg, ":")
SourceIP = Mid( MyMsg, SrcBeg, SrcEnd - SrcBeg)

DstBeg = Instr( MyMsg, "Destination") + 11
DstEnd = Instr( DstBeg, MyMsg, ":")
DestinationIP = Mid( MyMsg, DstBeg, DstEnd - DstBeg)

UserBeg = Instr( MyMsg, "domainname") + 10
UserEnd = Instr( UserBeg, MyMsg, "-")
UserName = Mid( MyMsg, UserBeg, UserEnd - UserBeg)

AppBeg = Instr( MyMsg, "applicationName") + 15
AppEnd = Instr( AppBeg, MyMsg, "-")
Application = Mid( MyMsg, AppBeg, AppEnd - AppBeg)

StartBeg = Instr( MyMsg, "startTime ") + 11
StartEnd = Instr( StartBeg, MyMsg, " ")
StartTime = Mid( MyMsg, StartBeg, StartEnd - StartBeg)

ConBeg = Instr( MyMsg, "connectionId") + 12
ConnectionID = Mid( MyMsg, ConBeg)

' Session statistics do not apply to a start event
EndTime = "NA"
Duration = "NA"
SentBytes = "NA"
RecBytes = "NA"

End If

' Map the parsed values onto KIWI's custom columns
.VarCustom01 = UserName
.VarCustom02 = Application
.VarCustom03 = SourceIP
.VarCustom04 = DestinationIP
.VarCustom05 = StartTime
.VarCustom06 = EndTime
.VarCustom07 = Duration
.VarCustom08 = SentBytes
.VarCustom09 = RecBytes
.VarCustom10 = ConnectionID

End With

End Function

Next you will create the custom DB format exactly as follows:
(IMPORTANT, not shown: make sure you check "MsgDateTime" near the top of this dialog box.)

Then you will create a new "Action" called "Log to SQL", select the custom DB format, name the table AGEE_ICA and click "Create Table". If you have not yet done so, build your connect string by clicking the box with the three periods ("…") at the top.

Then watch for ICASTART and ICAEND instances.

Then look at the data in your SQL Server:

Now you can report in real time on external utilization by the following (a sample query follows the list):

  • Utilization by IP Range
  • Utilization by Domain
  • Utilization by UserID
  • Utilization by time of day
  • Average Session Duration
  • You can tell if someone worked or not ("Yeah, I was on Citrix from 9AM to 5PM")
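For example, here is a sketch of the time-of-day report. It assumes the AGEE_ICA table built by the parse script above, with the MsgDateTime column checked when you created the table:

select convert(varchar(2), msgdatetime, 108) + ':00' as [hour],
       count(distinct username) as users
from syslog.dbo.agee_ica
group by convert(varchar(2), msgdatetime, 108) + ':00'
order by 1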

Most of the queries you can reverse engineer from Edgesight Under the Hood, but if there is a specific query you are after, just email me.

I get the average session duration with the following query:

select
avg(datepart(mi, cast([duration] as datetime)))
from syslog.dbo.agee_ica
where duration <> 'NA'
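One caveat: datepart(mi, …) only averages the minutes portion, so a 01:10:30 session counts as ten minutes. If you want hours and seconds included in the average too, a sketch like this should do it:

select avg(datediff(second, 0, cast([duration] as datetime))) as avg_duration_seconds
from syslog.dbo.agee_ica
where duration <> 'NA'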

I tried to put everything in one table, as you can see from the SQL data columns and the parsing script, but you can split it up into separate tables if you want.

Thanks for reading!

John


Gratuitous Speculation: Cisco looks at Acquiring Netscaler from Citrix

Today iStockAnalyst and Network World speculated that Cisco would acquire the Citrix Networking (formerly ANG) line of products. Since the Netscaler acquisition in 2005, Citrix has tried to brand itself as a networking company AND a virtualization company. I recall talking with my sales manager and hearing her tell me how she needed to try to sell Netscalers to our incumbent networking team where I was working at the time.

I have often referred to the network teams in organizations as the "Cisco Mafia" and explained to her that talking to the network team about anything that was not F5, Juniper or Cisco might not bear a great deal of fruit. I recall several battles just to get my Netscalers implemented because I was "load balancing" on something other than Cisco or F5. I explained to them that what I had was a "Layer 7 switch" and that calling it a load balancer is a misnomer. The Netscaler is so many more things than just a load balancer.

Why this might be a bad thing?
Well, that depends. If this is a true acquisition, meaning Cisco now "owns" Netscaler, I worry about what happens to the innovation afterward. The fact is Cisco has struggled in this space, at least against Citrix and BIG-IP. I think this is due largely to the "networking" mentality and Cisco's inability to innovate beyond layers 3-4. I am NOT down on network engineers, and I have mad respect for their abilities, but I have to point out that the ADC is its own hybrid skill set. Discussions about context switching, XML cross-site scripting protection and URL rewrites are not everyday conversations for the guys running your network. As the ADC has matured, the hybrid skill set needed to support it has also broadened. This has become a bit of a challenge in what seems to be (at least to a grey-haired IT guy watching the next generation come in) a world of "specialists". Can Cisco continue the innovation that exists with the market leaders in this space? If they could, why are they phasing out ACE? Are they even interested in it? If it is true that this will become a $2 billion market, that may be the case. If not, does the Netscaler become another CSS or ACE? The reality is, a lot of companies have the "if you can't beat 'em, buy 'em" mentality, but my worry is what happens afterward. $2 billion may be all the motivation they need.

Why this might be a good thing?
If this is truly an OEM agreement, this could be fantastic for Citrix. I remember when Citrix first started selling Netscalers, and I think one of the dynamics misunderstood by the Citrix brass was that they were sending their sales staff into just another meeting. "These aren't my people," I recall one SE saying. I have fought more than one battle over Netscalers that would not have been necessary had they sported a sleek green Cisco bezel. The fact is, when enterprise networking is discussed, Citrix is, as stated, the "kid doing his own thing" for those of you who grew up watching Sesame Street. They are generally not in the conversation the way Cisco, BIG-IP and Juniper are. Server vendors will always be outsiders to networking groups. Oddly, the UCS seems to be widely accepted by server teams, but for some reason it just isn't the other way around.

Cisco partners and sales engineers can offer a bridge to these networking groups. The biggest challenge is going to be how they sell it. It isn't quite as easy as just putting a Cisco bezel on the Netscaler. You still have a great product in F5, and Cisco sales engineers will need to be able to go toe to toe with the current market share leader in that space. Ultimately, not having a stranger in the room may be just what Citrix needs to seize the lion's share of what is predicted to be a $2 billion market.

Thanks for reading!

John

Doing it Cheap, and Right, with Kiwi Syslog Server, SQL and Netscaler Application Firewall

Last week I noted an interesting blog from the guys at Splunk, who have developed a way to parse and display Application Firewall blocks and place them into a nice dashboard. Splunk has been doing some interesting stuff over the last 12 months or so that Citrix administrators should take note of, especially if they are feeling the pain of real-time monitoring in their Citrix environment. First off, they hired/contracted Brandon Shell and Jason Conger to work with them. I can tell you that over the years I have had my share of monitoring "tools" shoved down my throat, and the majority of them were NETWORKING tools built by NETWORKING companies to support NETWORKING professionals, who then tried to retrofit the product to monitor servers.

The Citrix environment has its own quirks when it comes to monitoring, and having Brandon and Jason on the Splunk team pretty much ensures that they will build the most relevant monitoring tool out there for supporting Citrix enterprises. While this is not meant to be a glowing endorsement of Splunk, it is an endorsement of the two professionals they have hired to build out their Citrix vision.

This article covers how I am doing SOME of what Splunk is doing at a fraction of the cost (almost free) of what you would spend on monitoring products, including Splunk. In the last few years I have posted about collecting and logging Netscaler syslogs to SQL Server for the purpose of dashboarding VPN utilization and Endpoint Analysis scan results, as well as logging PIX logs to SQL Server via KIWI. In this post, I will show you some of what I have been doing for the last few years with my App Firewall logs by sending them through KIWI and writing them to a SQL Server.

Setting up KIWI:

  1. Set up a Filter for KIWI to catch the APP Firewall Logs:


2. Use this Parsing Script

Function Main()
Main = "OK"

Dim MyMsg
Dim Offense
Dim Action
Dim Source

With Fields

Offense = ""
Action = ""
Source = ""

MyMsg = .VarCleanMessageText

' The violation name immediately follows the APPFW tag
If ( Instr( MyMsg, "APPFW" ) ) Then
OffenseBeg = Instr( MyMsg, "APPFW") + 6
OffenseEnd = Instr( OffenseBeg, MyMsg, " ")
Offense = Mid( MyMsg, OffenseBeg, OffenseEnd - OffenseBeg)
End If

If ( Instr( MyMsg, "<blocked>" ) ) Then
Action = "BLOCKED"
End If

If ( Instr( MyMsg, "<not blocked>" ) ) Then
Action = "NOT BLOCKED"
End If

If ( Instr( MyMsg, "<transformed>" ) ) Then
Action = "TRANSFORMED"
End If

' Grab the source IP, which follows the first ": " in the message
If ( Instr( MyMsg, "." ) ) Then
SourceBeg = Instr( MyMsg, ": ") + 3
SourceEnd = Instr( SourceBeg, MyMsg, " ")
Source = Mid( MyMsg, SourceBeg, SourceEnd - SourceBeg)
End If

.VarCustom01 = Offense
.VarCustom02 = Action
.VarCustom03 = Source
End With

End Function


Set up Custom Data Connectors:

Configure database connection and create the table:

Once you have created the table, you should start to see data come in as the App Firewall blocks IPs. I used the free version of Netsparker to generate some blocks, ran the following query and got the results below:

While it is not totally visible here, the "MsgText" column includes the entire log. This may be necessary as forensic evidence; some jurisdictions require the entire log, unparsed, for evidence.

So John, why SQL and not just Splunk?
I have heard folks say that Splunk is expensive, and it might be, but in the realm of monitoring tools I believe it is likely less expensive than most others. For me, I needed the data to be portable so that I could cross reference it with different tables. In my case, I usually reference my sources against a geospatial table as well as a malware blacklist. If you are in the DMZ currently, it is not a bad idea to collect INTEL on who is performing recon scans or probes against your systems. Having the data in a SQL Server allows me to set up stored procedures that will alert me if specific metrics are met. Also, a preponderance of malfeasance can be escalated to your INFOSEC team, and you can be much more proactive in blocking hosts. Below is a query I run that references the GEOIP table that I have. I changed my IP address to an address from China to show how you can cross reference the data.
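The exact query depends on how your GEOIP table is laid out. As a sketch, assuming the App Firewall logs land in a table called agee_appfw with the Offense/Action/Source columns from the parse script above, and assuming a geoip table that stores its ranges as bigints plus a dotted-quad-to-bigint helper you have written yourself (fn_IPv4ToBigint here is hypothetical), it would look something like:

select g.country, count(*) as blocked_hits
from syslog.dbo.agee_appfw a
join syslog.dbo.geoip g
  on dbo.fn_IPv4ToBigint(a.Source) between g.ip_from and g.ip_to   -- range lookup on integer IPs
where a.Action = 'BLOCKED'
group by g.country
order by count(*) desc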

You can see where a large number of blocks have come from China (well, not really), and this is something you may want to escalate. Usually, hackers are not dumb enough to try something like this; my experience is that you will need to look for things like a consistent delta between probes, that kind of stuff. At any rate, without portability this would be tough to do with a flat-file database, although I do believe Splunk has an API that could ease some of this.

Conclusion:
Data portability, for me, is the plum here. This goes beyond making a pretty graph and moves into the long-term battle against the OWASP Top Ten, in that you can gather INTEL and position yourself to know which IPs are risky and which IPs just have some bot or malware on them. Ultimately, if you are not a "SQL-phile" or a programming hack like me, this may not be the best option. Folks like Sam Jacobs are a great resource, as he is another guy who is very adept at getting syslogs into SQL. I expect with the additions of Conger and Shell you will see great things come out of Splunk for monitoring Citrix environments. There are a number of new and cool things they are doing that I need to "get hip" to. If you are struggling with your budget and have some basic SQL skills, this may be a great option to get metrics and reporting similar to what you can get with Splunk, for a fraction of the price.

I apologize for the delay in posts; I plan on getting at least one post a month up for both Xen-trifuge and Edgesight Under the Hood.

Take care

John

Project Poindexter:VPN Logs

Total Information Awareness with your Netscaler/AGEE

Harvesting VPN Logs with the Netscaler:
When I first heard about Total Information Awareness I was a little concerned. Like a lot of my current team, I am one of those libertarians who really isn't keen on his personal life being correlated and analyzed by a program overseen by unelected officials. That said, as an individual responsible for the security and integrity of information systems, and as a person whose own personally identifiable information is in the databases of my bank, doctor and employer, I do believe I am entitled to know what is going on, and I would like to think the stewards of my information are also informed of what is going on with my data. For this reason, I decided to start looking into how I could better monitor activity on my Netscaler, and I wanted to provide an accompanying guide to my SCIFNET post/video showing how you can compartmentalize sensitive data using the VPX or a regular MPX-class Netscaler.

Most engineers are fully aware that the Netscaler platform is capable of sending information to a syslog server. This in and of itself is not that significant as many network/Unix based appliances can syslog. What I want to discuss in this post is how to use a very cheap syslog server to set up a fully functional log consolidation system that includes parsing specific records and writing them to a relational database.

I find a certain amount of frustration with today's six-figure event correlation systems. If you can only respond to a breach by doing "Find Next" on a 90GB ASCII file, needless to say, that is not the most agile way to respond and not where you need to be to react to an INFOSEC-related incident. As with Admiral Poindexter's vision, proper analysis of events can be an instrumental tool in the defense of your information systems.

Below is an example of a typical VPN log from your Netscaler/AGEE appliance:
06/15/2010:05:59:38 ns PPE-0 : SSLVPN HTTPREQUEST 94167 : Context wireless@192.168.1.50 - SessionId: 5- http://www.veoh.com User wireless : Group(s) SCIF-NET USERS : Vserver 192.168.1.100:443 - 06/15/2010:05:59:38 GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270-B8A183C27464&VEOH_GUIDE_AUTH=am1zYXpib3k6MTI3ODAyODkyMTM1NzpyZWdp - -

Using KIWI Syslog server’s parsing capability, I will actually parse this data and write it into a SQL Server database to allow for very easy queries and eventually dashboards showing accountability and key data.

I have had engineers ask me how to get things like the client IP address and what was accessed. The parsing script I provide will pull the following from the example log above:

Context: wireless@192.168.1.50
Destination: http://www.veoh.com
Payload: GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270
*I have also included "Assigned_IP" in case any of you assign IP addresses instead of NATing. If you are able to get the destination of where a user was going, the need to account for every IP address may become less important, but some folks insist on not NATing their users. If so, the parse script will grab their IPs as well.

And just to show you that I do have the data you can see in the screen print below of the SQL Query:

Uh, John…who cares?
Well, most of the time you really shouldn't need to do a lot of tracking of where your users are going, but in some higher-security environments being able to account for where users have gone could be very important. Say you hosted http://www.veoh.com (a site I hate, but for the purpose of this lab their malware…err…client was installed on the laptop I was testing with) and someone said the system had been compromised. You could immediately obtain every user ID and IP address that accessed that site and the payload they ran against it. You would see the XSS or SQL injection string immediately. You would also note a system that had malware and was trying to get in over one of the SMB "whipping boys" (445, 135-139).
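As a sketch, assuming your HTTPREQUEST logs are parsed into a table called vpn_http with Context, Destination and Payload columns (your table names will vary depending on how you set up the rules below), that lookup is one query:

select msgdatetime, context, payload
from syslog.dbo.vpn_http
where destination = 'http://www.veoh.com'
order by msgdatetime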

Parsing data vs. just throwing it all into a flat file and waiting for an auditor to ask for it?
As I stated previously, having your data in a relational database gives you a number of advantages, not just pretty tables and eventually dashboards; you also open the door to the following:

  • Geospatial analysis of incoming IP addresses (by cross referencing context with geospatial data from iptolocation.com or other free geospatial IP-to-location sources).
  • An actual count of the number of concurrent users on a system within a block of time, including historical reporting and trending.
  • The number of times a "Deny" policy has been tripped and who tripped it, if you are compartmentalizing your data and want to know who tried to access something they are not allowed to.
  • Your sensitive data is on WikiLeaks and you want to know every user who accessed the resource the data resides on, when, and what ports they used.
  • And lastly, find out who is going "\\webserver\c$" to your web server instead of "http://webserver"

So what do I log?
Well, I log basically everything, but for VPN I log three different events into two different tables: all HTTP-based traffic, normal UDP/TCP-based connections, and a separate table for all of my "DENIED_BY_POLICY" events.

Here is an example of an HTTPREQUEST log:
06/15/2010:11:59:58 ns PPE-0 : SSLVPN HTTPREQUEST 110352 : Context wireless@192.168.1.50 - SessionId: 5- http://www.veoh.com User wireless : Group(s) SCIF-NET USERS : Vserver 192.168.1.100:443 - 06/15/2010:11:59:58 GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270-B8A183C27464&VEOH_GUIDE_AUTH=am1zYXpib3k6MTI3ODAyODkyMTM1NzpyZWdp - -

Here is an example of TCP/UDPFlow statistics:
06/15/2010:12:18:16 ns PPE-0 : SSLVPN UDPFLOWSTAT 111065 : Context wireless@192.168.1.50 - SessionId: 5- User wireless - Client_ip 192.168.1.50 - Nat_ip 192.168.1.85 - Vserver 192.168.1.100:443 - Source 127.100.0.5:53052 - Destination 239.255.255.250:1900 - Start_time "06/15/2010:12:15:32 " - End_time "06/15/2010:12:18:16 " - Duration 00:02:44 - Total_bytes_send 1729 - Total_bytes_recv 0 - Access Allowed - Group(s) "SCIF-NET USERS"

Here is an example of a DENIED_BY_POLICY event: (Over HTTP)
06/15/2010:10:17:14 ns PPE-0 : SSLVPN HTTP_RESOURCEACCESS_DENIED 106151 : Context wireless@192.168.1.50 - SessionId: 5- User wireless - Vserver 192.168.1.100:443 - Total_bytes_send 420 - Remote_host pt.veoh.com - Denied_url POST /tracker/update.jsp - Denied_by_policy "Problem-Site" - Group(s) "SCIF-NET USERS"

Let's talk a little about the "DENIED_BY_POLICY" logs.

Here is a scenario: I have a problem website that I do not want any of my users to go to, so I create a policy called "Problem-Site" denying access to the IP of the problem site.

For the log above, I parse the following:
Context: wireless@192.168.1.50
Destination: pt.veoh.com
Policy: Problem-Site
Payload: POST /tracker/update.jsp

I also log non-HTTP denies as well; these appear like the following:
06/14/2010:21:08:03 ns PPE-0 : SSLVPN NONHTTP_RESOURCEACCESS_DENIED 69761 : Context wireless@192.168.1.50 - SessionId: 5- User wireless - Client_ip 192.168.1.50 - Nat_ip "Mapped Ip" - Vserver 192.168.1.100:443 - Source 192.168.1.50:50343 - Destination 10.10.10.30:139 - Total_bytes_send 291 - Total_bytes_recv 0 - Denied_by_policy "TOP-SECRET-DENY" - Group(s) "SCIF-NET USERS"

Here is a scenario: You read a story on wired.com about some kid who tried to give a bunch of sensitive data to a hacker, or even WikiLeaks, and you are concerned about your own data being accessed without authorization. You want to monitor all attempts to get unauthorized access and note them, or, since they are in SQL Server with Reporting Services, create a dashboard that goes RED when a particular policy is tripped.

Another scenario would be to monitor successes and note the "Context". If most users who access data provided by the "TOP-SECRET-ALLOW" policy come from a specific network ID, say 10.105.28.0/24, and you start seeing access from 10.111.13.68, you can check whether a user ID has been compromised. You can also query how often a user accesses data and from which IP addresses (a sample query follows the parsed fields below). If someone's account is compromised, it will show up as coming from another IP, as it is less likely the attacker is sitting at the user's terminal.

In the log above I parse the following:
Context: wireless@192.168.1.50
Destination: 10.10.10.30:139 (note the :139 indicating an attempt to use SMB)
Policy: TOP-SECRET-DENY
Payload: (Blank if not HTTP)
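A sketch of that "who logs in from where, and how often" check, again assuming a parsed table (vpn_flows is a made-up name; use whatever your flow-statistics table is called):

select context, count(*) as hits
from syslog.dbo.vpn_flows
where context like 'jsmith@%'
group by context
order by hits desc

Since the context embeds user@IP, a compromised account shows up as a new context with an unusual IP and a low hit count.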

Below is an example of a Reporting Services dashboard that refreshes every minute. (Note: I have a particular policy that turns red in this dashboard to alert me of an important breach attempt.)

Time    Appliance     Context                Destination        Policy           Payload
12:37   192.168.1.75  wireless@192.168.1.50  10.10.10.30:3389   TOP-SECRET-DENY
12:37   192.168.1.75  wireless@192.168.1.50  10.10.10.30:3389   TOP-SECRET-DENY
12:37   192.168.1.75  wireless@192.168.1.50  10.10.10.30:3389   TOP-SECRET-DENY
12:37   192.168.1.75  wireless@192.168.1.50  10.10.10.30:3389   TOP-SECRET-DENY
12:37   192.168.1.75  wireless@192.168.1.50  pt.veoh.com        Problem-Site     POST /tracker/update.jsp
12:37   192.168.1.75  wireless@192.168.1.50  10.10.10.30:139    TOP-SECRET-DENY

What You need:

  • You need an incumbent SQL Server environment, and Reporting Services if you want dashboards (if you have Edgesight you should already have this)
  • You need to be able to set up an ODBC connection; remember, if it is a 64-bit server/workstation you need to use the ODBC tool in %Systemroot%\SysWOW64
  • You need to be able to set up a database connection in Reporting Services
  • $245 for a full version of KIWI; if you buy a Netscaler you can afford a full version of KIWI, and I will cover several solutions that will make this the best $245 you have ever spent.

How to set it up:
Once you browbeat your cheap boss into spending the $245 on KIWI, perform the following steps:

Go to http://www.ctxsupport.com/forums/showthread.php?36-Parsing-Scripts-for-VPN-Data-Mining-on-AGEE and download all of the files. (Follow the instructions in the post)

Create a database called Syslog with a username and password that has DBO privileges, and create an ODBC data source named syslogd on the server hosting KIWI, pointing at the syslog database.
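If you would rather script that than click through Management Studio, a rough T-SQL sketch (the kiwi login name and its password are placeholders; pick your own):

create database Syslog
go
create login kiwi with password = 'PickSomethingStrong1!'   -- placeholder credentials
go
use Syslog
go
create user kiwi for login kiwi
exec sp_addrolemember 'db_owner', 'kiwi'   -- KIWI's "Create Table" button needs DBO
go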

After renaming Netscaler.txt to Netscaler.ini, go to KIWI and import the ini file.

On each rule, go to the "Write to SQL" action and click "Create Table".

On each rule, go to the "Parse Data" action and click "Browse" to load the parsing script that goes with each rule. (Check all checkboxes under "Read and Write".)

Conclusion:
Once this is done you will be able to collect a ton of information that is very useful, and it beats the hell out of a 90GB ASCII file or writing everything into a single event correlation system without the ability to query on specific columns. All of the parsing scripts write the entire log to the msgtext column, so you still have the original log if there are ever any questions. Being able to place key information in a specific column will give you a considerably higher level of agility when searching for information about a particular user, IP address, destination or security policy.

If there is a worm that is sending a particular payload over HTTP, you are one query away from finding out every infected IP address. If an auditor asks you how many users have accessed a sensitive server, you are a query away from providing that information. I will supplement this post with a video of the entire setup, from start to finish, on citrix.utipu.com within the next two weeks (hopefully).

Also, I tried this in a home-based lab (I cannot use my logs from work), so please, if you have any issues getting it to work, let me know so I can write better instructions. And keep in mind, I have not looked at this with ICAPROXY logs; I am hoping to do that ASAP, and there may be a supplement to this that includes a different script and maybe a different table for ICAPROXY logs. I am waiting on an enhancement request before I tackle ICAProxy logs (they will come across as "SSLVPN", but the log does look different than standard VPN logs).

And most importantly, I am not a developer; I am a poor man's DBA and a marginal scripter at best. If you can write a better parsing script, please let me know!!

Thanks for reading

John Smith

The Digital SCIF: Compartmentalizing Sensitive Data with Access Gateway Enterprise Edition (SCIFNET)

 

A little over six months ago Citrix released the Netscaler VPX virtual appliance, and I was immediately thrilled with the potential to create my own virtual lab using XenServer and internal Xen networks on the hypervisor for downstream hosts. What I noticed was that I could locate resources inside a hypervisor's black network and make them available externally via a VIP or a secure VPN tunnel. This led me to realize that a resource that is, for all intents and purposes, off the public internal network can live safely on this network and never be exposed to the corporate network, giving administrators another layer with which to further compartmentalize sensitive data. The compartmentalizing of sensitive data made me think of a military/DOD term pronounced "skiff", more properly the Sensitive Compartmented Information Facility, or SCIF. With a SCIF, all access, work and manipulation associated with specific sensitive information occurs within the confines of a specific building. What I am proposing is that you can use an Access Gateway Enterprise Edition to grant access to specific resources following this same model, providing secure access and accountability, and ensuring that the only way to get to that data is via a gauntlet of two-factor authentication, application firewalls and endpoint analysis, prior to a second level of policy-based access to internal resources that are only reached via this secure tunnel.

SCIFNET: (“skiff-net”)

Placing a VPN in front of resources is not necessarily new; while VPNs are most commonly used for remote access, there are instances where an administrator will use a VPN to secure a wireless network or to provide secure access to sensitive information. What I will describe here is the next level, where not only is access restricted, but the AGEE integrates with the existing identity management framework and provides extensive logging and policy-based access, delivering a least-privilege model on a per-resource basis.

Why put my data in a SCIF?

Currently your internal network is protected by a NATed firewall, internal ACLs, etc. More mature networks have already layered their services by network, placing Oracle servers in one network, web servers in another, SQL Servers in still another, and so on. As the security screws get tightened year after year, we find that segmenting our services to particular networks may not be enough. Imagine if a database resided on a server that was completely invisible to the internal network and did not even have a default gateway assigned to it? No MAC address to show up in ARP tables? No ports exposed via a NESSUS/SATAN/SARA scan?

In the "glass-half-empty" world of IT security there are two types of systems: compromised and being-compromised. In 2004, during a particularly heated security discussion, I suggested that the only way we could truly secure our systems was to unplug them from the network. With the SCIFNET solution I am proposing, you create an internal network on your XenServer or ESX server that does not reside on the internal network. This means that all communication occurs on the bus of the hypervisor, which has gigabit-level speeds available on it.

So your SQL Server and web server are living inside a hypervisor with no default gateway and no ability to route to your internal network? Great job…now how do you make them available? Well, in an earlier blog I discussed my time working as a county health inspector. When I inspected a convenience store in a particularly bad neighborhood, the shop owner would open a barred window and ask the customer what they wanted, take the money, and go get the merchandise; the entire transaction occurred outside his store. In this scenario, his exposure and risk were limited, as the person was never allowed to enter the store and potentially rob him or attempt to leave with merchandise he/she did not pay for. SCIFNET works in a similar fashion, whereby the user connects to an Access Gateway that has a leg in both networks; but unlike a door, it is more like a barred window granting access to internal resources. And even better than my shop owner, I will log each access, I will account for how long they used the resource, and I will log all unauthorized access attempts to this resource as well. By inserting a VPX in front of the resource, I am able to provide barred-window access to sensitive resources that includes the highest level of accountability and record keeping.

Barred Window Access:

The Netscaler VPX provides several secure access solutions to ensure anyone entering the secured network passes several forms of authentication, endpoint analysis and application firewall rules. Through each of these, before they even begin to attempt to access internal resources, they are met with a myriad of rules and scans to ensure they are allowed to even attempt access to sensitive data. While I may locate a resource on an internal network on my hypervisor, I can offer it to the end user in a variety of ways, among them via VPN or via AAA authentication to a VIP. So while my web-server/db-server combo may exist on a completely invisible network inside a hypervisor, I am able to deliver it by creating a VIP on the VPX and offering that VIP to users on the internal network. As of version 9.x of the Netscaler, I can add a layer of security by forcing AAA authentication to that VIP. If you need to grant non-HTTP access to a server that has either sensitive documents or a back-end database, you can offer a VPN tunnel into the internal network on the hypervisor. With split tunneling turned off, you can ensure that the client is only able to access internal resources while connected to the VPN and keep any outside connections from getting in.

Authentication:

As with the hardware appliance, the VPX allows for two-factor authentication using smart cards (HSPD-12), SecurID, LDAP (AD/NDS/eDirectory) and local authentication. All AAA logs can be sent to an event correlation engine for parsing and accountability, to ensure that access attempts are accounted for and breach attempts can be reported and acted on immediately (custom solution; email me if you are interested in it). I have tested two-factor authentication with AD credentials and SecurID tokens, and have used smart cards (CAC) in single-authentication mode without any issues.

Endpoint Analysis:

In addition to authenticating users who wish to access sensitive data, you can also set minimum standards for the systems accessing it. Using the VPX, you can ensure that systems accessing the SCIF have adequate virus signatures, host-based firewalls and encryption software. Using endpoint analysis, you can ensure that any system meets a pre-selected set of requirements prior to accessing the systems inside. This will ensure that an infected system, or a system with an outdated virus signature, is not allowed access. You may also want only a select group of systems accessing the SCIF; by putting a watermark in the registry and scanning for that specific watermark, you can further restrict the number of systems that are allowed access, in addition to the number of users.

Application Firewall:

Not everyone purchases this feature; in fact Citrix does not bundle it with the Express edition of the VPX, but you can get a 90-day platinum edition that has it. What the application firewall does is allow your front-end SSL VPN solution to be protected by a layer 4-7 firewall. By enforcing a "START URL" rule, you can ensure that anyone who attempts to access the system by IP is dropped, meaning any worm on the loose, or person looking for port 443 or port 80 over an IP, will not be able to access the authentication page. This same solution provides buffer overflow, SQL injection, cross-site scripting and custom URL filter protection. An individual would need to know the exact URL to connect to before they even get a chance to authenticate and be scanned.

Accessing Sensitive Resources:

 

Okay, you have typed in the correct URL, you have all of the necessary virus updates and watermarks to pass endpoint analysis, and you have passed the two-factor authentication; now you are free to access whatever you want inside the SCIF, correct? No. In fact, you have only entered the building; now the actual compartmentalized access control begins to take shape. While most SSL VPN solutions offer a similar gauntlet at login, once you are in the door you can attempt to get to any IP address thereafter. The second part of this posting has to do with what can be done after you have authenticated, to ensure a user doesn't just wander around the network looking for vulnerable systems. There are three parts to setting this up: Active Directory groups, authorization policies and the resources themselves.

Resources:

Resources are defined by IP address, network ID and port. For example, we have a database server that we want to allow a non-web-based front-end application to connect to. You create an internal network on the XenServer where you want that resource to go, then place the virtual machine on the XenServer and assign it to that network. The resource is accessed via the VPX, which has a leg in both networks and bridges you from your internal network to the resource. Resources are defined to the AGEE, via the authorization policy, as an IP address, network and port. So the SQL Server that I have placed in 10.10.10.0/24 (already configured) with an IP address of 10.10.10.15 will be the resource I grant access to.

Authorization Policies:

This is the hierarchy for setting up access: AD groups are assigned authorization policies, and authorization policies have resources instantiated as rules. Using the resource above, I would create an authorization policy called "Sensitive DB" and assign the network ID, or IP address and port, to that specific policy. You can assign more than one resource to an authorization policy. Once this is done, you can assign the policy to a group, which brings us to the Active Directory integration with the AGEE.

Active Directory Group Extraction:

On the AGEE you will create a group that matches, exactly, the name of the group in Active Directory. This process is LDAP extraction, so the same should work for eDirectory/NDS, iPlanet/SunOne and OpenLDAP. So let's say for the example above we create an AD group called "SensitiveDB". I create that exact same group on the Netscaler, and as long as the user authenticates via Active Directory, the AGEE will check for matching LDAP groups. By assigning an authorization policy to a specific group, you ensure that access control to the sensitive information is still managed by the incumbent identity management framework, and you also ensure that only users in specific groups are given access to sensitive data. The AGEE will act as the doorman, ensuring that no one gets access to any areas they are not supposed to.

Can I add access to resources outside of the SCIF?

Yes. If an outside resource on a different network needed to be made available to you while you were working inside the SCIF, you could accomplish this using the AGEE by setting up a VIP. If you were connected via VPN to the SCIF network (say 10.10.10.0/24) and there was some reference data located on another network, you could create a VIP on the 10.10.10.0/24 network and present the external data to the inside with the same security gauntlet you would use to present VIPs to the internal network. Say you had a group of contractors that you wanted to restrict to a SCIFNET, but they also needed access to a web-based timekeeping application: you could create an internal VIP and present it to the users inside the SCIF without exposing the entire internal network.

Integrating SCIFNET with VDI:

Initially, I wanted a situation similar to a SCIF, where a person walks into a room, accesses a secure terminal, and from there accesses sensitive data on a network. In this manner, I can ensure that the end user is accessing data from what amounts to a glorified dumb terminal. Placing the VDI environment inside the SCIF created some federated services challenges that I have not mastered yet. Namely, you need AD to use XenDesktop, and this meant poking a hole to allow for that AD integration. Also, with endpoint analysis and the "barred window" access offered by the AGEE, I felt the risk was mitigated. With split tunneling off, and only VPN traffic allowed once the user connects to the AGEE, I felt like we would be pretty safe. Also, you can still use VDI, just on your incumbent internal network instead of inside the SCIF. Otherwise, you need to set up a completely new AD infrastructure inside the SCIF. I am not well versed enough with ADFS or some of the Simplified.com solutions to adequately address this in this paper.

Can this be done without using a black network or VM’s:

It is likely more experienced readers have already made the connection and realized that yes, it can be done. For federal government sites, I would recommend putting a Netscaler 9010 with a FIPS module on the network, then setting up an entire switched network that is NOT on the internal network but is bridged by the AGEE software on the Netscaler. You can still deliver "barred window" access to the physical resources, and you do not run the risk of the hypervisor itself becoming compromised. In production, it may be a lot harder to get the VPX-based solution approved by security personnel, but physically segmenting your resources may be easier to get approved; and while I have not seen it in my environment, I am quite sure a similar solution currently exists using either PIX or IOS-based ACLs.

Logging and Accountability:

What I like most about using the AGEE for compartmentalized access is the logging. While a PIX or IOS-based ACL will give you an offending IP, my VPN logs, once parsed and written to SQL, have the user ID in addition to the port, source and destination IP address. This means that I can type the IP address of a resource into my SQL Reporting Services website and get the date, time, external IP, port and username of every single user who has accessed that resource. Additionally, the AGEE logs policy hits whether they are ALLOWED or DENIED. Once parsing is finished, I can check, on an hourly, daily or monthly basis, for users who tripped a "DENIED" policy. Since I already have the username in my logs, I don't have to hunt down who had what IP address. This puts me in a position to be more proactive: if I see a large number of ACCESS DENIED logs, I can go in and immediately kill a user's VPN session post haste. This also provides the opportunity to log access by user ID. The digital epidemiology portion is a whitepaper in itself, but having a user ID tied to each log makes incident response much faster.

Example:

You have a key resource at 10.10.10.21 that must have a blanket "Deny" applied to it and is only available via exclusive "Allows". For this you can create an authorization policy called "TopSecret", create a rule for DESTIP==10.10.10.21 with an action of DENY, bind the policy to your AD group, and set it higher than any other policy. This will ensure that if they attempt to get to that server, they will be denied access. What I like about the AGEE logs is that I get a username and the policy that was violated, as well as the source IP address. Effective parsing of these log files will allow you to use event correlation to find out who has attempted to make unauthorized access.

Example log file from blocked access:

15:16:39 192.168.1.55     01/03/2010:20:15:40 GMT ns PPE-0 : SSLVPN NONHTTP_RESOURCEACCESS_DENIED 1250215 : Context jsmith@192.168.1.100 - SessionId: 15- User jsmith - Client_ip 192.168.1.100 - Nat_ip "Mapped Ip" - Vserver 192.168.1.50:443 - Source 192.168.1.100:13874 - Destination 10.10.10.21:3389 - Total_bytes_send 298 - Total_bytes_recv 0 - Denied_by_policy "TopSecret" - Group(s) "CITGO VPN Testers"

While many segmented networks will have PIX logs that will give you this information, what I like about these logs is that I can parse them into a database and put each item marked red into a column for date/time, action, context, policy so in my database a query would return the following:

Time      Context                Destination        Policy     Action
15:16:39  jsmith@192.168.1.100   10.10.10.21:3389   TopSecret  DENIED

 

In this scenario, I can immediately ask jsmith why he/she is trying to access this system. I have a record of the breach attempt and can even configure KIWI to alert me via email at the exact time the breach occurs.
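And since the denies are parsed into columns, a running count of breach attempts per user and policy is one query away (vpn_denied is a placeholder name for wherever your DENIED_BY_POLICY events land):

select context, denied_by_policy, count(*) as deny_count
from syslog.dbo.vpn_denied
group by context, denied_by_policy
order by deny_count desc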

Likewise, with the AGEE I have a record of the successful attempts as well.

17:13:10     192.168.1.55    01/03/2010:22:12:10 GMT ns PPE-0 : SSLVPN TCPCONNSTAT 1299232 : Context jsmith@192.168.1.100 - SessionId: 16- User jsmith - Client_ip 192.168.1.100 - Nat_ip 10.10.10.4 - Vserver 192.168.1.50:443 - Source 192.168.1.100:36933 - Destination 10.10.10.21:3389 - Start_time "01/03/2010:22:12:10 GMT" - End_time "01/03/2010:22:12:10 GMT" - Duration 00:00:00 - Total_bytes_send 48 - Total_bytes_recv 19 - Total_compressedbytes_send 63 - Total_compressedbytes_recv 39 - Compression_ratio_send 0.00% - Compression_ratio_recv 0.00% - Access Allowed - Group(s) "CITGO VPN Testers"

Note that you do not get a policy name with the Allowed log; however, all denies should include the policy that denied them.

 

Conclusion:
I plan to include some videos on how to accomplish this; it is relatively simple. This is also not a new concept, and networks use IOS-based ACLs to accomplish the same thing, but I believe the AGEE, be it a virtual appliance or physical hardware, provides a much easier solution than an enterprise NAC endeavor. In fact, I have heard some horror stories regarding NAC deployments. In the interim, while NAC continues to mature and organizations ease into their NAC solutions, SCIFNet allows you to reach the same security levels without the taunting specter of an enterprise NAC deployment. Compartmentalize sensitive data and place an AGEE in front of it, and you have all of the benefits of Network Access Control at a fraction of the price and overhead.

To see a video of SCIFNET put to use with a VPX and an internal XenServer network, click here:
http://citrix.utipu.com/app/tip/id/21155/

Thanks for reading

 John

Xen and the art of Digital Epidemiology

In 2003 I started steering my career toward Citrix/VMWare/virtualization, and at the time, aside from being laughed at for running this fledgling product called ESX Server 1.51, most of my environment was Windows based. There were plenty of shrink-wrapped tools to let me consolidate my events, and the only Unix I had to worry about was the Linux kernel on the ESX server. Now my environment has seen a series of new regulatory frameworks (Sarbanes, CISP, and currently FIPS 140-2). What used to be a Secure Gateway with a single Web Interface server and my back-end XenAPP farm now includes a Gartner-leading VPN appliance (Access Gateway Enterprise Edition), load-balanced (GSLB) Web Interface servers, an application firewall and XenApp servers hosted on Linux-based XenServer and VMWare. So now, when I hear "a user called and said their XenAPP session was laggy," where the hell do I begin? How do I get a holistic vision of all of the security, performance and stability issues that could come up in this new environment?

As a security engineer in 2004, I started calling event correlation "digital epidemiology". Epidemiology is defined as "the branch of medicine dealing with the incidence and prevalence of disease in large populations and with detection of the source and cause of epidemics of infectious disease."

I think this same principle can be applied to system errors, computer-based viruses and overall trends. At the root of this is the ability to collate logs from heterogeneous sources into one centralized database. During this series, I hope to go over how to do this without going to your boss and asking for half a million dollars for an event correlation package.

I currently perform the following with a $245 copy of KIWI Syslog Server (integrated with SQL Server Reporting Services):

  • Log all Application Firewall alerts to a SQL Server and present them via an operations dashboard. This includes the violation (SQL injection, XSS, etc.), offending IP and time of day.
  • Pull STA logs and provide a dashboard matrix with the number of users, total number of helpdesk calls, percentage of calls (over 2.5% means we have a problem) and the last ten calls. (Our operations staff can see that "PROTOCOL DRIVER ERROR" and react before we start getting calls.)
  • I am alerted when key VIP personnel are having trouble with their SecurID or AD credentials.
  • I can track the prevalence of any error; I can tell when it started and how often it occurs.
  • My service desk has a tracker application that they can consult when a user cannot connect, telling them if their account is locked out, their key fob is expired or they just fat-fingered their password. This has turned a 20-minute call into a 3-minute call.
  • I have a dashboard that tells me the "QFARM /Load" data for every server, refreshing every 5 minutes; it turns yellow at 7500 and red at 8500, letting us know when a server may be about to waffle.

For this part of the Digital Epidemiology series I will go over parsing and logging STA logs, why it was important to me, and what you can do with them after getting them into a SQL Server.

Abstract:

A few years ago, I was asked, "What is the current number of external vs internal users?" This involved a very long, complicated query against the RMSummaryDatabase that worked okay but was time consuming. One thing we did realize was that every user who accessed our platform externally came through our CAG/AGEE. This meant they were issued a ticket by the STA servers. So we configured logging on the STA servers and realized a few more things: we also got the application that they launched, as well as the IP address of the server they logged into. So now, if a user says they had a bad Citrix experience, we know where they logged in and what applications they used. While Edgesight does most of our user-experience troubleshooting for us, it does not upload in real time, and our STA solution does. We know right then and there.

By integrating this with SQL Server Reporting Services, we have a poor man’s Thomas Koetzing solution where we can search the utilization of certain applications, users and servers.

For this post we will learn how to set up STA Logging, how to use EPILOG from Intersect Alliance to write the data to a KIWI Syslog Server and then we will learn how to parse and write that to a SQL Server and use some of the queries I have included to gain valuable data that can eventually be used in a SQL Server Reporting Services report.

Setting up STA Logging:

Go to %systemroot%\program files\Citrix\system32 and add the following to the ctxsta.config file:

LogLevel=3
MaxLogCount=10
MaxLogSize=55 (Make sure this size is sufficient).

LogDir=W:\Program Files\Citrix\logs\

In the LogDir folder you will note that the log files created will be named sta2009MMDD.log

What exactly is in the logs:
The logs will show up in the following format (we are interested in the items in bold, which the parse script will pipe into a database for us):

INFORMATION 2009/11/22:22:29:32 CSG1305 Request Ticket - Successful. ED0C6898ECA0064389FDD6ABE49A03B9 V4 CGPAddress = 192.168.1.47:2598:localhost:1494 Refreshable = false XData = <?xml version="1.0"?><!--DOCTYPE CtxConnInfoProtocol SYSTEM "CtxConnInfo.dtd"--><CtxConnInfo version="1.0"><ServerAddress>192.168.1.47:1494</ServerAddress><UserName>JSMITH</UserName><UserDomain>cdc</UserDomain><ApplicationName>Outlook 2007</ApplicationName><Protocol>ICA</Protocol></CtxConnInfo> ICAAddress = 192.168.1.47:1494

Okay, so I have logs in a flat file….big deal!

The next step involves integrating them with a free open-source product called "Epilog" by a totally kick-ass company called Intersect Alliance (www.intersectalliance.com). We will configure Epilog to send these flat files to a KIWI syslog server.

So we will go to the Intersect Alliance download site to get Epilog and run through the installation process. Once that is completed, you will want to configure your Epilog agent to "tail-and-send" your STA log files. We will do this by telling it where to get the log file and who to send it to.

After the installation go to START->Programs->Intersect Alliance-> Snare/Epilog for Windows

Under "LOG CONFIGURATION", for STA logs we will use the log type "Generic"; we will type in the location of the log files and tell Epilog to use the format STA20%-*.log.

After configuring the location and type of logs, go to "Network Configuration", type in the IP address of your syslog server and select port 514 (syslog uses UDP 514).

Once done, go to "Latest Events" and see if your syslog data is there.


Section III: KIWI SYSLOG SERVER

I assume that most Citrix engineers have access to a SQL Server, and since Epilog is free, the only thing in this solution that costs money is KIWI Syslog Server, a whopping $245 in fact. Over the years a number of event correlation solutions have come along; in fact I was at one company where we spent over $600K on a solution that had a nice dashboard and logged files to a flat-file database (WTF? Are you kidding me?!). The KIWI syslog server will allow you to set up ten custom database connectors, and that should be plenty for any Citrix administrator who is integrating XenServer, XenAPP/Windows servers, Netscaler/AGEE, CAG 2000 and Application Firewall logs into one centralized database. While you need some intermediate SQL skills, you do not need to be a superstar, and the benefits of digital epidemiology are enormous. My hope is to continue blog posts on how I use this solution, and hopefully you will see benefits beyond looking at your STA logs.

The first thing we need to do is add a rule called "STA-Logs" and filter for strings that will let KIWI know the syslog update is an STA log. We do so by adding two filters. The first one matches on "GenericLog".

The second filter is "<UserName>". These two filters together will match STA syslog messages.


Now that we have created our filters, it's time to perform actions. There are two actions we want to perform: parse the message (pull all of the data that was bolded from the log text above) and write that data to a table in a database. You add actions by right-clicking "Action" and selecting "Add Action".

So our first action is to set up a "Run Script" action. I have named mine "Parse Script".

Here is the script I use to parse the data (thank you, Mark Schill (http://www.cmschill.net/), for showing me how to do this).

The script (this will scrub the raw data into the parts you want; click "Edit Script" and paste):

##############################
Function Main()

Main = "OK"

Dim MyMsg
Dim Status
Dim UserName
Dim Application
Dim ServerIP

With Fields

Status = ""
UserName = ""
Application = ""
ServerIP = ""

MyMsg = .VarCleanMessageText

' A successful ticket request embeds a CtxConnInfo XML blob
If ( Instr( MyMsg, "CtxConnInfo.dtd" ) ) Then

Status = "Successful"

UserBeg = Instr( MyMsg, "<UserName>") + 10
UserEnd = Instr( UserBeg, MyMsg, "<")
UserName = Mid( MyMsg, UserBeg, UserEnd - UserBeg)

AppBeg = Instr( MyMsg, "<ApplicationName>") + 17
AppEnd = Instr( AppBeg, MyMsg, "<")
Application = Mid( MyMsg, AppBeg, AppEnd - AppBeg)

SrvBeg = Instr( MyMsg, "<ServerAddress>") + 15
SrvEnd = Instr( SrvBeg, MyMsg, "</")
ServerIP = Mid( MyMsg, SrvBeg, SrvEnd - SrvBeg)

End If

' Map the parsed values onto KIWI's custom columns
.VarCustom01 = Status
.VarCustom02 = UserName
.VarCustom03 = Application
.VarCustom04 = ServerIP

End With

End Function
##############################

Now that we can parse the data we need to create a table in a database with the appropriate columns.

The next step is to define the custom field format and create the table. Make sure the account in the connect string has DBO privileges on the database. Set up the custom field format with the following fields and ensure that the type is "SQL Database".


As you see below, you will need to set up an ODBC connection for your syslog database and provide a connect string here (yes… in clear text, so make sure you know who can log onto the syslog server). When you are all set, click "Create Table" and then "Apply".
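
For reference, the table that the "Create Table" button generates should look roughly like the sketch below. This is an assumption based on the field format above, so your column names and sizes may differ in your install; msgdatetime is KIWI's standard timestamp field, and the four custom columns map to VarCustom01 through VarCustom04 from the parse script.

CREATE TABLE sta_logs (
    msgdatetime DATETIME,      -- KIWI's standard message timestamp
    status      VARCHAR(20),   -- VarCustom01: "Successful"
    username    VARCHAR(50),   -- VarCustom02: from <UserName>
    application VARCHAR(100),  -- VarCustom03: from <ApplicationName>
    serverip    VARCHAR(30)    -- VarCustom04: from <ServerAddress>
)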


Hopefully, once this is done, your table will start filling up with STA log entries containing the data from the parse script.
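
A quick sanity check: the query below (assuming the sta_logs layout sketched above) pulls the ten most recent entries so you can confirm the parse script is populating the custom columns.

select top 10 msgdatetime, status, username, application, serverip
from sta_logs
order by msgdatetime desc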

I have included some queries that have been very useful to me. You may also want to integrate this data with SQL Server Reporting Services; with that, you can build a poor man's Thomas Koetzing tool.

Helpful SQL queries: (edit the @BEG and @END values)

How many users for each day: (unique users per day)

declare @BEG datetime
declare @END datetime
set @BEG = '2009-11-01'
set @END = '2009-11-30'
select convert(varchar(10), msgdatetime, 111), count(distinct username)
from sta_logs
where msgdatetime between @BEG and @END
group by convert(varchar(10), msgdatetime, 111)
order by convert(varchar(10), msgdatetime, 111)

Top 100 Applications for this month:

declare @BEG datetime
declare @END datetime
set @BEG = '2009-11-01'
set @END = '2009-11-30'
select top 100 [application], count(application)
from sta_logs
where msgdatetime between @BEG and @END
group by application
order by count(application) desc

Usage by the hour: (Unique users for each hour)

declare @BEG datetime
declare @END datetime
set @BEG = '2009-11-01'
set @END = '2009-11-02'
select convert(varchar(2), msgdatetime, 108) + ':00', count(distinct username)
from sta_logs
where msgdatetime between @BEG and @END
group by convert(varchar(2), msgdatetime, 108) + ':00'
order by convert(varchar(2), msgdatetime, 108) + ':00'
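
Sessions per server: (since the parse script also captures the server address, this one shows which XenApp servers broker the most sessions; a sketch that assumes the serverip column from the table layout above)

declare @BEG datetime
declare @END datetime
set @BEG = '2009-11-01'
set @END = '2009-11-30'
select serverip, count(*) as sessions
from sta_logs
where msgdatetime between @BEG and @END
group by serverip
order by count(*) desc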

Electronic Stimulus

According to the Baltimore Sun, President Obama has promised to spend $50 billion over the next five years to coax hospitals, medical centers and the like into offering electronic records. So nurses, occupational therapists and other allied health personnel, as well as doctors, may be carrying something like a Kindle around instead of a clipboard. With this comes an extension of the existing regulatory framework, such as HIPAA and CISP (as no one gets away from a visit to the doctor without putting the plastic down these days), plus future restrictions that will be put in place as a result of pressure from Libertarians and ACLU members.

Ensuring that none of my personally identifiable information is left on someone's screen when they walk away from their PC is a very big concern. As these systems are brought online, the data must be protected not only from hackers, but also from basic behavioral mistakes that could result in someone leaning over a counter and getting my date of birth, Social Security number and credit card number.

While my security experience with HIPAA is limited, I can say that keeping this information hidden from the wrong eyes is a basic function of any security endeavor. How vendors, systems integrators and IT personnel bridge this gap could have a direct correlation to how successful they are in this space. How much of that $50 billion over five years will go to IBM? EDS/HP? Perot Systems? What have you done to show these systems integrators, as well as smaller partners, how your product will help them meet this challenge, and how will you deal with a security screw that seems to only get tightened? The fact is, there are millions and millions of medical documents, and finding out which parts of which documents contain sensitive data is virtually impossible. One solution is to pattern-match the data and block it so that it is not visible to the wrong people. You could do this with a DBA who ran ad hoc queries to match the data and replace it with an "X", but then someone in billing may need that data (keep two copies?), not to mention the staggering cost (Y2K Part 2?). The best way I can think of is to place the data behind a device that can match the patterns and "X" the data out in real time. Enter Netscaler Platinum, which will not only add compression, authentication, caching and business continuity, but will also keep the wrong people from seeing the wrong data. I am not sure when the money will start flowing, but as I understand it, some hospitals have as much as $1.5 million dangled in front of them to meet this challenge.

In this lab, I present how I used the Netscaler Platinum Application Firewall feature to secure personally identifiable data with a rule called "Safe Object", as well as how to deal with a zero-day worm/virus using the "Deny URL" rule. The Safe Object feature, when coupled with the Netscaler policy engine, gives you the flexibility to ensure that certain job types (nurses, doctors, etc.), based on either login (setting authentication on the VIP) or subnet, do not see things like Social Security numbers, credit cards and other sensitive data, while at the same time ensuring that the information remains available to billing and accounts-receivable personnel.

Materials:

For this lab, I used a basic Dell 1950 G6 with a virtualized Netscaler VPX that functioned as a VPN, allowing me to establish a secure tunnel to the sensitive data on a non-wired network residing on that server. An Apache server on the non-wired network, loaded with bogus phone numbers and Social Security numbers, was used as the back-end web server. Again, in a real-world scenario, you could either hypervise your web server and place it on a non-wired network as covered in my "VPX Beyond the Lab" blog, or ACL off your web server so that only the MIP/SNIP of the Netscaler is allowed to access your web content.

See the lab here:
http://citrix.utipu.com/app/tip/id/11733/

Netscaler VPX Beyond the Lab

By John M. Smith

Okay, so the Netscaler appliance has been virtualized, so now what?

On May 18th of this year, Citrix allowed anyone who wanted to get familiar with the virtual Netscaler/AGEE to download a beta version of its Netscaler VPX appliance. This appliance runs on the XenServer 5.0 hypervisor and is a fully functional Netscaler product that includes AGEE, Application Firewall, caching and basically everything that comes with a Platinum Netscaler.

So, aside from a lab environment, what can we do with this new virtualized Netscaler/AGEE? Well, my Platinum 7000 has one CPU and 1GB of memory in it. I installed the VPX appliance on my Dell 1950 G9, which has 16GB of memory and two dual-core CPUs (four processors). I am sure the overhead of the hypervisor will be optimized, and we can make up for it by pushing an additional proc or more memory to the VPX. So, out of the box, the VPX has very similar resources (1024MB RAM, 1 CPU) to my existing 7000. This leaves me with at least 14GB of RAM with the hypervisor running in the background. This is where I believe the virtual Netscaler can be leveraged to provide exceptional security, and it presents an optional "datacenter-in-a-box" solution.

Securing your incumbent web server:

One of my first jobs out of college was as a local health inspector. While inspecting several stores in a really bad neighborhood, I came across a particular convenience store that had been robbed so many times that you had to walk up to the window, tell the owner what you wanted, and he would go get it; you conducted the transaction at the point of entry and were never allowed inside. The public internet is the digital equivalent of the most run-down, crime-ridden neighborhoods in the US. The Netscaler, with its security features, offers similar protection without the considerable delay my store owner imposed. While things have quieted down in the last few years, for a while there it seemed as though IIS had become the internet's whipping boy. Other web servers such as Apache, iPlanet and the like have also had security flaws that left them vulnerable. I recall back in 2003, while working as a security analyst, saying during a debate with the team about web security: "Well, if we want true security, I think the solution is to unplug all of the RJ-45 ports on each of the servers." All joking aside, despite the use of firewalls, ACLs and hardened builds, you are still at the mercy of the exposed port and the code behind it. The Netscaler's Application Firewall is a perfect solution for securing insecure code. The VPX takes it a step further: I can install my web server into the XenServer hypervisor and run it on an internal (non-wired) network. In a way, I can achieve the level of security that was brought up in jest six years ago. What I have done in my lab with the VPX is bind an internal network on the hypervisor and provide a multi-homed VPX that presents the internal web server's content. In this manner, my OS is never even exposed to the network; it functions much like my shopkeeper who refused to let anyone enter the store.

Why is this a good thing? 

Well, first off, your operating system is never on the public internet, and I am not dependent on another device (firewall, IOS-based ACL) to keep other hosts from sending packets to my server. Over the years, Windows admins have become very adept at securing their IIS boxes with a series of registry hacks, disabled services and least-privilege security solutions. While this is great most of the time, on more than one occasion it has served as a "get-out-of-jail-free" card for a vendor who says… "well, I'm sorry, but for this app to work, the IUSR_ account needs full control of the system32 directory AND HKLM" (yes, I was ACTUALLY told that once, with a piece of FINANCIAL software no doubt!). If your organization does not have a scripted build solution such as Altiris, these custom builds become time intensive. While I am not saying that hypervising your web server and placing it behind the VPX will eliminate the need for Windows security, I do like the idea of my OS never being exposed except from behind a Netscaler and the arsenal that comes with it.

Datacenter in a box:

Well, I gave a gig to my Netscaler and four gigs to my web server… I have 10 gigs left… what to do… what to do? While my web server is in a nice safe cocoon, that is all fine and dandy if you just have web content on it. However, that isn't how it works today; today your web server references back-end services such as databases, XML/SOA-based web services, etc.

In my current environment, I have a PIX firewall in front, my web farm, another PIX and then my back-end resources. I think this is a pretty typical setup for most environments and most enterprises. For SMBs and less-regulated shops, it may be feasible to hypervise the entire environment and locate the back-end services that your web server needs on the same internal (non-wired) network within the XenServer. This allows back-end transactions to occur on the bus rather than on a network that may have varying levels of performance. Today's x64 architecture is extremely fast and can easily outperform the throughput of a switched network. So rather than traversing a firewall and moving through a few layer 3 hops and a few layer 2 devices to get to your data, the back-end data is located on the same bus as the web server. This puts all communications on the same piece of hardware. As I said, some larger enterprise security groups will likely not allow this, since it puts everything on one piece of hardware; and while you can segment at layer 3 by using the virtual switch that comes with XenServer, that does not provide the physical segmentation that will sufficiently ease the concerns of internal security groups, even if you add another internal network, hypervise a firewall and place it between the web server and the back-end database servers. Most security teams are, rightfully, paid to see the glass as half empty. That said, on today's Intel-based servers, the amount of RAM and disk space you can put in them makes it hard to ignore the potential to put an entire n-tier application into one piece of hardware where bus I/O is the only bottleneck you have to worry about. Securing this model and getting regulatory buy-off will be the major challenge.

 Working outside of the hardware?

So, let's say you work in a heavily regulated environment where the datacenter-in-a-box solution is not an option. You can still hypervise your Netscaler, use it as a security wrapper, and still allow the web server to consume SOAP-based web services. The same way I create a VIP on my external network and present it to internet-based IPs, I can create a VIP internally and present it to my hypervised web server, allowing it to connect to and consume web services on my corporate network. If my web server on the closed system needs to consume XML services located on another host on the corporate intranet, I can create a VIP on the same VPX and present it to my internal network for consumption by internal virtual machines. Example: I have a VPX presenting an external VIP to end users on IP address 204.222.213.43 that is a portal to a web server on an internal (non-wired) XenServer network located at 10.10.10.43. This same web server consumes XML/SOA-based services on a host located on the corporate network at 192.168.11.37. In this scenario, I would create an internally facing VIP on 10.10.10.33 that presents those web services to my internal host using the VPX as the broker. Internet requests for my web server are handled by the VPX and routed to 10.10.10.43, and internal SOA communications are handled by the VPX and routed from the internal network to 192.168.11.37. The Netscaler can terminate any TCP port on a VIP, so I can present both web services and SQL listeners via an internal VIP to the protected web server. Thus my web server can make all of the necessary external calls while remaining inside the protective bubble provided by XenServer and the VPX.

In addition to the added security that you get with the Netscaler, you also get the ability to do Web 2.0 pushes, content redirection, external authentication and URL rewriting. Also, as most systems are still running 32-bit operating systems, a 16GB server can be hypervised to provide two or three instances of a web server front-ended by the Netscaler. Keep in mind that I can double the resources presently on my Platinum 7000 in my VPX and still have enough resources for a pair of web servers. The ability to readily allocate resources (RAM, disk, etc.) also lets you be much more aggressive with CPU-intensive App Firewall rules. While I keep running home to security, the performance benefits of a Netscaler are just as important, making the VPX an exceptional solution for DMZ-based web solutions as well as for applications that require a stock build. The ability to allow for external authentication would let CRM companies like Envala and Sales Logic authenticate end users against their customers' LDAP/authentication URLs prior to delivering their custom content, without having to procure Netscaler hardware.

How do I manage my VM’s?

What I did for my GSLB lab was set up one VPX to function as an Access Gateway and two additional VPXs to serve as GSLB lab machines. The VPX AGEE is multi-homed, providing VPN access to the internal non-wired network and allowing me to patch, RDP to and administer internal hosts the same way you would connect to your corporate network from home. While I used three VPX machines for this CBT, a single VPX can provide load balancing as well as a VPN connection for administrators to manage internal (non-wired) systems. We have a system in our DMZ that cannot get to a DNS server, and sending it to "update.microsoft.com" is a bit of a hassle every month when we patch. To remedy this, we created a VIP on the same network that terminated at our WSUS server and edited the security policy to use this VIP for updates. The same would work for an internal, non-wired network: deliver a VIP that provides access to the internal WSUS box (or whichever patching strategy you have) and allow your protected hosts to consult this VIP for patches and updates, in the same manner you would allow them to consume external database and web services.

 Other benefits of the VPX:

As the Citrix sales people will likely attest, trying to market a network solution that does not have a "CISCO" bezel on it can be tough. At times, I find myself running into the "Cisco mafia", who will insist on an XSS/HSM solution. And while the technical battle was long over by version 7 of the Netscaler, the marketing battle seems to wage on. While I think Cisco makes great products, and I greatly appreciate the network engineers who lay the very foundation we communicate on, I feel the entire web-based load-balancing technology should never have been in the hands of network engineers in the first place. While my connectivity mates can explain BGP peers, OSPF route convergence and subnetting on a grease board faster than I can use the calculator, explaining a URL rewrite, an HTTP callout or SOAP to them can produce a deer-in-the-headlights look. Additionally, while I am passionate about application delivery, some connectivity folks can be a bit "eh…" when it comes to load balancing. Also, anyone who calls a Netscaler a load balancer does not really know what a Netscaler is.

If the resource cost of the hypervisor can be marginalized (which I believe it can; in fact, booting from BSD and launching from a flash drive, isn't it kind of a VM already?), then you will be putting the same server in the rack that you have always put in the rack. The fact that it is running XenServer with a web farm fronted by an application switch will go largely unnoticed. While I am not for doing anything under the radar, as that will get you fired in some shops, I tend to feel that marketing this as a secure wrapper for web services is a pretty good way to deliver the solution without raising the hackles of the connectivity staff.

Business continuity is another benefit of this solution. In 2003, I pleaded with VMware to make something called "VMware-lite" so I could hypervise all of my servers, and I told them they had accidentally built a great business continuity solution. Currently, if any of my Netscaler hardware fails, I have to send out for a replacement. With the VPX, I don't have to worry about having a new chassis shipped to me in the event of a failure.

Conclusion:

The benefits of the Netscaler speak for themselves, and I think the VPX will go a long way toward helping web/server admins get their feet wet with this technology. I would expect most web server administrators to take to it like a duck to water. If it is possible to bundle this solution as an add-on, we may see a change in how web services and content are delivered. Hypervisors are here to stay, and the ability to secure your web environment behind the VPX could boost XenServer in this space and put enterprise application delivery at the fingertips of server administrators everywhere. The big question is how quickly and to what extent this is adopted, but at a minimum, allowing the VPX to run on incumbent vendor hardware will be a big step in standardized environments. I, for one, like the virtualized Netscaler/CAG and look forward to using it beyond the lab.

GSLB Lab using VPX: (Sorry, utipu is done for good, I upgraded so I could have videos on my blog)

https://xen-trifuge.com/citrix-training/setting-up-gslb-on-netscaler/