Category Archives: Security

ICASTART, ICAEND “ICA-LIKE!!!”

In 2008 I had a conversation with Jay Tomlin, asking him if he would put in an enhancement request for ICA logging on the AGEE. Basically, we wanted the ability to see the external IP addresses of our customers coming through the Access Gateway. As you are likely aware, what you get in the logs are the IP addresses bound to the workstation, not the external IP address the user is coming through. In the last ten years it has become increasingly rare for an end user to actually plug their computer directly into the internet; more often they are NATed behind a Netgear, Cisco/Linksys, or Buffalo router. This makes reporting on where your users are coming from somewhat challenging.

Somewhere between 9.2 and 9.3 the requested enhancement was added and it included other very nice metrics as well. The two syslog events I want to talk about are ICASTART and ICAEND.

ICASTART:
The ICASTART event contains some good information in addition to the external IP. Below you see a sample of the ICASTART log.

12/09/2012:14:40:46 GMT ns 0-PPE-0 : SSLVPN ICASTART 540963 0 : Source 192.168.1.98:62362 - Destination 192.168.1.82:2598 - username:domainname mhayes:Xentrifuge - applicationName Desktop - startTime "12/09/2012:14:40:46 GMT" - connectionId 81d1

As you can see, if you are a log monger, this is a VERY nice log!! (Few can appreciate this.) With the exception of the credentials, everything is very easy to parse and place into those nice SQL columns I like. If you have Splunk, parsing is even easier and you don’t have to worry about how the columns line up.

ICAEND:
The ICAEND event actually has quite a bit more information; were it not for the need to report ICA sessions in real time, it would be the only log you need. Below is the ICAEND log.

12/09/2012:14:41:12 GMT ns 0-PPE-0 : SSLVPN ICAEND_CONNSTAT 541032 0 : Source 192.168.1.98:62362 - Destination 192.168.1.82:2598 - username:domainname mhayes:Xentrifuge - startTime "12/09/2012:14:40:46 GMT" - endTime "12/09/2012:14:41:12 GMT" - Duration 00:00:26 - Total_bytes_send 9363 - Total_bytes_recv 587588 - Total_compressedbytes_send 0 - Total_compressedbytes_recv 0 - Compression_ratio_send 0.00% - Compression_ratio_recv 0.00% - connectionId 81d16

Again, another gorgeous log that is very easy to parse and turn into useful information.

Logging the Data:
So, this was going to be my inaugural Splunk blog, but I didn’t get off my ass, so my eval of Splunk expired and I have to wait 30 days to use it again (file that under “phuck”). So today we will be going over logging the data with the standard KIWI/SQL method (basically a poor man’s Splunk).

The way we log the data, if you haven’t been doing this already, is to configure the Netscaler to send logs to the KIWI syslog server and use the custom data source within KIWI to configure a SQL logging rule. We then create the table, parse the data with a parsing script and voila, instant business intelligence.

Creating the custom KIWI Rule:

First, create the rule “ICA-START/END” with a descriptive filter configured as you see below.

Next you will optionally configure a Display action but more importantly you will configure the Script that parses the data.

Paste the following text into a file named Script_Parse_AGEE-ICA.txt and save it in the scripts directory of your KIWI install.

Function Main()

Main = "OK"

Dim MyMsg
Dim UserName
Dim Application
Dim SourceIP
Dim DestinationIP
Dim StartTime
Dim EndTime
Dim Duration
Dim SentBytes
Dim RecBytes
Dim ConnectionID

With Fields

UserName = ""
Application = ""
SourceIP = ""
DestinationIP = ""
StartTime = ""
EndTime = ""
Duration = ""
SentBytes = ""
RecBytes = ""
ConnectionID = ""

MyMsg = .VarCleanMessageText

' ICAEND_CONNSTAT carries the full set of session metrics
If ( Instr( MyMsg, "ICAEND_CONNSTAT" ) ) Then
SrcBeg = Instr( MyMsg, "Source") + 6
SrcEnd = Instr( SrcBeg, MyMsg, ":")
SourceIP = Mid( MyMsg, SrcBeg, SrcEnd - SrcBeg)

DstBeg = Instr( MyMsg, "Destination") + 11
DstEnd = Instr( DstBeg, MyMsg, ":")
DestinationIP = Mid( MyMsg, DstBeg, DstEnd - DstBeg)

UserBeg = Instr( MyMsg, "domainname") + 10
UserEnd = Instr( UserBeg, MyMsg, "-")
UserName = Mid( MyMsg, UserBeg, UserEnd - UserBeg)

StartBeg = Instr( MyMsg, "startTime ") + 11
StartEnd = Instr( StartBeg, MyMsg, " ")
StartTime = Mid( MyMsg, StartBeg, StartEnd - StartBeg)

EndBeg = Instr( MyMsg, "endTime ") + 9
EndEnd = Instr( EndBeg, MyMsg, " ")
EndTime = Mid( MyMsg, EndBeg, EndEnd - EndBeg)

DurBeg = Instr( MyMsg, "Duration ") + 9
DurEnd = Instr( DurBeg, MyMsg, " ")
Duration = Mid( MyMsg, DurBeg, DurEnd - DurBeg)

SentBeg = Instr( MyMsg, "Total_bytes_send ") + 17
SentEnd = Instr( SentBeg, MyMsg, " ")
SentBytes = Mid( MyMsg, SentBeg, SentEnd - SentBeg)

RecBeg = Instr( MyMsg, "Total_bytes_recv ") + 17
RecEnd = Instr( RecBeg, MyMsg, " ")
RecBytes = Mid( MyMsg, RecBeg, RecEnd - RecBeg)

ConBeg = Instr( MyMsg, "connectionId") + 12
ConnectionID = Mid( MyMsg, ConBeg)

' ICAEND has no applicationName field
Application = "NA"

End If

' ICASTART carries the application name but no end-of-session metrics
If ( Instr( MyMsg, "ICASTART" ) ) Then
SrcBeg = Instr( MyMsg, "Source") + 6
SrcEnd = Instr( SrcBeg, MyMsg, ":")
SourceIP = Mid( MyMsg, SrcBeg, SrcEnd - SrcBeg)

DstBeg = Instr( MyMsg, "Destination") + 11
DstEnd = Instr( DstBeg, MyMsg, ":")
DestinationIP = Mid( MyMsg, DstBeg, DstEnd - DstBeg)

UserBeg = Instr( MyMsg, "domainname") + 10
UserEnd = Instr( UserBeg, MyMsg, "-")
UserName = Mid( MyMsg, UserBeg, UserEnd - UserBeg)

AppBeg = Instr( MyMsg, "applicationName") + 15
AppEnd = Instr( AppBeg, MyMsg, "-")
Application = Mid( MyMsg, AppBeg, AppEnd - AppBeg)

StartBeg = Instr( MyMsg, "startTime ") + 11
StartEnd = Instr( StartBeg, MyMsg, " ")
StartTime = Mid( MyMsg, StartBeg, StartEnd - StartBeg)

ConBeg = Instr( MyMsg, "connectionId") + 12
ConnectionID = Mid( MyMsg, ConBeg)

EndTime = "NA"
Duration = "NA"
SentBytes = "NA"
RecBytes = "NA"

End If

' Map the parsed fields to KIWI's custom columns
.VarCustom01 = UserName
.VarCustom02 = Application
.VarCustom03 = SourceIP
.VarCustom04 = DestinationIP
.VarCustom05 = StartTime
.VarCustom06 = EndTime
.VarCustom07 = Duration
.VarCustom08 = SentBytes
.VarCustom09 = RecBytes
.VarCustom10 = ConnectionID

End With

End Function

Next you will create the custom DB format exactly as follows:
(IMPORTANT, not shown: make sure you check “MsgDateTime” near the top of this dialog box.)

Then create a new “Action” called “Log to SQL”, select the custom DB format, name the table AGEE_ICA and click “Create Table”. If you have not yet built your connect string, do so by clicking the box with the three periods (“…”) at the top.

Then watch for ICASTART and ICAEND instances.

Then look at the data in your SQL Server:

Now you can report in real-time on external utilization by the following:

  • Utilization by IP Range
  • Utilization by Domain
  • Utilization by UserID
  • Utilization by time of day
  • Average Session Duration
  • You can tell if someone worked or not (“Yeah, I was on Citrix from 9AM to 5PM”)

Most of the queries you can reverse engineer from Edgesight Under the hood but if there is a specific query you are after just email me.

I get the average session duration with the following query:

select
avg(datepart(mi, cast([duration] as datetime)))
from syslog.dbo.agee_ica
where duration <> 'NA'

I tried to put everything in one table, as you can see from the SQL data columns and the parsing script, but you can split it up into separate tables if you want.

Thanks for reading!

John


Project Poindexter:VPN Logs

Total Information Awareness with your Netscaler/AGEE

Harvesting VPN Logs with the Netscaler:
When I first heard about Total Information Awareness I was a little concerned. Like a lot of my current team, I am one of those libertarians who isn’t keen on his personal life being correlated and analyzed by a program overseen by unelected officials. That said, as an individual responsible for the security and integrity of information systems, and as a person whose own personally identifiable information sits in the databases of my bank, doctor and employer, I do believe I am entitled to know what is going on, and I would like to think the stewards of my information are likewise informed about what is going on with my data. For this reason, I decided to start looking into how I could better monitor activity on my Netscaler, and I wanted to provide a guide to accompany my SCIFNET post/video showing how you can compartmentalize sensitive data using the VPX or a regular MPX-class Netscaler.

Most engineers are fully aware that the Netscaler platform is capable of sending information to a syslog server. This in and of itself is not that significant, as many network and Unix-based appliances can syslog. What I want to discuss in this post is how to use a very cheap syslog server to set up a fully functional log consolidation system that parses specific records and writes them to a relational database.

I find a certain amount of frustration with today’s six-figure event correlation systems; if you can only respond to a breach by doing “Find Next” on a 90GB ASCII file, needless to say, that is not the most agile way to react to an INFOSEC-related incident. As with Admiral Poindexter’s vision, proper analysis of events can be an instrumental tool in the defense of your information systems.

Below is an example of a typical VPN log from your Netscaler/AGEE appliance:
06/15/2010:05:59:38 ns PPE-0 : SSLVPN HTTPREQUEST 94167 : Context wireless@192.168.1.50 - SessionId: 5- http://www.veoh.com User wireless : Group(s) SCIF-NET USERS : Vserver 192.168.1.100:443 - 06/15/2010:05:59:38 GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270-B8A183C27464&VEOH_GUIDE_AUTH=am1zYXpib3k6MTI3ODAyODkyMTM1NzpyZWdp - -

Using KIWI Syslog server’s parsing capability, I will actually parse this data and write it into a SQL Server database to allow for very easy queries and eventually dashboards showing accountability and key data.

I have had engineers ask me how to get things like the client IP address and what the user accessed. I will provide a parsing script that pulls the following from the example above:

Context: wireless@192.168.1.50
Destination: http://www.veoh.com
Payload: GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270
*I have also included “Assigned_IP” in case any of you assign IP addresses instead of NATing. If you are able to get the destination a user was going to, the need to account for every IP address becomes less important, but some folks insist on not NATing their users. If so, the parse script will grab their IPs as well.

And just to show you that I do have the data, you can see the SQL query in the screen print below:

Uh, John…who cares?
Well, most of the time you really shouldn’t need to do a lot of tracking of where your users are going, but in some higher security environments being able to account for where users have gone can be very important. Say you hosted http://www.veoh.com (a site I hate, but for the purposes of this lab their malware…err…client was installed on the laptop I was testing with) and someone said the system had been compromised. You could immediately obtain every user ID and IP address that accessed that site and the payload they ran against it. You would see the XSS or SQL injection string immediately. You would also note a system that had malware and was trying to get in over one of the SMB “whipping boys” (445, 135-139).

Parsing data vs. just throwing it all into a flat file and waiting for an auditor to ask for it?
As I stated previously, having your data in a relational database gives you a number of advantages, and not just pretty tables and eventually dashboards; you also open the door to the following:

  • Geospatial analysis of incoming IP addresses (by cross-referencing context with geospatial data from iptolocation.com or other free geospatial IP-to-location sources).
  • An actual count of the number of concurrent users on a system within a block of time including historical reporting and trending.
  • The number of times a “Deny” policy has been tripped and who tripped it, useful if you are compartmentalizing your data and want to know who tried to access something they are not allowed to.
  • Your sensitive data turns up on WikiLeaks and you want to know every user who accessed the resource the data resides on, when, and what ports they used.
  • And lastly, find out who is going “\\webserver\c$” to your web server instead of “http://webserver”

So what do I log?
Well, I log basically everything, but for VPN I log three different events into two different tables: I log all HTTP-based traffic and normal UDP/TCP-based connections, and I also have a separate table for all of my “DENIED_BY_POLICY” events.

Here is an example of an HTTPREQUEST log:
06/15/2010:11:59:58 ns PPE-0 : SSLVPN HTTPREQUEST 110352 : Context wireless@192.168.1.50 - SessionId: 5- http://www.veoh.com User wireless : Group(s) SCIF-NET USERS : Vserver 192.168.1.100:443 - 06/15/2010:11:59:58 GET /service/getUpdate.xml?clientGUID=01BACADF-CE85-48CD-8270-B8A183C27464&VEOH_GUIDE_AUTH=am1zYXpib3k6MTI3ODAyODkyMTM1NzpyZWdp - -

Here is an example of TCP/UDPFlow statistics:
06/15/2010:12:18:16 ns PPE-0 : SSLVPN UDPFLOWSTAT 111065 : Context wireless@192.168.1.50 - SessionId: 5- User wireless - Client_ip 192.168.1.50 - Nat_ip 192.168.1.85 - Vserver 192.168.1.100:443 - Source 127.100.0.5:53052 - Destination 239.255.255.250:1900 - Start_time "06/15/2010:12:15:32 " - End_time "06/15/2010:12:18:16 " - Duration 00:02:44 - Total_bytes_send 1729 - Total_bytes_recv 0 - Access Allowed - Group(s) "SCIF-NET USERS"

Here is an example of a DENIED_BY_POLICY event: (Over HTTP)
06/15/2010:10:17:14 ns PPE-0 : SSLVPN HTTP_RESOURCEACCESS_DENIED 106151 : Context wireless@192.168.1.50 - SessionId: 5- User wireless - Vserver 192.168.1.100:443 - Total_bytes_send 420 - Remote_host pt.veoh.com - Denied_url POST /tracker/update.jsp - Denied_by_policy "Problem-Site" - Group(s) "SCIF-NET USERS"

Let’s talk a little about the “DENIED_BY_POLICY” logs

Here is a Scenario: I have a problem website that I do not want any of my users to go to so I create a policy called “Problem-Site” denying access to the IP of the problem site.

For the log above, I parse the following:
Context: wireless@192.168.1.50
Destination: pt.veoh.com
Policy: Problem-Site
Payload: POST /tracker/update.jsp

I also log non-HTTP denies; these appear like the following:
06/14/2010:21:08:03 ns PPE-0 : SSLVPN NONHTTP_RESOURCEACCESS_DENIED 69761 : Context wireless@192.168.1.50 - SessionId: 5- User wireless - Client_ip 192.168.1.50 - Nat_ip "Mapped Ip" - Vserver 192.168.1.100:443 - Source 192.168.1.50:50343 - Destination 10.10.10.30:139 - Total_bytes_send 291 - Total_bytes_recv 0 - Denied_by_policy "TOP-SECRET-DENY" - Group(s) "SCIF-NET USERS"

Here is a scenario: You read a story on wired.com about some kid who tried to give a bunch of sensitive data to a hacker, or even WikiLeaks, and you are concerned about your own data being accessed without authorization. You want to monitor all attempts to get unauthorized access and note them, or, since they are in SQL Server with Reporting Services, create a dashboard that goes RED when a particular policy is tripped.

Another scenario would be to monitor successes and note the “Context”. If most users who access data governed by the “TOP-SECRET-ALLOW” policy come from a specific network ID, say 10.105.28.0/24, and you start seeing access from 10.111.13.68, then you can check whether a user ID has been compromised. You can also query how often a user accesses data, and from which IP addresses. If someone’s account is compromised, it will likely show up as coming from another IP, as it is less likely the attacker is sitting at the user’s terminal.

In the log above I parse the following:
Context: wireless@192.168.1.50
Destination: 10.10.10.30:139 (note the :139 indicating an attempt to use SMB)
Policy: TOP-SECRET-DENY
Payload: (Blank if not HTTP)

Below is an example of a Reporting Services dashboard that refreshes every minute. (Note, I have a particular policy that turns red in this dashboard to alert me of an important breach attempt.)

Time    Appliance      Context                  Destination        Policy             Payload
12:37   192.168.1.75   wireless@192.168.1.50    10.10.10.30:3389   TOP-SECRET-DENY
12:37   192.168.1.75   wireless@192.168.1.50    10.10.10.30:3389   TOP-SECRET-DENY
12:37   192.168.1.75   wireless@192.168.1.50    10.10.10.30:3389   TOP-SECRET-DENY
12:37   192.168.1.75   wireless@192.168.1.50    10.10.10.30:3389   TOP-SECRET-DENY
12:37   192.168.1.75   wireless@192.168.1.50    pt.veoh.com        Problem-Site       POST /tracker/update.jsp
12:37   192.168.1.75   wireless@192.168.1.50    10.10.10.30:139    TOP-SECRET-DENY

 

What You need:

  • You need an incumbent SQL Server environment, and you need Reporting Services if you want dashboards (if you have Edgesight you should already have this)
  • You need to be able to set up an ODBC Connection, remember if it is a 64-bit server/workstation you need to use the ODBC tool in %Systemroot%\sysWOW64
  • You need to be able to set up a database connection in Reporting Services
  • $245 for a full version of KIWI; if you can buy a Netscaler you can afford a full version of KIWI, and I will cover several solutions that will make this the best $245 you have ever spent.

How to set it up:
Once you browbeat your cheap boss into spending the $245 on KIWI, you perform the following steps:

Go to http://www.ctxsupport.com/forums/showthread.php?36-Parsing-Scripts-for-VPN-Data-Mining-on-AGEE and download all of the files. (Follow the instructions in the post)

Create a database called Syslog and a SQL account with DBO privileges, then create an ODBC data source named syslogd on the server hosting KIWI, pointing at the Syslog database.

After renaming Netscaler.txt to Netscaler.ini go to KIWI and import the ini file.

On each rule, go to the “Write to SQL” Action and click “Create Table”

On each rule, go to the “Parse Data” Action and click “Browse” to upload the parsing script that goes with each rule. (Check all checkboxes under “Read and Write”.)

Conclusion:
Once this is done you will be able to collect a ton of very useful information, and it beats the hell out of a 90GB ASCII file, or writing everything into a single event correlation system without the ability to query on specific columns. All of the parsing scripts write the entire log to the msgtext column, so you still have the original log if there are ever any questions. Being able to parse key information into a specific column will give you a considerably higher level of agility when searching for information about a particular user, IP address, destination or security policy.

If there is a worm that is sending a particular payload over HTTP, you are one query away from finding out every infected IP address. If an auditor asks you how many users have accessed a sensitive server, you are a query away from providing that information. I will supplement this post with a video of the entire setup from start to finish on citrix.utipu.com within the next two weeks (hopefully).

Also, I tried this in a home-based lab (I cannot use my logs from work), so please, if you have any issues getting it to work, let me know so I can put together better instructions. And keep in mind, I have not looked at this with ICAPROXY logs; I am hoping to do that ASAP, and there may be a supplement to this post with a different script and maybe a different table for them. I am waiting on an enhancement request before I tackle ICAProxy logs (they will come across as “SSLVPN” but the log does look different from standard VPN logs).

And most importantly, I am not a developer; I am a poor man’s DBA and a marginal scripter at best. If you can write a better parsing script, please let me know!!

Thanks for reading

John Smith

The Digital SCIF: Compartmentalizing Sensitive data with Access Gateway Enterprise Edition (SCIFNET)

 

A little over six months ago Citrix released the Netscaler VPX virtual appliance, and I was immediately thrilled with the potential to create my own virtual lab using XenServer and internal Xen networks on the hypervisor for downstream hosts. What I noticed was that I could locate resources inside a hypervisor’s black network and make them available externally via a VIP or a secure VPN tunnel. This led me to conclude that a resource that is, for all intents and purposes, off the public internal network can live safely there and never be exposed to the corporate network, giving administrators another layer with which to compartmentalize sensitive data. The compartmentalizing of sensitive data made me think of a military/DoD term, the Sensitive Compartmented Information Facility, or SCIF (pronounced “skiff”). With a SCIF, all access, work and manipulation associated with specific sensitive information occurs within the confines of a specific building. What I am proposing is that you can use an Access Gateway Enterprise Edition to grant access to specific resources following this same model, providing secure access and accountability, and ensuring that the only way to get to that data is via a gauntlet of two-factor authentication, application firewalls and endpoint analysis, followed by a second level of policy-based access to internal resources that are only reachable via this secure tunnel.

SCIFNET: (“skiff-net”)

Placing a VPN in front of resources is not necessarily new; while VPNs are most commonly used for remote access, there are instances where an administrator will use a VPN to secure a wireless network or to provide secure access to sensitive information. What I will describe here is the next level, where not only is access restricted, but the AGEE integrates with the existing identity management framework and provides extensive logging and policy-based access, delivering a least-privilege model on a per-resource basis.

Why put my data in a SCIF?

Currently your internal network is protected by a NATed firewall, internal ACLs, etc. More mature networks have already layered their services by network, placing Oracle servers in one network, web servers in another, SQL Servers in still another, and so on. As the security screws get tightened year after year, we find that segmenting our services into particular networks may not be enough. Imagine if a database resided on a server that was completely invisible to the internal network, that did not even have a default gateway assigned to it? No MAC address to show up in ARP tables? No ports exposed via a NESSUS/SATAN/SARA scan?

In the “glass-half-empty” world of IT security there are two types of systems: compromised and being-compromised. In 2004, during a particularly heated security discussion, I suggested that the only way we could truly secure our systems was to unplug them from the network. With the SCIFNET solution I am proposing, you create an internal network on your XenServer or ESX Server that does not reside on the internal network. This means that all communication occurs on the bus of the hypervisor, which has gigabit-level speeds available on it.

So your SQL Server and web server are living inside a hypervisor with no default gateway and no ability to route to your internal network? Great job…now how do you make them available? Well, in an earlier blog I discussed my time working as a county health inspector. When I inspected a convenience store in a particularly bad neighborhood, the shop owner would open a barred window, ask the customer what they wanted, take the money, and go and get the merchandise; the entire transaction occurred outside his store. In this scenario his exposure and risk were limited, as the person was never allowed to enter the store and potentially rob him or attempt to leave with merchandise he/she did not pay for. SCIFNET works in a similar fashion, whereby the user connects to an Access Gateway that has a leg in both networks; but unlike a door, it is more like a barred window granting access to internal resources. And even better than my shop owner, I will log each access, account for how long the resource was used, and log all unauthorized access attempts as well. By inserting a VPX in front of the resource, I am able to provide barred-window access to sensitive resources with the highest level of accountability and record keeping.

Barred Window Access:

The Netscaler VPX provides several secure access solutions to ensure anyone entering the secured network passes several forms of authentication, endpoint analysis and application firewall rules. Through each of these, before they even begin to attempt to access internal resources, they are met with a myriad of rules and scans to ensure they are allowed to even attempt access to sensitive data. While I may locate a resource on an internal network on my hypervisor, I can offer it to the end user in a variety of ways, among them via VPN or via AAA authentication to a VIP. So while my web server/DB server combo may exist on a completely invisible network inside a hypervisor, I am able to deliver it by creating a VIP on the VPX and offering that VIP to users on the internal network. As of version 9.x of the Netscaler, I can add a layer of security by forcing AAA authentication to that VIP. If you need to grant non-HTTP access to a server that has either sensitive documents or a back-end database, you can offer a VPN tunnel into the internal network on the hypervisor. With split tunnel turned off, you can ensure that the client is only able to access internal resources while connected to the VPN and keep any outside connections from getting in.

Authentication:

As with the hardware appliance, the VPX allows for two-factor authentication using smart cards (HSPD-12), SecurID, LDAP (AD/NDS/eDirectory) and local authentication. All AAA logs can be sent to an event correlation engine for parsing and accountability, ensuring that access attempts are accounted for and breach attempts can be reported and acted on immediately (custom solution; email me if you are interested in it). So far I have tested two-factor authentication with AD credentials and SecurID tokens, and have used Smart Cards (CAC) in single-authentication mode without any issues.

Endpoint Analysis:

In addition to authenticating users who wish to access sensitive data, you can also set minimum standards for the systems accessing the data. Using the VPX, you can ensure that systems accessing the SCIF have adequate virus signatures, host-based firewalls and encryption software. Using endpoint analysis, you can ensure that any system meets a pre-selected set of requirements prior to accessing the systems inside. This ensures that an infected system, or a system with an outdated virus signature, is not allowed access. You may also want only a select group of systems accessing the SCIF; by putting a watermark in the registry and scanning for it, you can further restrict the number of systems that are allowed access in addition to the number of users.

Application Firewall:

Not everyone purchases this feature; in fact Citrix does not bundle it with the Express edition of the VPX, but you can get a 90-day platinum edition that has it. What the application firewall does is allow your front-end SSL VPN solution to be protected by a layer 4-7 firewall. By enforcing a “START URL” rule you can ensure that anyone who attempts to access the system by IP is dropped, meaning any worm on the loose, or person looking for port 443 or port 80 over an IP, will not be able to reach the authentication page. The same solution provides buffer overflow, SQL injection, cross-site scripting and custom URL filter protection. An individual would need to know the exact URL to connect to before they even get a chance to authenticate and be scanned.

Accessing Sensitive Resources:

 

Okay, you have typed in the correct URL, you have all of the necessary virus updates and watermarks to pass endpoint analysis, and you have passed the two-factor authentication; now you are free to access whatever you want inside the SCIF, correct? No. In fact you have only entered the building; now the actual compartmentalized access control begins to take shape. While most SSL VPN solutions will offer a similar gauntlet at login, once you are in the door you can attempt to reach any IP address thereafter. The second part of this posting deals with what can be done after authentication to ensure a user doesn’t just wander around the network looking for vulnerable systems. There are three parts to setting this up: Active Directory groups, authorization policies and the resources themselves.

Resources:

Resources are defined by IP address, network ID and port. For example, say we have a database server that we want a non-web-based front-end application to connect to. You create an internal network on the XenServer where you want that resource to go, then place the virtual machine on the XenServer and assign it to that network. The resource is accessed via the VPX, which has a leg in both networks and bridges you from your internal network to the resource. Resources are defined to the AGEE via an authorization policy as an IP address, network and port. So the SQL Server that I have placed in 10.10.10.0/24 (already configured) with an IP address of 10.10.10.15 will be the resource I grant access to.

Authorization Policies:

This is the hierarchy for setting up access: AD groups are assigned authorization policies, and authorization policies have resources instantiated as rules. Using the resource above, I would create an authorization policy called “Sensitive DB” and assign the network ID or IP address and port to that specific policy. You can assign more than one resource to an authorization policy. Once this is done, you can assign the policy to a group, which brings us to the Active Directory integration with the AGEE.

Active Directory Group Extraction:

On the AGEE you will create a group that matches, exactly, the name of the group in Active Directory. This process is LDAP extraction, so the same should work for eDirectory/NDS, iPlanet/SunOne and openLDAP. So let’s say, for the example above, we create an AD group called “SensativeDB”. I create that exact same group on the Netscaler, and as long as the user authenticates via Active Directory, the AGEE will check for matching LDAP groups. By assigning an authorization policy to a specific group, you ensure that access control to the sensitive information is still managed by the incumbent identity management framework, and that only users in specific groups are given access to sensitive data. The AGEE will act as the doorman, ensuring that no one gets access to any areas they are not supposed to.

Can I add access to resources outside of the SCIF?

Yes. If an outside resource on a different network needed to be made available to you while you were working inside the SCIF, you could accomplish this using the AGEE by setting up a VIP. If you were connected via VPN to the SCIF network (say 10.10.10.0/24) and there was some reference data located on another network, you could create a VIP on the 10.10.10.0/24 network and present the external data to the inside with the same security gauntlet that you would use to present VIPs to the internal network. Say you had a group of contractors that you wanted to restrict to a SCIFNET but who also needed access to a web-based timekeeping application: you could create an internal VIP and present it to the users inside the SCIF without exposing the entire internal network.

Integrating SCIFNET with VDI:

Initially, I wanted a situation similar to a physical SCIF, where a person walks into a room, accesses a secure terminal, and from there accesses sensitive data on a network. In this manner, I can ensure that the end user is accessing data from what amounts to a glorified dumb terminal. Placing the VDI environment inside the SCIF created some federated services challenges that I have not mastered yet. Namely, you need AD to use XenDesktop, and this meant poking a hole to allow for that AD integration. Also, with endpoint analysis and the “barred window” access offered by the AGEE, I felt the risk was mitigated. With split tunneling off and only VPN traffic allowed once the user connects to the AGEE, I felt we would be pretty safe. Also, you can still use VDI, just on your incumbent internal network instead of inside the SCIF. Otherwise, you need to set up a completely new AD infrastructure inside the SCIF. I am not well versed enough with ADFS or some of the Simplified.com solutions to adequately address this here.

Can this be done without using a black network or VMs:

It is likely more experienced readers have already made the connection and realized that yes, it can be done. For federal government sites, I would recommend putting a Netscaler 9010 with a FIPS module on the network, then setting up an entire switched network that is NOT on the internal network but is bridged by the AGEE software on the Netscaler. You can still deliver “barred window” access to the physical resources, and you do not have the risk of the hypervisor itself becoming compromised. In production it may be a lot harder to get the VPX-based solution approved by security personnel, but physically segmenting your resources may be easier to get approved, and while I have not seen it in my environment I am quite sure a similar solution currently exists using either PIX or IOS-based ACLs.

Logging and Accountability:

What I like the most about using the AGEE for compartmentalized access is the logging. A PIX or IOS-based ACL will give you only an offending IP. My VPN logs, once parsed and written to SQL, have the user ID in addition to the port, source and destination IP address. This means that I can type the IP address of a resource into my SQL Reporting Services website and get the date, time, external IP, port and username of every single user who has accessed that resource. Additionally, the AGEE logs policy hits whether they are ALLOWED or DENIED. Once finished parsing, I can check for users who trip the “DENIED” policy on an hourly, daily or monthly basis. Since I already have the username in my logs, I don’t have to hunt down who had what IP address. This puts me in a position to be more proactive: if I see a large number of ACCESS DENIED logs, I can go in and immediately kill a user’s VPN session post haste. This also provides the opportunity to log access by user ID. The digital epidemiology portion is a whitepaper in itself, but having a user ID tied to each log makes incident response much faster.

Example:

You have a key resource at 10.10.10.21 that must have a blanket “Deny” applied to it and is only available via exclusive “Allows”. For this you create an authorization policy called “TopSecret” with a rule of DESTIP==10.10.10.21 and an action of DENY. You bind this policy to your AD group and set it higher than any other policy. This will ensure that if users attempt to get to that server, they will be denied access. What I like about the AGEE logs is that I get the username and the policy that was violated, as well as the source IP address. Effective parsing of these log files will allow you to use event correlation to find out who has attempted unauthorized access.

 Example Log file from blocked access:

15:16:39 192.168.1.55     01/03/2010:20:15:40 GMT ns PPE-0 : SSLVPN NONHTTP_RESOURCEACCESS_DENIED 1250215 : Context jsmith@192.168.1.100 - SessionId: 15- User jsmith - Client_ip 192.168.1.100 - Nat_ip "Mapped Ip" - Vserver 192.168.1.50:443 - Source 192.168.1.100:13874 - Destination 10.10.10.21:3389 - Total_bytes_send 298 - Total_bytes_recv 0 - Denied_by_policy "TopSecret" - Group(s) "CITGO VPN Testers"

While many segmented networks will have PIX logs that will give you this information, what I like about these logs is that I can parse them into a database and put each item marked in red into its own column (date/time, action, context, policy), so a query against my database would return the following:

Time       Context                Destination        Policy      Action
15:16:39   jsmith@192.168.1.100   10.10.10.21:3389   TopSecret   DENIED

 

In this scenario, I can immediately ask jsmith why he/she is trying to access this system. I have a record of the breach attempt and can even configure KIWI to alert me via Email at the exact time the breach occurs.

Likewise, with the AGEE I have a record of the successful attempts as well.

17:13:10     192.168.1.55    01/03/2010:22:12:10 GMT ns PPE-0 : SSLVPN TCPCONNSTAT 1299232 : Context jsmith@192.168.1.100 - SessionId: 16- User jsmith - Client_ip 192.168.1.100 - Nat_ip 10.10.10.4 - Vserver 192.168.1.50:443 - Source 192.168.1.100:36933 - Destination 10.10.10.21:3389 - Start_time "01/03/2010:22:12:10 GMT" - End_time "01/03/2010:22:12:10 GMT" - Duration 00:00:00 - Total_bytes_send 48 - Total_bytes_recv 19 - Total_compressedbytes_send 63 - Total_compressedbytes_recv 39 - Compression_ratio_send 0.00% - Compression_ratio_recv 0.00% - Access Allowed - Group(s) "CITGO VPN Testers"

Note that you do not get a policy name with the allowed log; however, all denies should include the policy that denied them.

 

Conclusion:
I plan to include some videos on how to accomplish this; it is relatively simple. This is also not a new concept, and networks use IOS-based ACLs to accomplish the same thing, but I believe the AGEE, be it a virtual appliance or physical hardware, provides a much easier solution than an enterprise NAC endeavor. In fact, I have heard some horror stories regarding NAC deployments. In the interim, while NAC continues to mature and organizations ease into their NAC solutions, SCIFNet allows you to reach the same security levels without the taunting specter of an enterprise NAC deployment. Compartmentalize sensitive data, place an AGEE in front of it, and you have all of the benefits of network access control at a fraction of the price and overhead.

 To see a video of SCIFNET put to use with a VPX and an internal XenServer Network click here:
http://citrix.utipu.com/app/tip/id/21155/

Thanks for reading

 John

Xen and the art of Digital Epidemiology

In 2003 I started steering my career toward Citrix/VMware/virtualization, and at the time, aside from being laughed at for running this fledgling product called ESX Server 1.51, most of my environment was Windows-based. There were plenty of shrink-wrapped tools to let me consolidate my events, and the only Unix I had to worry about was the Linux kernel on the ESX Server. Now my environment has picked up a series of new regulatory frameworks (Sarbanes, CISP, and currently FIPS 140-2). What used to be a Secure Gateway with a single Web Interface server and my back-end XenApp farm now includes a Gartner-leading VPN appliance (Access Gateway Enterprise Edition), load-balanced (GSLB) Web Interface servers, an application firewall, and XenApp servers hosted on Linux-based XenServer and VMware. So now, when I hear “a user called and said their XenApp session was laggy,” where the hell do I begin? How do I get a holistic vision of all of the security, performance and stability issues that could come up in this new environment?

As a security engineer in 2004, I started calling event correlation “digital epidemiology”. Epidemiology is defined as “the branch of medicine dealing with the incidence and prevalence of disease in large populations and with detection of the source and cause of epidemics of infectious disease”.

I think this same principle can be applied to system errors, computer viruses and overall trends. At the root of this is the ability to collate logs from heterogeneous sources into one centralized database. During this series, I hope to go over how to do this without going to your boss and asking for half a million dollars for an event correlation package.

I currently perform the following with a $245 copy of KIWI Syslog Server (integrated with SQL Server Reporting Services):

  • Log all application firewall alerts to a SQL Server and present them via an operations dashboard. This includes the violation (SQL injection, XSS, etc.), offending IP and time of day.
  • Pull STA Logs and provide a dashboard matrix with the number of users, total number of helpdesk calls, percentage of calls (over 2.5% means we have a problem) and the last ten calls (Our operations staff can see that “PROTOCOL DRIVER ERROR” and react before we start getting calls. )
  • I am alerted when key VIP Personnel are having trouble with their SecurID or AD Credentials.
  • I can track the prevalence of any error, I can tell when it started and how often it occurs.
  • My service desk has a tracker application that they can consult when a user cannot connect telling them if their account is locked out, Key fob is expired or if they just fat fingered their password. This has turned a 20 minute call into a 3 minute call.
  • I have a dashboard that tells me the “QFARM /Load” data for every server refreshing every 5 minutes and it turns Yellow at 7500 and red at 8500 letting us know when a server may be about to waffle.

For this part of the Digital Epidemiologist series I will go over parsing and logging STA logs, why it was important to me, and what you can do with them after getting them into a SQL Server.

Abstract:

A few years ago, I was asked, “What is the current number of external vs. internal users?” This involved a very long, complicated query against the RMSummaryDatabase that worked okay but was time consuming. One thing we did realize was that every user who accessed our platform externally came through our CAG/AGEE. This meant they were issued a ticket by the STA servers. So we configured logging on the STA servers and realized a few more things: we also got the application that they launched, as well as the IP address of the server they logged into. So now, if a user says they had a bad Citrix experience, we know where they logged in and what applications they used. While Edgesight does most of our user-experience troubleshooting for us, it does not upload in real time, and our STA solution does. We know right then and there.

By integrating this with SQL Server Reporting Services, we have a poor man’s Thomas Koetzing solution where we can search the utilization of certain applications, users and servers.

For this post we will learn how to set up STA logging, how to use Epilog from Intersect Alliance to write the data to a KIWI syslog server, and then how to parse that data, write it to a SQL Server, and use some of the queries I have included to gain valuable data that can eventually feed a SQL Server Reporting Services report.

Setting up STA Logging:

Go to %ProgramFiles%\Citrix\system32 and add the following to the ctxsta.config file:

LogLevel=3
MaxLogCount=10
MaxLogSize=55 (Make sure this size is sufficient).

LogDir=W:\Program Files\Citrix\logs\

In the LogDir folder you will note that the log files created will be named sta2009MMDD.log

What exactly is in the logs:
The logs will show up in the following format (we are interested in the items in bold, which the parse script will pipe into a database for us):

INFORMATION 2009/11/22:22:29:32 CSG1305 Request Ticket - Successful. ED0C6898ECA0064389FDD6ABE49A03B9 V4 CGPAddress = 192.168.1.47:2598:localhost:1494 Refreshable = false XData = <?xml version="1.0"?><!--DOCTYPE CtxConnInfoProtocol SYSTEM "CtxConnInfo.dtd"--><CtxConnInfo version="1.0"><ServerAddress>192.168.1.47:1494</ServerAddress><UserName>JSMITH</UserName><UserDomain>cdc</UserDomain><ApplicationName>Outlook 2007</ApplicationName><Protocol>ICA</Protocol></CtxConnInfo> ICAAddress = 192.168.1.47:1494

Okay, so I have logs in a flat file….big deal!

The next step involves integrating them with a free open-source product called “Epilog” by this totally kick-ass company called Intersect Alliance (www.intersectalliance.com). We will configure Epilog to send these flat files to a KIWI syslog server.

So we will go to the Intersect Alliance download site to get Epilog and run through the installation process. Once that is completed you will want to configure your Epilog agent to “tail and send” your STA log files. We do this by telling it where to get the log file and who to send it to.

After the installation go to START->Programs->Intersect Alliance-> Snare/Epilog for Windows

Under “LOG CONFIGURATION”, for STA logs we will use the log type “Generic”, type in the location of the log files, and tell Epilog to use the format STA20%-*.log.

After configuring the location and type of logs, go to “Network Configuration”, type in the IP address of your syslog server and select port 514 (syslog uses UDP 514).

Once done, go to “Latest Events” and see if you see your syslog data there.


Section III: KIWI SYSLOG SERVER

I assume that most Citrix engineers have access to a SQL Server, and since Epilog is free, the only thing in this solution that costs money is KIWI Syslog Server: a whopping $245, in fact. Over the years a number of event correlation solutions have come along; in fact I was at one company where we spent over $600K on a solution that had a nice dashboard and logged files to a flat-file database (WTF? Are you kidding me?!). The KIWI Syslog Server will allow you to set up ten custom database connectors, and that should be plenty for any Citrix administrator who is integrating XenServer, XenApp/Windows servers, Netscaler/AGEE, CAG 2000 and application firewall logs into one centralized database. While you need some intermediate SQL skills, you do not need to be a superstar, and the benefits of digital epidemiology are enormous. My hope is to continue blog posts on how I use this solution, and hopefully you will see benefits beyond looking at your STA logs.

The first thing we need to do is add a rule called “STA-Logs” and filter for strings that will let KIWI know the syslog update is an STA log. We do so by adding two filters. The first one matches the string “GenericLog”.

The second filter is “<Username>”. Together, these two filters will match STA syslog messages.


Now that we have created our filters, it’s time to perform actions. There are two actions we want to perform: run the parsing script (to pull all of the data that was bolded in the log text above) and write that data to a table in a database. You add actions by right-clicking “Action” and selecting “Add Action”.

So our first “Action” is to set up a “Run Script” action. I have named mine “Parse Script”.

Here is the script I use to parse the data (Thank you Mark Schill (http://www.cmschill.net/) for showing me how to do this.)

The Script: (This will scrub the raw data into the parts you want, click “Edit Script” and paste).

##############################
Function Main()

Main = "OK"

Dim MyMsg
Dim Status
Dim UserName
Dim Application
Dim ServerIP

With Fields

Status = ""
UserName = ""
Application = ""
ServerIP = ""

MyMsg = .VarCleanMessageText

' Only successful ticket requests carry the CtxConnInfo XML payload
If ( Instr( MyMsg, "CtxConnInfo.dtd" ) ) Then

Status = "Successful"

UserBeg = Instr( MyMsg, "<UserName>") + 10
UserEnd = Instr( UserBeg, MyMsg, "<")
UserName = Mid( MyMsg, UserBeg, UserEnd - UserBeg)

AppBeg = Instr( MyMsg, "<ApplicationName>") + 17
AppEnd = Instr( AppBeg, MyMsg, "<")
Application = Mid( MyMsg, AppBeg, AppEnd - AppBeg)

SrvBeg = Instr( MyMsg, "<ServerAddress>") + 15
SrvEnd = Instr( SrvBeg, MyMsg, "</")
ServerIP = Mid( MyMsg, SrvBeg, SrvEnd - SrvBeg)

End If

' Map the parsed fields to KIWI's custom columns
.VarCustom01 = Status
.VarCustom02 = UserName
.VarCustom03 = Application
.VarCustom04 = ServerIP

End With

End Function
##############################

Now that we can parse the data we need to create a table in a database with the appropriate columns.

The next step is to create the field format and create the table. Make sure the account in the connect string has DBO privileges to the database. Set up the custom field format with the following fields, and ensure that the type is SQL Database.


As you see below, you will need to set up an ODBC connection for your syslog database, and you will need to provide a connect string here (yes, in clear text, so make sure you know who can log onto the syslog server). When you are all set, click “Create Table” and click “Apply”.


Hopefully, once this is done, you will start filling up your table with STA log entries containing the data from the parse script.

I have included some queries that have been very useful to me. You may also want to integrate this data with SQL Server Reporting Services; with that, you can build a poor man’s Thomas Koetzing tool.

Helpful SQL Queries: (Edit @BEG and @END values)

 

How many users for each day:(Unique users per day)

declare @BEG datetime
declare @END datetime
set @BEG = '2009-11-01'
set @END = '2009-11-30'
select convert(varchar(10),msgdatetime, 111), count(distinct username)
from sta_logs
where msgdatetime between @beg and @end
group by convert(varchar(10),msgdatetime, 111)
order by convert(varchar(10),msgdatetime, 111)

Top 100 Applications for this month:

declare @BEG datetime
declare @END datetime
set @BEG = '2009-11-01'
set @END = '2009-11-30'
select top 100 [application], count(application)
from sta_logs
where msgdatetime between @beg and @end
group by application
order by count(application) desc

Usage by the hour: (Unique users for each hour)

declare @BEG datetime
declare @END datetime
set @BEG = '2009-11-01'
set @END = '2009-11-02'
select convert(varchar(2),msgdatetime,108)+':00', count(distinct username)
from sta_logs
where msgdatetime between @beg and @end
group by convert(varchar(2),msgdatetime,108)+':00'
order by convert(varchar(2),msgdatetime,108)+':00'

Will that be Paper or Panic?

According to the New York Times, 8 out of 10 doctors still use paper record keeping. As I stated in an earlier blog, the stimulus package will spend a “ga-jillion” dollars on converting paper records to electronic medical records. Techworld.com noted in a 2007 article that “a key tenet of HIPAA’s data privacy and security requirements is a need for data access accountability, i.e. the ability to understand ‘who is doing what to which data and by what means?’”
 
In my previous post I talked about how one could secure personally identifiable information by placing the data behind the Netscaler application firewall to block or “X” out Social Security numbers and phone numbers. In this post I will discuss a new feature in the Netscaler 9 product called AAA Traffic Management. This new feature allows you to impose authentication, accountability and authorization on downstream data that may be on servers that do not live within your AD domain infrastructure. Regardless of what platform the content lives on and which identity management system it uses, you can force users to authenticate and have their access logged, meeting several regulatory rules and ensuring the ability to see “who’s doing what to which data”.

 Deployment Scenarios:

 Scenario 1:
The incumbent identity management solution for Company A, a publicly held company on the NYSE, is Active Directory. They recently acquired another company that was not public, is not subject to the regulatory framework that Company A is, and lacks any security measures on key data that now must be secured. To make matters worse, much of the acquired company’s data resides on an OS390 with a 3rd-party web server.

 Solution:

  • You can quickly make this data available by creating a service on the Netscaler that maps to the OS390 web server. 
  • When you create the VIP to present the data, enable authentication and bind a AAA Traffic Management VIP.
  • Create an LDAP Authentication policy that leverages your existing AD Domain Controllers. 

Now when users connect to the VIP on the Netscaler they are redirected to the authentication VIP and forced to log in with their domain credentials. This helps limit the number of logins they have, as well as the amount of RACF administration that needs to be done. Also, the Netscaler will syslog all access to this data.

Scenario 2:

You are a local doctor moving to electronic data by scanning files into a database and making them available via a PDF archive. You are bound by HIPAA to account for every single person who looks at that data. You place the PDFs on a web server, index them and allow end users to access them, but you cannot report on who accessed which PDF archives.

 Solution:

  • Again, we deliver the web server via a VIP on the Netscaler and enable authentication
  • Ensure that everyone who accesses the data has to provide one or two-factor authentication

Now every binary file accessed, including the PDFs, is logged to the syslog database or event correlation engine.

Scenario 3:

You have a web server in the DMZ with a few corporate presentations that you want your staff to be able to access but that you do not want available to the general public. Since the system is in the DMZ you cannot provide AD authentication, but you want to account for everyone who accesses the presentations, and you do not want to use an impersonation account or replicate your existing AD database with ADAM or DirXML.

Solution:

  • Yet again, place the presentations behind a Netscaler and create a VIP to present the web server housing the presentations.
  • Create an authentication policy using Secure LDAP over TCP 636. 
  • Set up an ACL allowing the NSIP to traverse the firewall to a domain controller (or in my case, a VIP consuming several domain controllers)
  • Bind the authentication policy to an Authentication VIP. 
  • Configure the VIP for the presentations to use the FQDN of the Authentication VIP.

Scenario 4:

You are a CRM vendor like envala or Sales Logix and you have a customer who wants to access their customer database hosted as SaaS (cloud computing). They would like users to log in against their own LDAP server to access the CRM data so that identity management can be handled on their end. That way, if a salesman leaves, they can disable his account without the fear of him logging into their CRM database and stealing leads, and without the delay of waiting on a support ticket to remove the account. Also, since they are consuming this as a SaaS solution, they want you to provide logs of who accessed the system.

Solution:

  • Have them make their AD domain controller available securely via LDAPS on TCP 636, or they could use a Netscaler to provide a VIP that brokers to the same domain controller. They can also set up an ACL allowing your NSIP to traverse their firewall for authentication.
  • Create an authentication policy using Secure LDAP over TCP 636 and point the Server to the customer’s LDAP server. 
  • Set up an Authentication VIP assigning the policy you created for the customer to ensure that it consumes the appropriate LDAP server. 
  • Create a VIP on the Netscaler that front-ends their CRM website.
  • Configure the CRM VIP to use the FQDN of the customer’s Authentication VIP.

Figure A: (Shows external users being redirected to an external authentication source via policy)

  

Conclusion:

As I stated previously, my experience with HIPAA is limited, and much of the accountability has been accommodated by back-end database programming, down to the actual record. However, as the security screws become tighter and tighter, the continued access of data with “IUSR_” or “apache” accounts is on a collision course with the mandate(s) for accountability and the demand to be able to report on who accessed what. I believe the AAA Traffic Management feature provides a great tool enabling you to impose your identity management solution on any web-based content regardless of platform. Additionally, you get the ability to perform endpoint analysis on incoming clients, which can be interrogated for specific registry entries, services and files that can be hidden on a system to ensure that only certain computer systems can access certain files. Having been part of a paper-to-electronic transition that did not go so well several years ago, I can attest that having tools that can bridge the regulatory gap between legacy systems and today’s heavily guarded environments will make life a lot easier.

See this technology in action at

http://citrix.utipu.com/app/tip/id/12073/

Electronic Stimulus

According to the Baltimore Sun, President Obama has promised to spend $50 billion over the next five years to coax hospitals, medical centers and the like to begin the process of offering electronic data. So nurses, occupational therapists and other allied health personnel, as well as doctors, may be carrying something like a Kindle around instead of a clipboard. With this comes an extension of their existing regulatory framework, such as HIPAA and CISP (as no one gets away from a visit to the doctor without putting the plastic down these days), and future restrictions that will be put in place as a result of pressure from libertarians and ACLU members.

Ensuring that none of my personally identifiable information is left on someone’s screen while they walk away from their PC is a very big concern. As these systems are brought online, the data must be protected not just from hackers but also from basic behavioral mistakes that could result in someone leaning over a counter and getting my date of birth, Social Security number and credit card number.

While my security experience with HIPAA is very limited, I can say that keeping this information hidden from the wrong eyes is a basic function of any security endeavor. How vendors, system integrators and IT personnel can best bridge this gap could have a direct correlation with how successful they are in this space. How much of that $50 billion over five years will go to IBM? EDS/HP? Perot Systems? What have you done to show these systems integrators, as well as smaller partners, how your product will help them meet this challenge, and how will you deal with a security screw that seems to only get tightened? Fact is, there are millions and millions of medical documents, and finding out which parts of which documents contain sensitive data is virtually impossible. One solution is to pattern-match the data and block it so that it is not visible to the wrong people. You could do this with a DBA who ran ad hoc queries to match the data and replace it with an “X”, but then someone in billing may need that data (keep two copies?), not to mention the staggering cost (Y2K part 2?). The best way I can think of is to place the data behind a device that can catch the patterns in the header and “X” the data out in real time. Enter the Netscaler Platinum, which will not only add compression, authentication, caching and business continuity, but will keep the wrong people from seeing the wrong data. I am not sure when the money will start flowing, but as I understand it, some hospitals have as much as $1.5 million dangled in front of them to meet this challenge.

In this lab, I present how I used the Netscaler Platinum application firewall feature to secure personally identifiable data with a rule called “Safe Object”, as well as how to deal with a zero-day worm/virus using the “Deny URL” rule. The “Safe Object” feature, when coupled with the Netscaler policy engine, gives you the flexibility to ensure that certain job types (nurses, doctors, etc.), based on either login (setting authentication on the VIP) or subnet, do not see things like Social Security numbers, credit cards and other sensitive data, while at the same time ensuring that information is available to billing and accounts receivable personnel.

Materials:

For this lab, I used a basic Dell 1950 G6 with a virtualized Netscaler VPX that functioned as a VPN, allowing me to establish a secure tunnel to the sensitive data on a non-wired network residing on that server. An Apache server on the non-wired network with bogus phone numbers and Social Security numbers was used as the back-end web server. In a real-world scenario, you could either hypervise your web server and place it on a non-wired network, as covered in my “VPX Beyond the Lab” blog, or ACL off your web server so that only the MIP/SNIP of the Netscaler is allowed to access your web content.

See the lab here:
http://citrix.utipu.com/app/tip/id/11733/