According to the New York Times, 8 out of 10 doctors still use paper record keeping. As I stated in an earlier blog, the stimulus package will spend a “ga-jillion” dollars on converting paper records to electronic medical records. A 2007 Techworld.com article noted that “a key tenet of HIPAA’s data privacy and security requirements is a need for data access accountability, i.e. the ability to understand ‘who is doing what to which data and by what means?’”
In my previous post I talked about how one can secure personally identifiable information by placing the data behind the Netscaler Application Firewall to block or “X” out Social Security Numbers and phone numbers. In this post I will discuss a new feature in the Netscaler 9 product called AAA Traffic Management. This feature allows you to impose Authentication, Authorization and Accounting on downstream data that may live on servers outside your AD domain infrastructure. Regardless of which platform the content lives on and which identity management system it uses, you can force users to authenticate and have their access logged, meeting several regulatory rules and ensuring the ability to see “who’s doing what to which data”.
The incumbent identity management solution for Company A, a publicly held company on the NYSE, is Active Directory. Company A recently acquired another company that was not public, was not subject to the regulatory framework that Company A is, and lacks any security measures on key data that must now be secured. To make matters worse, much of that data resides on an OS/390 with a 3rd-party web server.
- You can quickly make this data available by creating a service on the Netscaler that maps to the OS390 web server.
- When you create the VIP that presents the data, enable authentication and bind an AAA Traffic Management VIP.
- Create an LDAP Authentication policy that leverages your existing AD Domain Controllers.
Now when users connect to the VIP on the Netscaler they are redirected to the Authentication VIP and forced to log in with their domain credentials. This helps limit the number of logins they have to manage, as well as the amount of RACF administration that needs to be done. Also, the Netscaler will syslog all access to this data.
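The steps above can be sketched in the Netscaler 9 CLI roughly as follows; all names, IP addresses and the AD bind account are placeholders for illustration, and the SSL authentication vserver needs a certificate bound before it will come up:

```
# Service that maps to the OS/390 web server (IP is a placeholder)
add service svc_os390 10.1.1.20 HTTP 80

# VIP that presents the data
add lb vserver vs_records HTTP 10.1.1.100 80
bind lb vserver vs_records svc_os390

# AAA Traffic Management (Authentication) VIP
add authentication vserver auth_vs SSL 10.1.1.101 443

# LDAP policy that leverages the existing AD domain controllers
add authentication ldapAction act_ad -serverIP 10.1.1.10 -ldapBase "dc=companya,dc=com" -ldapBindDn "svc_ns@companya.com" -ldapLoginName sAMAccountName
add authentication ldapPolicy pol_ad ns_true act_ad
bind authentication vserver auth_vs -policy pol_ad

# Enable authentication on the content VIP and redirect users to the Authentication VIP
set lb vserver vs_records -Authentication ON -AuthenticationHost auth.companya.com
```

In this sketch, auth.companya.com would need to resolve to the Authentication VIP's address so that the redirect works.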
You are a local doctor who is moving to electronic data by scanning files into a database and making them available as a PDF archive. You are bound by HIPAA to account for every single person who looks at that data. You place the PDFs on a web server, index them and allow end users to access them, but you cannot report on who accessed which PDF archives.
- Again, deliver the web server via a VIP on the Netscaler and enable authentication.
- Ensure that everyone who accesses the data has to provide one- or two-factor authentication.
Now every binary file that is accessed, including the PDFs, is logged to the syslog database or event correlation engine.
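As a rough sketch (the server IP and names here are placeholders), pointing the Netscaler's audit logging at an external syslog host or event correlation engine looks like this:

```
# Send audit records to the syslog / event correlation host
add audit syslogAction act_syslog 10.1.1.50 -logLevel ALL
add audit syslogPolicy pol_syslog ns_true act_syslog
bind system global pol_syslog -priority 10
```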
You have a web server in the DMZ that hosts a few corporate presentations that you want your staff to be able to access but that you do not want available to the general public. Since the system is in the DMZ you cannot provide AD authentication, but you want to account for everyone who accesses the presentations, and you do not want to use an impersonation account or replicate your existing AD database with ADAM or DirXML.
- Yet again, place the presentations behind a Netscaler and create a VIP to present the web server housing the presentations.
- Create an authentication policy using Secure LDAP over TCP 636.
- Set up an ACL allowing the NSIP to traverse the firewall to a domain controller (or in my case, a VIP consuming several domain controllers)
- Bind the authentication policy to an Authentication VIP.
- Configure the VIP for the presentations to use the FQDN of the Authentication VIP.
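The secure-LDAP piece of the steps above might look like the following in the CLI; IPs, names and the bind account are illustrative, and note the -serverPort 636 and -secType SSL on the action:

```
# Secure LDAP (TCP 636) against the DC, or against a VIP brokering several DCs
add authentication ldapAction act_ldaps -serverIP 10.1.1.10 -serverPort 636 -secType SSL -ldapBase "dc=corp,dc=com" -ldapBindDn "svc_ns@corp.com" -ldapLoginName sAMAccountName
add authentication ldapPolicy pol_ldaps ns_true act_ldaps

# Bind the policy to an Authentication VIP and point the content VIP at its FQDN
add authentication vserver auth_dmz SSL 172.16.1.5 443
bind authentication vserver auth_dmz -policy pol_ldaps
set lb vserver vs_presentations -Authentication ON -AuthenticationHost auth.corp.com
```

Here vs_presentations is assumed to be the VIP already created for the presentation web server.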
You are a CRM vendor like envala or Sales Logix and you have a customer who wants to access their customer database hosted as SaaS (cloud computing). They would like users to log in against their own LDAP server to access the CRM data so that identity management can be handled on their end. That way, if a salesman leaves, they can disable his account without the fear of him logging into the CRM database and stealing leads, and without the delay of waiting on a support ticket to remove the account. Also, since they are consuming this as a SaaS solution, they want you to provide logs of who accessed the system.
- Have them make their AD domain controller available securely via LDAPS on TCP 636, or they could use a Netscaler to provide a VIP that brokers to the same domain controller. They can also set up an ACL allowing your NSIP to traverse their firewall for authentication.
- Create an authentication policy using Secure LDAP over TCP 636 and point the Server to the customer’s LDAP server.
- Set up an Authentication VIP assigning the policy you created for the customer to ensure that it consumes the appropriate LDAP server.
- Create a VIP on the Netscaler that front-ends their CRM website.
- Configure the VIP for the CRM site to use the FQDN of the customer’s Authentication VIP.
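A hedged sketch of the per-customer pieces, using documentation-range IPs and made-up names; each customer would get its own LDAP action and Authentication VIP pointing at that customer's LDAPS endpoint:

```
# Authentication VIP consuming Customer A's LDAPS endpoint
add authentication ldapAction act_custA -serverIP 203.0.113.10 -serverPort 636 -secType SSL -ldapBase "dc=customera,dc=com" -ldapBindDn "svc_crm@customera.com" -ldapLoginName sAMAccountName
add authentication ldapPolicy pol_custA ns_true act_custA
add authentication vserver auth_custA SSL 198.51.100.5 443
bind authentication vserver auth_custA -policy pol_custA

# Front-end Customer A's CRM site and send logins to their Authentication VIP
add lb vserver vs_crm_custA HTTP 198.51.100.6 80
set lb vserver vs_crm_custA -Authentication ON -AuthenticationHost auth.customera.example.com
```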
Figure A: (Shows external users being redirected to an external Authentication source via Policy)
As I stated previously, my experience with HIPAA is limited, and much of the accountability I have seen was accommodated by back-end database programming, down to the actual record. However, as the security screws become tighter and tighter, the continued access of data with “IUSR_” or “apache” accounts is on a collision course with the mandates for accountability and the demand to be able to report on who accessed what. I believe the AAA Traffic Management feature provides a great tool for imposing your identity management solution on any web-based content, regardless of platform. Additionally, you get the ability to perform endpoint analysis on incoming clients, which can be interrogated for specific registry entries, services and files that can be hidden on a system to ensure that only certain computer systems can access certain files. Having been part of a paper-to-electronic transition that did not go so well several years ago, I can attest that having tools that bridge the regulatory gap between legacy systems and today’s heavily guarded environments will make life a lot easier.
See this technology in action at
According to the Baltimore Sun, President Obama has promised to spend $50 billion over the next five years to coax hospitals, medical centers and the like to begin offering electronic data. So nurses, occupational therapists and other allied health personnel, as well as doctors, may be carrying something like a Kindle around instead of a clipboard. With this comes an extension of their existing regulatory framework, such as HIPAA, CISP (as no one gets away from a visit to the doctor without putting the plastic down these days) and future restrictions that will be put in place as a result of pressure from Libertarians and ACLU members.
Ensuring that none of my personally identifiable information is left on someone’s screen while they walk away from their PC is a very big concern. As these systems are brought online, the data must be protected not just from hackers but also from basic behavioral mistakes that could result in someone leaning over a counter and getting my date of birth, Social Security Number and credit card number.
While my security experience with HIPAA is very limited, I can say that keeping this information hidden from the wrong eyes is a basic function of any security endeavor. How vendors, system integrators and IT personnel bridge this gap could have a direct correlation with how successful they are in this space. How much of that $50 billion over five years will go to IBM? EDS/HP? Perot Systems? What have you done to show these systems integrators, as well as smaller partners, how your product will help them meet this challenge, and how will you deal with a security screw that seems to only get tighter? The fact is, there are millions and millions of medical documents, and finding out which parts of which documents contain sensitive data is virtually impossible. One solution is to pattern-match the data and block it so that it is not visible to the wrong people. You could do this with a DBA who ran ad hoc queries to match the data and replace it with an “X”, but then someone in billing may need that data (keep two copies?), not to mention the staggering cost (Y2K Part 2?). The best way I can think of is to place the data behind a device that can match the patterns in the HTTP response and “X” the data out in real time. Enter the Netscaler Platinum, which will not only add compression, authentication, caching and business continuity, but will keep the wrong people from seeing the wrong data. I am not sure when the money will start flowing, but as I understand it, some hospitals have as much as $1.5 million dangled in front of them to meet this challenge.
In this lab, I present how I used the Netscaler Platinum Application Firewall feature to secure personally identifiable data with a rule called “Safe Object,” as well as how to deal with a zero-day worm/virus using the “Deny URL” rule. The “Safe Object” feature, when coupled with the Netscaler policy engine, gives you the flexibility to ensure that certain job types (nurses, doctors, etc.), based on either login (setting authentication on the VIP) or subnet, do not see things like Social Security Numbers, credit cards and other sensitive data, while ensuring that the information remains available to billing and accounts receivable personnel.
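A minimal CLI sketch of the two rules described above, assuming a fresh Application Firewall profile put into play with a global policy; the profile/policy names, the SSN regular expression and the Deny URL pattern are all illustrative:

```
# Application Firewall profile
add appfw profile prof_medrec -defaults basic

# Safe Object rule: X-out anything in responses matching an SSN pattern
bind appfw profile prof_medrec -safeObject SSN "[0-9]{3}-[0-9]{2}-[0-9]{4}" -maxMatchLength 11 -action x-out

# Deny URL rule: block requests for a known zero-day attack path (placeholder pattern)
bind appfw profile prof_medrec -denyURL ".*cmd\.exe.*"

# Put the profile in play for all traffic
add appfw policy pol_medrec ns_true prof_medrec
bind appfw global pol_medrec 10
```

To restrict the X-out behavior to certain job types, you would instead bind the policy to the specific VIPs (or use expressions keyed on login or subnet) rather than binding it globally.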
For this lab, I used a basic Dell 1950 G6 with a virtualized Netscaler VPX that functioned as a VPN, allowing me to establish a secure tunnel to the sensitive data on a non-wired network on that server. An Apache server on the non-wired network, populated with bogus phone numbers and Social Security Numbers, was used as the back-end web server. Again, in a real-world scenario, you could either hypervise your web server and place it on a non-wired network, as covered in my “VPX Beyond the lab” blog, or ACL off your web server so that only the MIP/SNIP of the Netscaler is allowed to access your web content.
See the lab here:
By John M. Smith
Okay, so the Netscaler appliance has been virtualized, so now what?
On May 18th of this year, Citrix allowed anyone who wanted to get familiar with the virtual Netscaler/AGEE to download a beta version of its Netscaler VPX appliance. This appliance runs on the XenServer 5.0 hypervisor and is a fully functional Netscaler product that includes AGEE, Application Firewall, caching and basically everything that comes with a Platinum Netscaler.
So, aside from a lab environment, what can we do with this new virtualized Netscaler/AGEE? Well, my Platinum 7000 has one CPU and 1GB of memory. I installed the VPX appliance on my Dell 1950 G9, which has 16GB of memory and two dual-core CPUs (4 processors). I am sure that the overhead of the hypervisor will be optimized, and we will be able to make up for it by pushing an additional proc or more memory to the VPX. Out of the box, the VPX has very similar resources (1GB RAM, 1 CPU) to my existing 7000. This leaves me with at least 14GB of RAM with the hypervisor running in the background. This is where I believe the virtual Netscaler can be leveraged to provide exceptional security and presents an optional “datacenter-in-a-box” solution.
Securing your incumbent web server:
One of my first jobs out of college was as the local health inspector. While inspecting several stores in a really bad neighborhood, I came across a particular convenience store that had been robbed so many times that you had to walk up to the window and tell the owner what you wanted; he would go and get it, you would conduct the transaction at the point of entry, and you were never allowed inside. The public internet is the digital equivalent of the most run-down, crime-ridden neighborhoods in the US. The Netscaler, with its security features, offers similar functionality without the considerable delay that my store owner imposed. While things have quieted down in the last few years, for a while there it seemed as though IIS had become the internet’s whipping boy. Other web servers such as Apache, iPlanet and the like have also had security flaws that left them vulnerable. I recall back in 2003, while working as a security analyst, saying during a debate with the team about web security, “Well, if we want true security, I think the solution is to unplug all of the RJ-45 ports on each of the servers.” All joking aside, despite the use of firewalls, ACLs and hardened builds, you are still at the mercy of the exposed port and the code behind it. The Netscaler’s application firewall is a perfect solution for securing insecure code. The VPX takes it a step further: I can install my web server into the XenServer hypervisor and run it on an internal (non-wired) network. In a way, I can achieve the level of security that was brought up in jest six years ago. What I have done in my lab with the VPX is bind an internal network on the hypervisor and provide a multi-homed VPX that can present the internal web server’s content. In this manner, my OS is never even exposed to the network. It functions much like my shopkeeper who refused to let anyone enter the store.
Why is this a good thing?
Well, first off, your operating system is never on the public internet, and I am not dependent on another device (firewall, IOS-based ACL) to keep other hosts from sending packets to my server. Over the years, Windows admins have become very adept at securing their IIS boxes with a series of registry hacks, disabled services and least-privilege security solutions. While this is great most of the time, on more than one occasion it has served as a “get-out-of-jail-free” card for a vendor who says, “Well, I’m sorry, but for this app to work the IUSR_ account needs full control of the system32 directory AND HKLM” (yes, I was ACTUALLY told that once, with a piece of FINANCIAL software no doubt!). If your organization does not have a scripted build solution such as Altiris, then these custom builds become time-intensive. While I am not saying that hypervising your web server and placing it behind the VPX will eliminate the need for Windows security, I do like the idea of my OS never being exposed other than from behind a Netscaler and the arsenal that comes with it.
Datacenter in a box:
Well, I gave a gig to my Netscaler and four gigs to my web server…I have 10 gigs left…what to do…what to do? So, while my web server is in a nice safe cocoon, that is all fine and dandy if you just have web content on it. However, that isn’t how it works today; today your web server references back-end services such as databases, XML/SOA-based web services, etc.
In my current environment, I have a PIX firewall in front, my web farm, another PIX and then my back-end resources. I think this is a pretty typical setup for most enterprises. For SMBs and less-regulated shops, it may be feasible to hypervise the entire environment and locate the back-end services that your web server needs on the same internal (non-wired) network within the XenServer. This allows back-end transactions to occur on the bus rather than on a network that may have varying levels of performance. Today’s x64 architecture is extremely fast and can easily outperform the throughput of a switched network. So rather than traversing a firewall and moving through a few layer 3 hops and a few layer 2 devices to get to your data, the back-end data is located on the same bus as the web server. This puts all communications on the same piece of hardware. Like I said, some larger enterprise security groups will likely not allow this, since it puts everything on one piece of hardware; and while you can segment at layer 3 by using the virtual switch that comes with XenServer, that does not provide the physical segmentation that will sufficiently ease the concerns of internal security groups, even if you add another internal network, hypervise a firewall and place it between the web server and the back-end database servers. Most security teams are, rightfully, paid to see the glass as half-empty. That said, with the amount of RAM and disk space you can put in today’s Intel-based servers, it is hard to ignore the potential to put an entire n-tier application into one piece of hardware where bus I/O is the only bottleneck you have to worry about. Securing this model and getting regulatory buy-off will be the major challenge.
Working outside of the hardware?
So, let’s say you work in a heavily regulated environment where the datacenter-in-a-box solution is not an option. You can still hypervise your Netscaler, use it as a security wrapper, and still allow the web server to consume SOAP-based web services. The same way I create a VIP on my external network and present it to internet-based IPs, I can create a VIP internally and present it to my hypervised web server, allowing it to connect to and consume web services on my corporate network. If my web server on the closed system needs to consume XML services located on another host on the corporate intranet, I can create a VIP on the same VPX and present it to my internal network for consumption by internal virtual machines. Example: I have a VPX presenting an external VIP to end users on IP address 18.104.22.168 that is a portal to a web server on an internal (non-wired) XenServer network located at 10.10.10.43. This same web server consumes XML/SOA-based services on a host located on the corporate network at 192.168.11.37. In this scenario, I would create an internally facing VIP on 10.10.10.33 that presents these web services to my internal host using the VPX as the broker. Internet requests for my web server are handled by the VPX and routed to 10.10.10.43, and internal SOA communications are handled by the VPX and routed from the internal network to 192.168.11.37. The Netscaler can terminate any TCP-based port on a VIP, so I can present both web services and SQL listeners via an internal VIP to the protected web server. Thus my web server can make all of the necessary external calls while remaining inside the protective bubble provided by XenServer and the VPX.
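Using the addresses from the example above, the internal brokering VIP might be configured roughly like this; the SQL host and port in the second half are assumptions added purely for illustration:

```
# Internal VIP on the non-wired network that brokers SOA calls to the corporate host
add service svc_soa 192.168.11.37 HTTP 80
add lb vserver vs_soa_internal HTTP 10.10.10.33 80
bind lb vserver vs_soa_internal svc_soa

# Any TCP port can be terminated on a VIP, e.g. a SQL listener (host and port assumed)
add service svc_sql 192.168.11.40 TCP 1433
add lb vserver vs_sql_internal TCP 10.10.10.34 1433
bind lb vserver vs_sql_internal svc_sql
```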
In addition to the added security that you get with the Netscaler, you also get the ability to do Web 2.0 pushes, content redirection, external authentication and URL rewriting. Also, as most systems are still running 32-bit operating systems, a 16GB server can be hypervised to provide two or three instances of a web server front-ended by the Netscaler. Keep in mind that I can double the resources presently on my Platinum 7000 in my VPX and still have enough resources for a pair of web servers. The ability to readily allocate resources (RAM, disk, etc.) can also let you be much more aggressive with CPU-intensive app firewall rules. While I keep running home to security, the performance benefits of a Netscaler are just as important, making the VPX an exceptional solution for DMZ-based web solutions as well as those applications that require a stock build. The ability to allow for external authentication would let CRM companies like Envala and Sales Logic authenticate end users against their customers’ LDAP/authentication URLs prior to delivering their custom content, without having to procure Netscaler hardware.
How do I manage my VM’s?
What I did for my GSLB lab was set up one VPX to function as an Access Gateway and two additional VPXs to serve as GSLB lab machines. The VPX AGEE is multi-homed, providing VPN access to the internal non-wired network and allowing me to patch, RDP and administer internal hosts the same way you would connect to your corporate network from home. While I used three VPX machines for this CBT, a single VPX can provide load balancing as well as a VPN connection for administrators to manage internal (non-wired) systems. We have a system in our DMZ that cannot get to a DNS server, and sending it to “update.microsoft.com” is a bit of a hassle every month when we patch. To remedy this, we created a VIP on the same network that terminated at our WSUS server and edited the security policy to use this VIP for updates. The same would work for an internal, non-wired network, where you would deliver a VIP that provides access to the internal WSUS box (or whichever patching strategy you have) and allow your protected hosts to consult this VIP for patches, updates, etc., in the same manner you would allow them to consume external database and web services.
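The WSUS trick above might look like the following; the WSUS server address is a placeholder, and 8530 is assumed here as the common WSUS listening port:

```
# VIP on the internal (non-wired) network that terminates at the WSUS box
add service svc_wsus 10.10.10.60 HTTP 8530
add lb vserver vs_wsus HTTP 10.10.10.50 8530
bind lb vserver vs_wsus svc_wsus
```

The protected hosts' Windows Update policy would then be pointed at the VIP (e.g. http://10.10.10.50:8530) instead of update.microsoft.com.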
Other benefits of the VPX:
As the Citrix sales people will likely attest, trying to market a network solution that does not have a “CISCO” bezel on it can be tough. At times, I find myself running into the “Cisco mafia,” who will insist on an XSS/HSM solution. And while the technical battle was long over by version 7 of the Netscaler, the marketing battle seems to wage on. While I think Cisco makes great products and I greatly appreciate the network engineers who lay the very foundation we communicate on, I feel like web-based load balancing technology should never have been in the hands of network engineers in the first place. While my connectivity mates can explain BGP peers, OSPF route convergence and subnetting on a grease board faster than I can use the calculator, explaining a URL rewrite, an HTTP callout or SOAP to them can produce a deer-in-the-headlights look. Additionally, while I am passionate about application delivery, some connectivity folks can be a bit “eh…” when it comes to load balancing. Also, anyone who calls a Netscaler a load balancer does not really know what a Netscaler is.
If the resource cost of the hypervisor can be marginalized (which I believe it can; in fact, booting from BSD and launching from a flash drive, isn’t it kind of a VM already?), then you will be putting the same server in the rack that you have always put in the rack. The fact that it is running XenServer with a web farm fronted by an application switch will go largely unnoticed. While I am not for doing anything under the radar, as that will get you fired in some shops, I tend to feel that marketing this as a secure wrapper for web services is a pretty good way to deliver this solution without raising the hackles of the connectivity staff.
Business continuity is another benefit of this solution. In 2003, I pleaded with VMware to make something called “VMware-lite,” where I could hypervise all of my servers, and I told them that they had accidentally built a great business continuity solution. Currently, if any of my Netscaler hardware fails, I have to send out for a replacement. With the VPX, I don’t have to worry about having a new chassis sent to me in the event of a failure.
The benefits of the Netscaler speak for themselves, and I think the VPX will go a long way toward helping web/server admins get their feet wet with this technology. I would expect most web server administrators to take to it like a duck to water. If it is possible to bundle this solution as an add-on, then we may see a change in how web services and content are delivered. Hypervisors are here to stay, and the ability to secure your web environment behind the VPX could boost XenServer in this space and put enterprise application delivery at the fingertips of server administrators everywhere. The big question will be how quickly and to what extent this is adopted, but at a minimum, allowing the VPX to run on incumbent vendor hardware will be a big step in standardized environments. I, for one, like the virtualized Netscaler/CAG and look forward to using it beyond the lab.
GSLB Lab using VPX: (Sorry, utipu is done for good, I upgraded so I could have videos on my blog)