by Ariel Silverstone

Evolution of Defense In Depth

Oct 01, 2009

new technologies for defense in depth



As security professionals will tell you, one of the basic principles of a good security program is the concept of Defense in Depth.  Defense in Depth is arguably the most time-tested principle in security, and applies to physical security as well as information security.  Defense in Depth builds on the concept of a hardened “core”, where one places the “crown jewels”.  This core is then surrounded by castle walls and moats, with ever-increasing generality of defense.

Defense in Depth is a great concept, but it comes at a price.  Just as the area covered is wider from layer to layer, so is the cost associated with protecting against ever more plentiful and less specific threats.  A firewall, for example, which typically acts as the first line of defense at the enterprise perimeter, has to protect against a great many varieties of threats, while a server-room door has “only” to be concerned with physical access.

[Image: The Server Room in The Center of The Castle]

Another flaw in the Defense in Depth design is its inherent difficulty to implement vis-à-vis the three basic tenets of security: Confidentiality, Integrity, and Availability.   Why?   Because most forms of defense increase Confidentiality, but make Integrity more difficult to implement and manage.  Any increase in defense, of course, makes Availability that much harder to provide to the users.

A difficulty that I myself have encountered many times is the applicability of Defense in Depth to my “layer 8” problem – the users.  If users are not trained properly, if they are not aware of information protection needs, methods, and the “why?” of it, they become a liability, rather than an asset, to data security.  If you are like me, you find the need to increase our moat-to-user ratio on an ongoing basis ever harder to design, implement, manage, and pay for.   Many of us resign ourselves to the proverbial “this is reality” and define our demarcation line as a physical device, such as a router, an access point, a firewall, or a web server.  There are potentially two things “wrong” with doing so:

  1. We are basically saying  “we are a target just waiting to be attacked” and
  2. We allow most barbarians (in the form of rogue traffic, networks and devices) to hit our gates

If we continue to do so, we approach a mathematical certainty of being hacked, or at least DDoS’ed off the Net.   I really prefer NOT to draw analogies here to the real world; we all know what those are.

Not only is the problem above big enough to cause some to lose sleep, but imagine what happens when we move to a Cloud topology… there we have nothing but moats and walls and front doors.   These front doors can be any browser, on any device, anywhere in the world.   How do you protect yourself against that?   Speaking of losing sleep – I love coffee, but this is ridiculous.

Clouds, Doors and Windows

Because any solution might involve our entire user set, which may include Internet users rather than purely corporate users, it must be:

  1. Easy to teach (i.e. close-to-zero learning curve)
  2. Easy to implement
  3. Applicable to the widest range of platforms possible
  4. Small in delivery and storage footprint
  5. Easy to manage and maintain

Not asking for much, am I? 

Knowing how rapidly threats evolve “in the wild”, I also want a tool that does not go the normal route of “black listing”.  I am more and more convinced that in our world of security we need tools that no longer compare bad signatures or behavior to a database (which is how most antivirus products and firewalls, for example, act); we need to go the “white-list” route.   I will write about that in the future.   Yes, I want a tool that is controlled by me and allows me to choose which domains can be accessed, and under what (time or other) conditions such access can occur.  Let’s add those to my “dream list”:

  1. White list based
  2. Conditional access

To make matters even more interesting, I want control over certain user functions.  (We want, after all, to reduce the number of barbarians and the number of roads leading to our castle, don’t we?)  We want to make sure that the people who request a resource are authorized even to request it.

For example, I would like some files to be readable and writable, but not printable.  Or to be able to control launching certain tools, such as IM clients or browsers, from within the session.   And finally (?) I want a bulletproof audit trail.  Why?  SOX, GLBA, and HIPAA, to name but a few.

  1. Selective access to file functions
  2. Audit trail
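These last two items naturally combine: every decision about a file function should itself land in the audit trail. A minimal sketch, with invented resource names and a plain in-memory log standing in for what would really need to be tamper-evident storage:

```python
from datetime import datetime, timezone

audit_log = []  # stand-in; a real deployment needs tamper-evident storage

# Hypothetical per-resource permissions (names invented for illustration):
# the report may be read and written, but "print" is deliberately absent.
PERMISSIONS = {
    "q3-report.pdf": {"read", "write"},
}

def request_file_op(user: str, resource: str, op: str) -> bool:
    """Check the operation against policy and record the decision."""
    allowed = op in PERMISSIONS.get(resource, set())
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "resource": resource,
        "op": op,
        "allowed": allowed,
    })
    return allowed
```

Note that denials are logged as well as grants; for SOX-style reporting, the record of who *tried* to print a file is often as interesting as who succeeded.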

What should we do?

Until now, I had not seen any solution to this quandary.  Other than awareness and training, there was not a whole lot that could be done.   Even MSSPs will tell you – they are there for a reason, which is that “people will attack you”.

Thanks to my friend Andreas Wuchner, the CISO of Novartis, I ran (head first, mind you!) into a newly launched company called Quaresso.  Founded by a group of smart people with backgrounds in networking and security, they created in “Protect OnQ” both a new product and a service.  Working together, these allow us to do a few things:

  • Firstly, they allow us, the people responsible for the data’s protection, to select who will be allowed to knock on our doors, and with what.  Simply put, if you so choose, people without the proper tool will not even be allowed access to your castle.  “Not allowed on the island”, if you will.  And this permission is manageable in real time.
  • Then, you can select not only which browser is allowed to knock at your door, but also what (and what NOT) that browser is allowed to contain: add-ins, plug-ins, encryption settings, printing ability (or not), security zone settings, and the list goes on.  This effectively extends Defense in Depth to the actual browser session!
  • If this was not enough, you are able to control THE ROUTE that your users take to reach you.  While it may seem unimportant or even impossible, knowledge and selection of the route lets you control a browser’s allowed connections, which protects against DNS hijacking and man-in-the-middle attacks, to name just two examples.
  • Zero-day (zero-minute, really) malware protection – if it is not known, it does not get transported.  Simple and neat!
  • And the final cherry on the icing?  Remember all those viruses, trojans, key loggers and co.?  Due to the implementation of the “armored” browser, data can no longer leak from it to the rest of the operating system.   All passwords and personal information typed into a protected browser session remain confidential and un-recordable.   I know I will sleep better.
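The route-control idea in the list above amounts to pinning: the client refuses to trust a connection whose resolved address or server certificate does not match what was expected. I do not know how Quaresso implements this internally; the sketch below is my own generic illustration, with the host, address, and fingerprint values all invented:

```python
# Hypothetical pins a hardened client could check before trusting a
# connection (all values invented for illustration).
PINNED = {
    "portal.example.com": {
        "ips": {"198.51.100.7"},             # expected DNS resolution
        "cert_sha256": "fp-placeholder",     # expected cert fingerprint
    },
}

def connection_trusted(host: str, resolved_ip: str, cert_sha256: str) -> bool:
    """Reject connections that present the wrong address (DNS hijacking)
    or the wrong certificate (man-in-the-middle) for a pinned host."""
    pin = PINNED.get(host)
    if pin is None:
        return False  # unpinned hosts are not trusted at all
    return resolved_ip in pin["ips"] and cert_sha256 == pin["cert_sha256"]
```

A hijacked resolver can redirect the name, and an interposed proxy can present its own certificate, but neither can satisfy both checks for a pinned host.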

I tested the tool in several scenarios.    The only drawback seems to be the need to install another icon on the users’ screen.  I particularly loved it when, while running the tool, I had a sniffer on and could see no data passing from the browser unencrypted.  So much for data leakage via this route!

So… let’s compare this tool to my wish list.


  • Easy to teach (close-to-zero learning curve): Yes, being browser based and basically requiring a ‘click’.

  • Easy to implement: Yes.  The user is required to download an add-on or a link to their desktop and allow it to run.  The tool does NOT require admin rights on the installing system.  For web applications, a simple check-if-present mechanism allows the application to be On-Q aware.

  • Applicable to the widest range of platforms possible: Yes.  The tool, being browser and Java/ActiveX based, can run on most publicly available browsers.  And since browsers ship with virtually every computer nowadays, the prerequisites are built in.

  • Small delivery and storage footprint: Yes.  The package I tested was less than 450 KB in size.

  • Easy to manage and maintain: Yes.  They offer a partnering console as a tool to monitor/manage/update the remote pieces.

  • White list based: Yes.  It is not only a design philosophy: an administrator at The Bank of Atlantis, for example, can restrict use of the tool to only selected systems within a selected domain, if he so chooses.  Nifty.  Imagine allowing remote users to access ONLY a certain system, but not payroll, for example.

  • Conditional access: Almost.  The domain selectivity is in place and working.  Time/location conditions are not yet implemented and may be, depending on industry demand; for now, this variable is relegated to the accessed system.

  • Selective access to file functions: Yes, and by two separate mechanisms.  Firstly, the control over which browser add-ons are present allows tools like PDF viewers and key loggers to be excluded.  Secondly, the tool can control file operations of the browser, so, for example, you can choose whether or not to provide the ability to print remotely (as in, on the user’s site).

  • Audit trail: Yes.  Extensive auditing is available and, because what I saw was an early product, new reports are being developed continuously.
The tool does all of this while presenting a zero learning curve to the users: they use the same browser they are used to, clicking as they normally would.   No new software, no new directions, nothing.   We have now added another layer of Defense in Depth and greatly increased our control over who comes knocking at our doors.

Try it and let me know what you think.  
