You have your eye on a new piece of security technology or service and you want to evaluate it before deciding whether to commit to the effort of a full deployment. Alternatively, you may already be committed to full-scale deployment but wondering where to start. So where should you deploy it first to test it most effectively and have the greatest impact?
Human nature, caution and conventional wisdom dictate that you should put it in a lab environment or in a low-importance section of your network. That is sensible, isn’t it? The change board will give you less hassle, and if there is a problem, you will get less flak, won’t you?
But will that approach give you the most information and practical experience about the new system’s deployment difficulties, its effectiveness in your environment and what it will detect? Will it give you the maximum protection as soon as possible?
Any tool that gives you fresh insight on the behavior of your systems tends to find something interesting. Those of us who have deployed such things have the stories to go with them – from mundane discoveries such as finding that all servers in one network had the wrong DNS settings and were thus being slowed down, to critical detections of previously unobserved persistent attackers.
However, there is an argument to be made for deploying this new tool on your production systems, close to your crown jewels. These are the things you really want to protect and the environment in which it really needs to work. Yes, this approach is higher risk, but it is also higher benefit. Will a deployment on a low-throughput, obscure bit of network really tell you much? On the other hand, couldn’t one real detection on your primary systems during the evaluation period convince you and your management of the system’s value?
Granted, this may not be a sensible suggestion for inline systems that process all traffic, but with the right technology it can work. Many security technologies monitor traffic and provide alerts rather than enforce actions — or at least they have a mode in which they can act in this way. A new security solution deployed on a span port or network tap may actually pose more of a confidentiality risk to production traffic than a disruption or performance risk. It is also easy to switch off or detach such solutions by removing the span connection. Other security tools rely on collecting logs from your existing devices; building an architecture that allows you to fork and divert streams of log events makes it easy to introduce new security tooling of this kind.
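To make the log-forking idea concrete, here is a minimal sketch of a fan-out point in a log pipeline. The `LogFork` class and the sink names are hypothetical, for illustration only: real deployments would typically use a log shipper or message bus, but the principle is the same — a trial tool can be attached to, and detached from, the production event stream without disturbing the primary consumer.

```python
class LogFork:
    """Fans each log event out to every attached sink, so a trial
    security tool can be connected or detached without touching
    the primary pipeline."""

    def __init__(self):
        self._sinks = {}

    def attach(self, name, sink):
        # sink is any callable that accepts a single event dict
        self._sinks[name] = sink

    def detach(self, name):
        # removing the trial tool is as simple as dropping its sink
        self._sinks.pop(name, None)

    def publish(self, event):
        # every attached sink sees a copy of every event
        for sink in self._sinks.values():
            sink(event)


# Usage: the existing SIEM keeps receiving events while a
# trial tool taps the same stream during the evaluation.
fork = LogFork()
siem_events, trial_events = [], []
fork.attach("siem", siem_events.append)
fork.attach("trial_tool", trial_events.append)

fork.publish({"user": "alice", "action": "login"})
fork.detach("trial_tool")   # evaluation over: detach cleanly
fork.publish({"user": "bob", "action": "login"})
```

The design choice worth noting is that detaching the trial sink is a one-line operation that cannot affect the SIEM's view of the stream — the same property that makes a span port or tap a low-disruption insertion point for network monitoring.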
As an example, consider the evaluation of a new security monitoring tool, perhaps one with user and entity behaviour analytics (UEBA). Will you get much information from deploying it on a test or staging environment that typically has a small number of users and only occasional traffic? Or would you get a better sense of its value from connecting it to your production active directory, primary applications and remote access system? Wouldn’t that give you a better idea of how easily it can be connected, how well it copes with actual production loads and whether it can really differentiate between normal and suspicious behaviour?
Designing taps such as those mentioned above into your network and log architectures future-proofs your environment, making it easier to evaluate other products down the road and deploy them into final production. It can also help in emergencies, as incident response teams wishing to deploy their tooling will be looking for very similar facilities overseeing your most critical systems.
So next time you have a new security system to test, think about ignoring conventional wisdom and throwing (some) caution to the wind. Sometimes the radical step is the right one. Deploying security tools on your crown jewels first may be the optimal approach.