Technological brownfields: You can’t secure what you can’t see

Global economic growth may be stalling, but the cost of data breaches keeps climbing. The 13th annual 2018 Cost of a Data Breach Study: Global Overview, released in mid-July by IBM and Ponemon Institute, puts the global average cost of a data breach at $3.86 million USD, up 6.4% over last year. In the US, Canada, and Western Europe, cost per breach is higher still: between $4 million and almost $8 million USD. The number of records compromised per breach is growing as well, and the global cost per compromised record now verges on $150 USD, up almost 5% since 2017.

These scary figures represent the hard costs of detection, escalation, notification, legal and expert fees, fines, givebacks to customers, and judgments, plus the soft costs of business disruption, damage to reputation, and other elements. For organisations in, or doing business with, Europe, these already-staggering downsides may be magnified by General Data Protection Regulation (GDPR) overheads, liabilities, and possible fines. It is unsurprising, therefore, that 60% of small businesses fold within six months of a data breach, or that the viability of even much larger enterprises can be shaken. To mitigate this risk, organisations small and large are often encouraged to take four logical steps: 1) audit security status, 2) assign accountability (i.e., pick and empower a security lead), 3) identify and inventory the so-called “crown jewels” (e.g., databases containing personally identifiable information), and 4) implement processes and solutions to defend these as a first priority. With the IBM/Ponemon study putting the average breach cost for organisations that fully deploy security automation at around $2.88 million, well below the global average, this seems like a no-brainer investment.

But there’s a catch. If your organisation is like most, your critical database crown jewels are stored in the middle of a technological brownfield: an interdependent mix of new, older, and purely legacy systems that together present an enormous attack surface. Most successful hacks work not by compromising critical databases directly, but by subverting unhardened peripheral assets and leveraging their logical or physical location and entanglement with mission-critical business services. Hacked brownfield assets may let attackers acquire secrets left in plain sight (e.g., privileged SSH keys copied to jumpboxes), attack database management APIs from recognised IP addresses, or simply run a host of brute-force exploits, unobserved.

The bigger your brownfield, moreover, the bigger your problem. Sam Bisbee, CTO of security firm Threat Stack, was quoted in eWeek as noting: “A thousand hosts that each have one failed login is harder to spot than a thousand failed logins on a single host.” The larger and more complex your environment, the more opportunities there are to penetrate it, and the easier it is for hackers to evade discovery. That matters, because the longer black hats are allowed to roam freely, the higher the eventual cost per incident. The IBM/Ponemon report calculates that hacks persisting over 30 days cost $1 million USD more to resolve than breaches discovered more quickly. Sadly, average compromise durations range from around 60 to almost 200 days.
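Bisbee’s point can be made concrete with a toy aggregation (a hypothetical sketch; the host names, source addresses, and threshold are invented for illustration): correlating failed-login events across an entire fleet surfaces a distributed brute-force source that no single host’s counter would flag.

```python
# Hypothetical sketch: aggregate failed-login events fleet-wide so that a
# distributed brute force (one failure per host) becomes visible. All host
# names, IPs, and the >= 10 threshold are illustrative assumptions.

def distributed_bruteforce_sources(events, min_hosts=10):
    """events: iterable of (host, source_ip) failed-login records.
    Returns source IPs whose failures are spread across many hosts."""
    hosts_per_ip = {}
    for host, src in events:
        hosts_per_ip.setdefault(src, set()).add(host)
    # One failure on each of many hosts is more suspicious, in aggregate,
    # than many failures on a single host (which any per-host check catches).
    return {src: len(hosts) for src, hosts in hosts_per_ip.items()
            if len(hosts) >= min_hosts}

events = [(f"web{i:03d}", "203.0.113.7") for i in range(1000)]  # 1 failure per host
events += [("db01", "198.51.100.4")] * 1000                     # 1000 failures, 1 host
print(distributed_bruteforce_sources(events))  # flags only 203.0.113.7
```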

So how big is your brownfield? Chances are good you don’t know exactly. Documentation of older systems may be incomplete. Generations of manual changes may go unrecorded. Key people leave, compromising institutional knowledge. Human attention also flags: Gartner estimates, for example, that 30% to 50% of public cloud costs are wasted, partly because provisioned VMs are abandoned or left idle. All these uncatalogued assets increase your liability.
The takeaway: directly securing critical databases and implementing solutions for data-loss prevention are hugely important — but only part of a complete solution. The other part involves inventorying your brownfield, figuring out what’s running in there, and applying processes to reduce attack surface, harden the remainder, and make it observable.

Enterprise IT monitoring can be a cornerstone of this more-enlightened and pragmatic security strategy. As this Information Age article detailed, mature IT monitoring solutions can help you:

Discover and catalogue your entire IT resource inventory, on-premises and in the cloud. Enterprise-grade IT monitoring systems typically include multiple subsystems for discovering hosts and network equipment within IP address ranges and acquiring metadata on Windows equipment using Windows Discovery Service, WMI, Active Directory and other mechanisms. Further autodiscovery logic can sometimes inventory the software running on servers, determine its configuration, and map relationships and dependencies among resources, for example, revealing which hosts and software components co-reside on each subnet, or cooperate as nodes in a cluster. Also sometimes included are utilities for importing host inventory from legacy monitoring platforms, Configuration Management Databases (CMDBs), and other repositories.
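As a simplified illustration of what such autodiscovery does (a sketch, not any vendor’s implementation; the subnet and port are assumptions), a basic sweep might probe each address in a range for an open service port. Real discovery engines combine ICMP, SNMP, WMI, and other protocols at far greater scale.

```python
# Minimal host-discovery sketch: TCP-probe every address in a CIDR range.
# The range and port are illustrative; enterprise discovery subsystems use
# many more protocols and gather much richer metadata.
import ipaddress
import socket

def discover_hosts(cidr, port=22, timeout=0.2):
    """Return addresses in `cidr` that accept a TCP connection on `port`."""
    live = []
    for addr in ipaddress.ip_network(cidr).hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((str(addr), port)) == 0:  # 0 means connected
                live.append(str(addr))
    return live

# Example: discover_hosts("192.168.1.0/24") would list lab hosts answering on SSH.
```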

Once your inventory is completely catalogued, you’re in a far better position to begin reducing brownfield attack surface. You can quickly retire unused systems. Eliminate exposure to the public internet for all systems that don’t require it. Update credentials. Harden everything. Now that your monitoring system is a practical “single source of truth” for everything in your hybrid IT infrastructure, it can work as a central repository for this truth as you progress in making changes.

Alert effectively on many security-relevant anomalies. Strictly speaking, IT monitoring systems complement dedicated information security solutions, which specialise in providing detailed information about short-duration abnormal connections, suspicious database accesses, attempts to exfiltrate valuable files, and other subtle evidence of misbehaviour. But a monitoring system can still be configured to detect and alert on a host of suspicious anomalies, bringing them quickly to operator attention. Examples include unexpected activity on normally idle systems; the sudden appearance of large traffic volumes and performance hits on key business services (e.g., from DDoS attacks); or too many failed login attempts. Some of the relevant service checks are included in monitoring packs for operating systems and applications, or can be set up with point-and-click ease using Business Service Monitoring (BSM) features. All you need to do is set appropriate alerting thresholds.
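To illustrate the failed-login case, a minimal Nagios-style service check might look like the following. The log format, regex, and thresholds are illustrative assumptions, though the 0/1/2 exit-code convention (OK/WARNING/CRITICAL) is the standard plugin contract such platforms consume.

```python
# Sketch of a Nagios-style failed-login check. Thresholds and the sshd log
# pattern are illustrative assumptions; tune both to your environment.
import re

FAILED = re.compile(r"Failed password for (invalid user )?(\S+) from (\S+)")

def check_failed_logins(lines, warn=5, crit=20):
    """Count failed-login lines and map the count to a plugin exit status."""
    count = sum(1 for line in lines if FAILED.search(line))
    if count >= crit:
        return 2, f"CRITICAL - {count} failed logins"
    if count >= warn:
        return 1, f"WARNING - {count} failed logins"
    return 0, f"OK - {count} failed logins"

# Usage as a plugin: read recent log lines, print the message, and exit
# with the returned status so the monitoring engine can alert on it, e.g.:
#   with open("/var/log/auth.log") as f:        # path varies by distro
#       status, message = check_failed_logins(f)
#   print(message); raise SystemExit(status)
```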

Integrate security alerts with operations process discipline. Integrating your security or DevSecOps solution with your monitoring platform (e.g., via the monitoring platform’s REST interface) lets the security system push alerts to IT Ops through established, centrally-administered channels. Back-end integration between the monitoring engine and operations management tools (e.g., ServiceNow) enables further automation: triggering creation of issue tickets, opening issue-dedicated collaboration channels (e.g., via Slack), and helping responders stay organised and in touch.
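A sketch of such a REST push might look like the following. The endpoint path, token header, and payload fields here are hypothetical stand-ins, not any specific platform’s API; consult your monitoring platform’s actual REST reference before wiring this up.

```python
# Hedged sketch: forward a security tool's finding to a monitoring platform
# over REST. The /rest/events path, X-Auth-Token header, and payload keys
# are hypothetical assumptions, not a documented API.
import json
import urllib.request

def build_alert(host, summary, severity="critical"):
    """Shape a security finding as a monitoring-platform event payload."""
    return {"object": host, "summary": summary, "severity": severity}

def push_alert(base_url, token, alert):
    """POST the alert as JSON; returns the HTTP status code."""
    req = urllib.request.Request(
        f"{base_url}/rest/events",                   # hypothetical endpoint
        data=json.dumps(alert).encode(),
        headers={"Content-Type": "application/json",
                 "X-Auth-Token": token},             # hypothetical header
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# push_alert("https://monitor.example.com", "s3cret",
#            build_alert("web042", "exfiltration attempt detected"))
```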

Gain insight, fast. Enterprise IT monitoring is all about gaining insight: drilling down into monitored systems to determine root causes of issues. That includes security issues. Use monitoring to survey the state of thousands of systems at a glance. Quickly trace connectivity between suspected points of entry and target databases and file systems. Check performance of critical systems via BSM, dashboards, and detailed graphs of key metrics, noting deviations from normal performance (called specification-based forensics).

Turn insight into action. Most mature IT monitoring platforms offer convenience features enabling direct access to monitored systems via SSH and embedded VNC or RDP clients, which is great for putting eyes on security issues, fast. Some IT monitoring platforms also permit remote execution (even automated execution) of scripts on monitored hosts: shutting down rogue processes, firewalling traffic types or IP ranges, forcing system restores from backups, or redirecting logs to central storage for analysis. The faster you can respond to suspected incidents, the faster you can mitigate, constrain damage and data loss, reduce eventual liability, and save money.
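One such remediation, sketched under stated assumptions: remotely inserting a firewall rule on a monitored host over SSH. The host name, SSH user, and iptables rule below are illustrative, not any platform’s built-in action.

```python
# Sketch of automated remediation: drop all traffic from a suspect source
# IP on a monitored host, via SSH. User, host, and rule are assumptions.
import subprocess

def build_block_cmd(host, bad_ip, user="opsview"):
    """Command line that inserts an iptables DROP rule for `bad_ip` on `host`."""
    return ["ssh", f"{user}@{host}",
            "sudo", "iptables", "-I", "INPUT", "-s", bad_ip, "-j", "DROP"]

def block_source(host, bad_ip, user="opsview"):
    """Run the rule remotely; returns the CompletedProcess for inspection."""
    return subprocess.run(build_block_cmd(host, bad_ip, user),
                          capture_output=True, text=True)

# block_source("web042.example.com", "203.0.113.7")  # example invocation
```

Keyed, audited SSH access (or the platform’s own agent) is the sensible transport here; ad-hoc credentials on jumpboxes are exactly the kind of brownfield secret the article warns about.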

Written by John Jainschigg, Content Strategy Lead at Opsview

Editor's Choice

Editor's Choice consists of the best articles written by third parties and selected by our editors. You can contact us at timothy.adler at stubbenedge.com
