
Enforcement Tools

Data controls include encryption, data loss prevention, and information rights management. Most enterprises implement enforcement tools to prevent sensitive information from leaving the network. These tools include security information and event management (SIEM) systems, data loss prevention (DLP) systems, network access control (NAC), gateways, and other hardware devices. This section covers these technologies and tools.

Security Information and Event Management (SIEM)

Audit controls such as security information and event management (SIEM) systems provide the technological means to show compliance and refine security controls. SIEM tools collect, correlate, and display data feeds that support response activities. SIEMs are a central element in demonstrating compliance with regulations such as SOX, GLBA, PCI DSS, FISMA, and HIPAA. SIEM output is also used proactively to detect emerging threats and improve overall security by defining events of interest (EOI) and resulting actions. The purpose of a SIEM is to turn a large amount of data into knowledge that can be acted upon. SIEMs are generally part of the overall security operations center (SOC) and have three basic functions:

  • Centrally managing security events

  • Correlating and normalizing events for context and alerting

  • Reporting on data gathered from various applications

Just one IDS sensor or log data source can generate more than 100,000 events each day.

Aggregation is the process by which SIEM systems combine similar events to reduce event volume. Log management aggregates data from many network sources and consolidates it so that crucial events are not missed. By default, events are usually aggregated based on the source IP, destination IP, and event ID. The purpose of aggregation is to reduce the event data load and improve efficiency. Conversely, if aggregation is incorrectly configured, important information could be lost. Confidence in this aggregated data is enhanced through techniques such as correlation, automated data filtering, and deduplication within the SIEM. Event aggregation alone is not enough to provide useful information in an expeditious manner. A common best practice is to use a correlation engine to automate threat detection and log analysis. The main goal of correlation is to build EOIs that can be flagged against other criteria or used to identify incidents. To create EOIs, the correlation engine applies the following techniques to the aggregated data:

  • Pattern matching

  • Anomaly detection

  • Boolean logic

  • A combination of Boolean logic and context-relevant data
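As a rough illustration of the default aggregation behavior described above, the following Python sketch collapses raw events on the (source IP, destination IP, event ID) key. The event fields and record layout are illustrative assumptions, not any particular SIEM's schema.

```python
from collections import defaultdict

def aggregate_events(events):
    """Collapse similar events into one record per (src, dst, event_id) key."""
    buckets = defaultdict(lambda: {"count": 0, "first_seen": None, "last_seen": None})
    for ev in events:
        key = (ev["src_ip"], ev["dst_ip"], ev["event_id"])
        b = buckets[key]
        b["count"] += 1  # one aggregated record instead of N duplicates
        if b["first_seen"] is None or ev["ts"] < b["first_seen"]:
            b["first_seen"] = ev["ts"]
        if b["last_seen"] is None or ev["ts"] > b["last_seen"]:
            b["last_seen"] = ev["ts"]
    return buckets

# Two failed-logon events from the same source collapse into one record.
events = [
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "event_id": 4625, "ts": 100},
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "event_id": 4625, "ts": 105},
    {"src_ip": "10.0.0.7", "dst_ip": "10.0.0.9", "event_id": 4624, "ts": 110},
]
agg = aggregate_events(events)
```

The same keyed structure is what makes misconfiguration risky: aggregate on too coarse a key, and distinct events merge and disappear from view.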

Finding the correct balance in correlation rules is often difficult. Correlation rules that try to catch all possible attacks generate excessive alerts and a high rate of false positives.

The SIEM facilitates and automates alert triage to notify analysts of immediate issues. Alerts can be sent via email but are most often sent to a dashboard. Because SIEM systems generate a large volume of alerts and notifications, they also provide data visualization tools. From a business perspective, reporting and alerting provide verification of continuous monitoring, auditing, and compliance. Event deduplication improves confidence in aggregated data while reducing data throughput and storage requirements.

Event deduplication is also important because it supports auditing and the collection of forensic data. The centralized log management and storage of SIEM systems provide validation for regulatory compliance storage or retention requirements. Regarding forensic data and regulatory compliance, WORM (write once, read many) drives keep log data protected so that evidence cannot be altered. WORM drives permanently protect administrative data. This security measure should be implemented when an administrator with access to logs is under investigation or when an organization must meet regulatory compliance requirements (such as Payment Card Industry Data Security Standard [PCI DSS] Requirement 10).

Some SIEM systems are good at ingesting and querying flow data both in real time and retrospectively. However, with real-time analysis, significant issues are associated with time, including time synchronization, time stamping, and report time lag. For example, if a report takes 45 minutes to run, the analyst is already 45 minutes behind real time before taking into consideration the time needed to read and analyze the results.

When designing a SIEM system, the volume of data generated for a single incident must be considered. SIEM systems must aggregate, correlate, and report output from devices such as firewalls, intrusion detection/prevention (IDS/IPS), access controls, and myriad network devices. Answering questions about how much data to log from critical systems is important when deciding to use a SIEM system. SIEMs have a high acquisition and maintenance cost. If daily events number in the millions and events are gathered from network devices, endpoints, servers, identity and access control systems, and application servers, a SIEM might be cost-effective. For smaller daily event volumes, free or more cost-effective tools should be considered.

Data Loss Prevention (DLP)

Data loss is a problem that all organizations face, but it can be especially challenging for global organizations that store a large volume of PII in different legal jurisdictions, because privacy issues differ by country, region, and state. Data loss prevention (DLP) is a way of detecting and preventing confidential data from being exfiltrated physically or logically from an organization, whether by accident or on purpose. DLP systems are designed to detect and prevent unauthorized use and transmission of confidential information, based on one of the three states of data: in use, in motion, or at rest. They enforce data security policies by providing centralized management for detecting and preventing the unauthorized use and transmission of data that the organization deems confidential. A well-designed DLP strategy allows control over sensitive data, reduces the cost of data breaches, and achieves greater insight into organizational data use. International organizations should ensure that they are in compliance with local privacy regulations before implementing DLP tools and processes.

Protection of data in use is considered to be an endpoint solution. In this case, the application is run on end user workstations or servers in the organization. Endpoint systems also can monitor and control access to physical devices such as mobile devices and tablets. Protection of data in transit is considered to be a network solution, and either a hardware or software solution is installed near the network perimeter to monitor and flag policy violations. Protection of data at rest is considered to be a storage solution and is generally a software solution that monitors how confidential data is stored.

When evaluating DLP solutions, the key content-filtering capabilities to look for are high performance, scalability, and the ability to accurately scan nearly any content type. High performance is necessary to keep end users from experiencing lag and delays, and the solution must scale readily as both traffic volume and bandwidth needs increase.
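As a minimal illustration of the content-scanning piece, the sketch below flags text that resembles a U.S. Social Security number or a 16-digit payment card number. The patterns and function names are illustrative assumptions; real DLP engines add validation (such as Luhn checks), document fingerprinting, and contextual analysis.

```python
import re

# Illustrative patterns only -- production DLP rules are far more robust.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # e.g., 123-45-6789
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),         # 16 digits, optional separators
}

def scan(text):
    """Return the sensitive-data categories detected in text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

scan("Employee SSN is 123-45-6789")   # flags "ssn"
scan("Quarterly revenue was up 4%")   # flags nothing
```

A scanner like this would sit in the endpoint agent (data in use), the network gateway (data in motion), or the storage crawler (data at rest), with the same rule set enforced centrally.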

With an endpoint solution, a user can be alerted to security policy violations before sensitive information leaves the desktop, as in the following examples:

  • Inadvertently emailing a confidential internal document to external recipients

  • Forwarding an email with sensitive information to unauthorized recipients inside or outside the organization

  • Sending attachments such as spreadsheets with PII to an external personal email account

  • Accidentally selecting Reply All and emailing a sensitive document to unauthorized recipients

USB flash drives, iPods, and other portable storage devices are pervasive in the workplace and pose a real threat. They can introduce viruses or malicious code to the network and can store sensitive corporate information. Sensitive information is often stored on thumb and external hard drives, which are then lost or stolen. DLP solutions allow policies for USB blocking, such as a policy that blocks copying any network information to removable media or one that blocks the use of unapproved USB devices.
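A USB-blocking policy of the kind described above can be sketched as an allowlist check. The vendor/product IDs and the policy model here are hypothetical, not any product's actual configuration.

```python
# Hypothetical allowlist of approved (vendor_id, product_id) pairs.
APPROVED_DEVICES = {("0x0781", "0x5581")}  # e.g., a corporate-issued drive

def usb_allowed(vendor_id, product_id, block_all_removable=False):
    """Return True if the device may mount under the current policy."""
    if block_all_removable:
        return False  # strictest policy: no removable media at all
    return (vendor_id, product_id) in APPROVED_DEVICES

usb_allowed("0x0781", "0x5581")   # approved corporate drive
usb_allowed("0x1234", "0xabcd")   # unapproved personal drive is blocked
```

The two policies from the paragraph map to the two branches: `block_all_removable` blocks copying to any removable media, while the allowlist blocks only unapproved devices.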

Many organizations store sensitive data in the cloud. DLP solutions have expanded from email and local devices to include corporate data stored in the cloud. The organization must know how the cloud is being utilized before making decisions on a DLP solution:

  • What files are being shared outside the organization

  • What files contain sensitive data

  • What abnormal events indicate a threat or compromise

DLP can help with the following issues in cloud implementations:

  • Data migration control

  • Data protection

  • Data leakage

Commonly deployed cloud services include Office 365, Salesforce, and Box. When implementing DLP in the cloud, different policies apply to different cloud services, while others are general cloud policies. For example, a general policy might center on device access control, whereas a Box-specific policy might center on file sharing.

DLP solutions are most successful in private or virtual private clouds. When using a public cloud, DLP solutions might not offer much value because of the lack of control, so an API-based approach is a better solution than an agent-based one. For example, if your DLP solution requires agents or certificates to be installed in cloud applications such as Dropbox or Google Drive, the application will interpret the agent as a man-in-the-middle attack and will not work properly. Best practices for mitigating threats related to data leakage in the cloud include active data monitoring, encryption, policy-based access controls, and centralized administration.

Network Access Control (NAC)

One of the most effective ways to protect the network from malicious hosts is to use network access control (NAC). NAC offers a method of enforcement that helps ensure that computers are properly configured. NAC systems are available as software packages or dedicated NAC appliances, although most are dedicated appliances that include both hardware and software. Some of the main uses for NAC follow:

  • Guest network services

  • Endpoint baselining

  • Identity-aware networking

  • Monitoring and containment

The premise behind NAC is to secure the environment by examining the user's machine and then granting (or denying) access based on the results. NAC is based on assessment and enforcement. For example, if a user's computer is missing patches and has no desktop firewall software installed, you can decide whether to limit its access to network resources. Any host machine that does not comply with your defined policy could be relegated to a remediation server or put on a guest VLAN. The basic components of NAC products follow:

  • Access requestor (AR): The AR is the device that requests access. Assessment of the device can be self-performed or delegated to another system.

  • Policy decision point (PDP): The PDP is the system that assigns a policy based on the assessment. The PDP determines what access should be granted and can be the NAC’s product-management system.

  • Policy enforcement point (PEP): The PEP is the device that enforces the policy. This device can be a switch, firewall, or router.
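The interplay of these components can be sketched as a simple policy decision function of the kind a PDP applies to an AR's reported posture. The posture fields, VLAN names, and rules below are illustrative assumptions, not any vendor's API.

```python
def decide_access(posture):
    """PDP sketch: map an access requestor's posture to a network segment."""
    if posture.get("guest"):
        return "guest-vlan"                       # visitor traffic is isolated
    if not posture.get("patches_current") or not posture.get("firewall_enabled"):
        return "remediation-vlan"                 # quarantined until compliant
    return "production-vlan"                      # healthy host gets full access

decide_access({"patches_current": True, "firewall_enabled": True})
decide_access({"patches_current": False, "firewall_enabled": True})
decide_access({"guest": True})
```

In a real deployment, the decision would be pushed to a PEP such as a switch or firewall, typically as a VLAN assignment or access-control entry.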

NAC systems can be integrated into the network in four ways:

  • Inline: Exists as an appliance in the line, usually between the access and the distribution switches

  • Out of band: Intervenes and performs an assessment as hosts come online, and then grants appropriate access

  • Switch-based: Works similarly to inline NAC, except that enforcement occurs on the switch itself

  • Host- or endpoint-based: Relies on an installed host agent to assess and enforce access policy

NAC implementations require design considerations such as an agent or agentless integration. For example, out-of-band designs might or might not use agents, and they can use 802.1X, VLAN steering, or IP subnets. In a NAC system that uses agents, devices are enrolled in the NAC system and an agent is installed on the device. The agent reports back to a NAC policy server. Agents provide detailed information about connected devices to enforce policies. An agent might permanently reside on end devices or it might be dissolvable. If the agent is dissolvable, it provides one-time authentication and then disappears after reporting information to the NAC. Because agents can be spoofed by malware, the organization needs to be vigilant about proper malware protection or should use an agentless NAC solution.

Agents perform more granular health checks on endpoints to ensure a greater level of compliance. When the health check is on a computer or laptop, it is often called a host health check. Health checks monitor availability and performance for proper hardware and application functionality.

Agentless solutions are mainly implemented through embedded code within an Active Directory domain controller. The NAC code verifies that the end device complies with the access policy when a user joins the domain, logs on to the domain, or logs off the domain. Active Directory scans cannot be scheduled, and the device is scanned only during these three actions. Agentless solutions can also be deployed through an intrusion prevention system.

Agentless solutions offer less functionality and require fewer resources. A good solution for large, diverse networks, or one in which BYOD is prevalent, is to combine both agent and agentless functionality, but use the agentless solution as a fallback. This is because agents often do not work with all devices and operating systems. An alternative might be to use a downloadable, dissolvable agent; however, some device incompatibility might still arise.

In addition to providing the capability to enforce security policy, contain noncompliant users, and mitigate threats, NAC offers business benefits. These include compliance, a better security posture, and operational cost management.

Gateways

Gateways perform many functions. At its simplest definition, a router is a gateway because it connects two different networks. Other types of gateways include mail, media, and API gateways. This section covers mail and media gateways.

Mail Gateways

Although the percentage of spam has been steadily decreasing in the past few years because of better legislative enforcement and improved products, spam is still an enormous problem for corporations. Cisco tracked spam using opt-in customer telemetry and reported that spam email accounts for 65 percent of all sent emails. Spam filters can consist of various filtering technologies, including content, header, blacklist, rule-based, permission, and challenge-response filters. Spam-filtering solutions can be deployed in a number of ways. The most common implementations are an onsite appliance such as a gateway, software installed on each individual device, or a hosted or cloud-based vendor solution.

Email security gateways prevent malicious emails from reaching their destinations. Spam-filtering products work by checking email messages when they arrive. The messages are then either directed to the user’s mailbox or quarantined based on a score value. When the spam score exceeds a certain threshold, the email is sent to the junk folder. In addition to the keyword-scanning methods, which include scoring systems for emails based on multiple criteria, spam filter appliances allow for checksum technology that tracks the number of times a particular message has appeared. They also conduct message authenticity checking, which uses multiple algorithms to verify the authenticity of a message. In addition, the appliance might perform file-type attachment blocking and scanning using the built-in antivirus protection.
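The score-based filtering just described can be sketched as follows: each matching rule adds points, and a message at or above the threshold is quarantined. The rule names, weights, and threshold are illustrative assumptions, not a real gateway's rule set.

```python
# Each rule: (name, predicate on the parsed message, score weight).
RULES = [
    ("subject_all_caps", lambda m: m["subject"].isupper(), 2.0),
    ("keyword_free",     lambda m: "free" in m["body"].lower(), 1.5),
    ("sender_spf_fail",  lambda m: not m.get("spf_pass", True), 3.0),
]
THRESHOLD = 5.0  # at or above this score, the message is quarantined

def classify(message):
    """Return ('quarantine' or 'inbox', total score) for a parsed message."""
    score = sum(weight for _, test, weight in RULES if test(message))
    return ("quarantine" if score >= THRESHOLD else "inbox", score)

classify({"subject": "WIN FREE CASH", "body": "Free money!", "spf_pass": False})
# scores 2.0 + 1.5 + 3.0 = 6.5, above the threshold, so it is quarantined
```

Checksum tracking, authenticity checks, and attachment scanning would feed additional rules into the same scoring pipeline.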

Besides spam filtering functions, email gateways can include additional client security controls such as email encryption, advanced content filtering, and DLP capabilities. These capabilities help protect the confidentiality and integrity of emails in transit, enforce regulatory compliance, and protect against data loss.

Media Gateways

Media gateways came about as a result of the convergence of telecommunications and data communications. Media gateways act as a bridge between different transmission technologies and add services to end-user connections. At the most basic level, a media gateway is a device that converts data from one format to another. One of the main functions of a media gateway is to convert between different transmission and coding techniques. Examples include a circuit switch, an IP gateway, and a channel bank. Media gateways work at the connectivity layer, serving as a crossing point between different networks where the desired transmission technology can be selected. For example, the media gateway might terminate channels from a circuit-switched network and stream media from a packet-switched network in an IP network. Data inputs such as audio and video are handled simultaneously.

In businesses, media gateways are used to convert analog communications to VoIP communications. When used in VoIP conversions, they have three main components:

  • Media gateway

  • Media gateway controller or softswitch

  • Signaling gateway

One of the best examples of a media gateway in use is getting broadband cable to phones and laptops. Cable providers such as Dish, Comcast, and Cox use media gateways to distribute content to subscribers throughout their households. Content distribution occurs through a gateway that converts the incoming broadband signal and delivers voice, video, and data services such as high definition and wireless codecs to consumer IP-connected devices.
