Host, Storage, Network, and Application Integration into a Secure Enterprise Architecture
This chapter covers CASP objective 5.1.
Organizations must securely integrate hosts, storage, networks, and applications. It is a security practitioner’s responsibility to ensure that the appropriate security controls are implemented and tested. But this isn’t the only step a security practitioner must take. Security practitioners must also:
- Secure data flows to meet changing business needs.
- Understand standards.
- Understand interoperability issues.
- Understand technical deployment models, including outsourcing, insourcing, managed services, and partnerships.
- Know how to segment and delegate a secure network.
- Analyze logical and physical deployment diagrams of all relevant devices.
- Design a secure infrastructure.
- Integrate secure storage solutions within the enterprise.
- Deploy enterprise application integration enablers.
All these points are discussed in detail in this chapter.
Secure Data Flows to Meet Changing Business Needs
Business needs of an organization may change and require that security devices or controls be deployed in a different manner to protect data flow. As a security practitioner, you should be able to analyze business changes, how they affect security, and then deploy the appropriate controls.
To protect data during transmission, security practitioners should identify confidential and private information. Once this data has been properly identified, the following analysis steps should occur:
- Determine which applications and services access the information.
- Document where the information is stored.
- Document which security controls protect the stored information.
- Determine how the information is transmitted.
- Analyze whether authentication is used when accessing the information.
  - If it is, determine whether the authentication information is securely transmitted.
  - If it is not, determine whether authentication can be used.
- Analyze enterprise password policies, including password length, password complexity, and password expiration.
- Determine whether encryption is used to transmit data.
  - If it is, ensure that the level of encryption is appropriate and that the encryption algorithm is adequate.
  - If it is not, determine whether encryption can be used.
  - Ensure that the encryption keys are protected.
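The password-policy portion of this analysis can be automated. The following is a minimal sketch of such a check; the specific policy values (12-character minimum, 90-day expiration) are hypothetical illustrations, not values prescribed by any standard:

```python
import re
from datetime import date, timedelta

# Hypothetical policy values for illustration only.
MIN_LENGTH = 12
MAX_AGE = timedelta(days=90)

def meets_complexity(password: str) -> bool:
    """Require lowercase, uppercase, a digit, and a symbol."""
    return all(re.search(p, password) for p in
               (r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"))

def check_password(password: str, last_changed: date, today: date) -> list:
    """Return a list of policy violations (empty list = compliant)."""
    violations = []
    if len(password) < MIN_LENGTH:
        violations.append("too short")
    if not meets_complexity(password):
        violations.append("insufficient complexity")
    if today - last_changed > MAX_AGE:
        violations.append("expired")
    return violations

print(check_password("Tr0ub4dor&3x!", date(2024, 1, 1), date(2024, 2, 1)))  # []
```

A real deployment would enforce these rules in the directory service itself; a script like this is useful only for auditing policy settings across systems.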
Security practitioners should adhere to the defense-in-depth principle to protect the confidentiality, integrity, and availability (CIA) of data across its entire life cycle. Applications and services should be analyzed to determine whether inadequate security controls are deployed and whether more secure alternatives can be used. Data at rest may require encryption for full protection, along with appropriate access control lists (ACLs) to ensure that only authorized users have access. For data in transit, secure protocols and encryption should be employed to prevent unauthorized users from intercepting and reading the data. The most secure level of authentication possible should be used in the enterprise, and appropriate password and account policies can protect against password attacks.
Finally, security practitioners should ensure that confidential and private information is isolated from other information, including locating the information on separate physical servers and isolating data using virtual LANs (VLANs). Disable all unnecessary services, protocols, and accounts on all devices. Make sure that all firmware, operating systems, and applications are kept up-to-date, based on the vendor recommendations and releases.
When new technologies are deployed based on the changing business needs of the organization, security practitioners should be diligent to ensure that they understand all the security implications and issues with the new technology. Deploying a new technology before proper security analysis has occurred can result in security breaches that affect more than just the newly deployed technology. Remember that changes are inevitable! How you analyze and plan for these changes is what will set you apart from other security professionals.
Standards
Standards describe how policies will be implemented within an organization. They are actions or rules that are tactical in nature, meaning they provide the steps necessary to achieve security. Just like policies, standards should be regularly reviewed and revised. Standards are usually established by a governing organization, such as the National Institute of Standards and Technology (NIST).
The following sections briefly discuss open standards, adherence to standards, competing standards, lack of standards, and de facto standards.
Open Standards
Open standards are standards that are open to the general public. The general public can provide feedback on the standards and may use the standards without purchasing any rights to the standards or organizational membership. It is important that subject matter and industry experts help guide the development and maintenance of these standards.
Adherence to Standards
Organizations may opt to adhere fully to open standards or to standards managed by a standards organization. Other organizations may choose to adopt only selected parts of a standard, depending on the industry. Remember that an organization should fully review any standard and analyze how its adoption will affect the organization.
Legal implications can arise if an organization ignores well-known standards. Neglecting to use standards to guide your organization’s security strategy, especially if others in your industry do, can significantly impact your organization’s reputation and standing.
Competing Standards
Competing standards most often arise between competing vendors. For example, Microsoft often establishes its own standards for authentication. Many times, its standards are based on an industry standard with slight modifications to suit Microsoft's needs. In contrast, Linux may implement industry standards, but because it is an open source operating system, changes may have been made along the way that do not fully align with the standards your organization needs to follow. Always compare competing standards to determine which best suits your organization's needs.
Lack of Standards
In some new technology areas, standards are not formulated yet. Do not let a lack of formal standards prevent you from providing the best security controls for your organization. If you can find similar technology that has formal adopted standards, test the viability of those standards for your solution. In addition, you may want to solicit input from subject matter experts (SMEs). A lack of standards does not excuse your organization from taking every precaution necessary to protect confidential and private data.
De Facto Standards
De facto standards are standards that are widely accepted but not formally adopted. De jure standards are standards that are based on laws or regulations and are adopted by international standards organizations. De jure standards should take precedence over de facto standards. If possible, your organization should adopt security policies that implement both de facto and de jure standards.
Let’s look at an example. Suppose that a chief information officer’s (CIO’s) main objective is to deploy a system that supports the 802.11r standard, which will help wireless VoIP devices in moving vehicles. However, the 802.11r standard has not been formally ratified. The wireless vendor’s products do support 802.11r as it is currently defined. The administrators have tested the product and do not see any security or compatibility issues; however, they are concerned that the standard is not yet final. The best way to proceed would be to purchase the equipment now, as long as its firmware will be upgradable to the final 802.11r standard.
Interoperability Issues
When integrating solutions into a secure enterprise architecture, security practitioners must ensure that they understand all the interoperability issues that can occur with legacy systems/current systems, applications, and in-house developed versus commercial versus commercial customized applications.
Legacy Systems/Current Systems
Legacy systems are old technologies, computers, or applications that are considered outdated but provide a critical function in the enterprise. Often the vendor no longer supports the legacy systems, meaning that no future updates to the technology, computer, or application will be provided. It is always best to replace these systems as soon as possible because of the security issues they introduce. However, sometimes these systems must be retained because of the critical function they provide.
Some guidelines when retaining legacy systems include:
- If possible, implement the legacy system in a protected network or demilitarized zone (DMZ).
- Limit physical access to the legacy system to administrators.
- If possible, deploy the legacy application on a virtual computer.
- Employ access control lists (ACLs) to protect the data on the system.
- Deploy the highest-level authentication and encryption mechanisms possible.
Let’s look at an example. Suppose an organization has a legacy customer relationship application that it needs to retain. The application requires the Windows 2000 operating system (OS), and the vendor no longer supports the application. The organization could deploy a Windows 2000 virtual machine (VM) and move the application to that VM. Users needing access to the application could use Remote Desktop to access the VM and the application.
Let’s look at a more complex example. Say that an administrator replaces servers whenever budget money becomes available. Over the past several years, the company has accumulated 20 servers and 50 desktops from five different vendors. The management challenges and risks associated with this style of technology life cycle management include increased failure rates of aging servers, OS variances, patch availability, and the difficulty of restoring backups to dissimilar hardware.
Any application installed may require certain hardware, software, or other criteria that the organization does not use. However, with recent advances in virtual technology, the organization can implement a virtual machine that fulfills the criteria for the application through virtualization. For example, an application may require a certain screen resolution or graphics driver that is not available on any physical computers in the enterprise. In this case, the organization could deploy a virtual machine that includes the appropriate screen resolution or driver so that the application can be successfully deployed.
Keep in mind that some applications may require older versions of operating systems that are no longer available. In recent versions of Windows, you can choose to deploy an application in compatibility mode by using the Compatibility tab of the executable file's Properties dialog, as shown in Figure 16-1.
Figure 16-1 Compatibility Tab
In-House Developed Versus Commercial Versus Commercial Customized Applications
Applications can be developed in-house or purchased commercially. Applications that are developed in-house can be completely customized to the organization, provided that developers have the necessary skills, budget, and time. Commercial applications may provide customization options to the organization. However, usually the customization is limited.
Organizations should fully research their options when a new application is needed. Once an organization has documented its needs, it can compare them to all the commercially available applications to see if any of them will work. It is usually more economical to purchase a commercial solution than to develop an in-house solution. However, each organization needs to fully assess the commercial application costs versus in-house development costs.
Commercial software is well known and widely available and is commonly referred to as commercial off-the-shelf (COTS) software. Information concerning vulnerabilities and viable attack patterns is typically shared within the IT community. This means that using commercial software can introduce new security risks in the enterprise. Also, it is difficult to verify the security of commercial software code because the source is not available to customers in most cases.
Technical Deployment Models
To integrate hosts, storage solutions, networks, and applications into a secure enterprise, an organization may use various technical deployment models, including outsourcing, insourcing, managed services, and partnerships. The following sections discuss cloud and virtualization considerations and hosting options, virtual machine vulnerabilities, secure use of on-demand/elastic cloud computing, data remnants, data aggregation, and data isolation.
Cloud and Virtualization Considerations and Hosting Options
Cloud computing allows enterprise assets to be deployed without the end user knowing where the physical assets are located or how they are configured. Virtualization involves creating a virtual device on a physical resource; physical resources can hold more than one virtual device. For example, you can deploy multiple virtual computers on a Windows computer. But keep in mind that each virtual machine will consume some of the resources of the host machine, and the configuration of the virtual machine cannot exceed the resources of the host machine.
For the CASP exam, you must understand public, private, hybrid, community, multi-tenancy, and single-tenancy cloud options.
A public cloud is the standard cloud computing model, where a service provider makes resources available to the public over the Internet. Public cloud services may be free or may be offered on a pay-per-use model. An organization needs to have a business or technical liaison responsible for managing the vendor relationship but does not necessarily need a specialist in cloud deployment. Vendors of public cloud solutions include Amazon, IBM, Google, and Microsoft. In a public cloud model, subscribers can add and remove resources as needed, based on their subscription.
A private cloud is a cloud computing model where a private organization implements a cloud in its internal enterprise, and that cloud is used by the organization’s employees and partners. Private cloud services require an organization to employ a specialist in cloud deployment to manage the private cloud.
A hybrid cloud is a cloud computing model where an organization provides and manages some resources in-house and has others provided externally via a public cloud. This model requires a relationship with the service provider as well as an in-house cloud deployment specialist. Rules need to be defined to ensure that a hybrid cloud is deployed properly. Confidential and private information should be limited to the private cloud.
A community cloud is a cloud computing model where the cloud infrastructure is shared among several organizations from a specific group with common computing needs. In this model, agreements should explicitly define the security controls that will be in place to protect the data of each organization involved in the community cloud and how the cloud will be administered and managed.
A multi-tenancy model is a cloud computing model where multiple organizations share the resources. This model allows the service providers to manage the resource utilization more efficiently. In this model, organizations should ensure that their data is protected from access by other organizations or unauthorized users. In addition, organizations should ensure that the service provider will have enough resources for the future needs of the organization. If multi-tenancy models are not properly managed, one organization can consume more than its share of resources, to the detriment of the other organizations involved in the tenancy.
A single-tenancy model is a cloud computing model where a single tenant uses a resource. This model ensures that the tenant organization’s data is protected from other organizations. However, this model is more expensive than the multi-tenancy model.
Vulnerabilities Associated with a Single Physical Server Hosting Multiple Companies’ Virtual Machines
In some virtualization deployments, a single physical server hosts multiple organizations’ VMs. All of the VMs hosted on a single physical computer must share the resources of that physical server. If the physical server crashes or is compromised, all of the organizations that have VMs on that physical server are affected. User access to the VMs should be properly configured, managed, and audited. Appropriate security controls, including antivirus, antimalware, access control lists (ACLs), and auditing, must be implemented on each of the VMs to ensure that each one is properly protected. Other risks to consider include physical server resource depletion, network resource performance, and traffic filtering between virtual machines.
Driven mainly by cost, many companies outsource to cloud providers computing jobs that require a large amount of processor cycles for a short duration. This situation allows a company to avoid a large investment in computing resources that will be used for only a short time. Assuming that the provisioned resources are dedicated to a single company, the main vulnerability associated with on-demand provisioning is traces of proprietary data that can remain on the virtual machine and may be exploited.
Let’s look at an example. Say that a security architect is seeking to outsource company server resources to a commercial cloud service provider. The provider under consideration has a reputation for poorly controlling physical access to data centers and has been the victim of social engineering attacks. The service provider regularly assigns VMs from multiple clients to the same physical resource. When conducting the final risk assessment, the security architect should take into consideration the likelihood that a malicious user will obtain proprietary information by gaining local access to the hypervisor platform.
Vulnerabilities Associated with a Single Platform Hosting Multiple Companies’ Virtual Machines
In some virtualization deployments, a single platform hosts multiple organizations’ VMs. If all of the servers that host VMs use the same platform, attackers will find it much easier to attack the other host servers once the platform is discovered. For example, if all physical servers use VMware to host VMs, any identified vulnerabilities for that platform could be used on all host computers. Other risks to consider include misconfigured platforms, separation of duties, and application of security policy to network interfaces.
If an administrator wants to virtualize the company’s web servers, application servers, and database servers, the following should be done to secure the virtual host machines: only access hosts through a secure management interface and restrict physical and network access to the host console.
Secure Use of On-demand/Elastic Cloud Computing
On-demand, or elastic, cloud computing allows administrators to increase or decrease the resources utilized based on organizational needs. As demands increase, the costs increase. Therefore, it is important that resource allocation be closely monitored and managed to ensure that the organization is not paying for more resources than needed. Administrators should always use secure tools (such as Secure Shell) and encryption to connect to the host when allocating or deallocating resources.
Data Remnants
Data remnants are data left behind on a computer or another resource when that resource is no longer used. The best way to protect this data is to employ some sort of data encryption: if data is encrypted, it cannot be recovered without the original encryption key. If resources, especially hard drives, are reused frequently, an unauthorized user may be able to access data remnants.
Administrators must understand the kind of data that is stored on physical drives. This helps them determine whether data remnants should be a concern. If the data stored on a drive is not private or confidential, the organization may not be concerned about data remnants. However, if the data stored on the drive is private or confidential, the organization may want to implement asset reuse and disposal policies.
Data Aggregation
Data aggregation allows data from multiple resources to be queried and compiled together into a summary report. The account used to access the data needs to have appropriate permissions on all of the domains and servers involved. In most cases, these types of deployments will incorporate a centralized data warehousing and mining solution on a dedicated server.
Data Isolation
Data isolation in databases prevents data from being corrupted by two concurrent operations. Data isolation is used in cloud computing to ensure that tenant data in a multi-tenant solution is isolated from other tenants’ data, using a tenant ID in the data labels. Trusted login services are usually used as well. In both of these deployments, data isolation should be monitored to ensure that data is not corrupted. In most cases, some sort of transaction rollback should be employed to ensure that proper recovery can be made.
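The tenant-ID labeling described above can be sketched with the standard library's sqlite3 module. The table and column names here are hypothetical; the point is that every query is scoped to the caller's tenant ID:

```python
import sqlite3

# Hypothetical schema: every row carries a tenant_id label.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (tenant_id TEXT, payload TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)",
                 [("tenant_a", "a-data"), ("tenant_b", "b-data")])

def fetch_for_tenant(tenant_id: str) -> list:
    """Filter every query by the caller's tenant ID so one tenant
    can never read another tenant's rows."""
    rows = conn.execute(
        "SELECT payload FROM records WHERE tenant_id = ?", (tenant_id,))
    return [payload for (payload,) in rows]

print(fetch_for_tenant("tenant_a"))  # ['a-data']
```

Production multi-tenant stores enforce this filter in a shared data-access layer (or with row-level security in the database itself) rather than trusting each query author to remember it.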
Resource Provisioning and Deprovisioning
One of the benefits of many cloud deployments is the ability to provision and deprovision resources as needed. This includes provisioning and deprovisioning users, servers, virtual devices, and applications. Depending on the deployment model used, your organization may have an internal administrator who handles these tasks, the cloud provider may handle them, or you may have a hybrid solution where the tasks are split between the internal administrator and cloud provider personnel. Remember that any solution that relies on the cloud provider for provisioning and deprovisioning may not be ideal, because provider personnel may not be immediately available to perform the tasks you need.
When provisioning (or creating) user accounts, it is always best to use an account template. This ensures that all of the appropriate password policies, user permissions, and other account settings are applied to the newly created account.
When deprovisioning a user account, you should consider first disabling the account. Once an account is deleted, it may be impossible to access files, folders, and other resources that are owned by that user account. If the account is disabled instead of deleted, the administrator can reenable the account temporarily to access the resources owned by that account.
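The disable-before-delete reasoning above can be illustrated with a minimal sketch. The in-memory account store and field names are hypothetical, standing in for a real directory service:

```python
# Hypothetical in-memory account store illustrating disable-before-delete.
accounts = {"jsmith": {"enabled": True, "owns": ["report.docx"]}}

def deprovision(username: str) -> None:
    """Disable rather than delete: ownership data stays recoverable."""
    accounts[username]["enabled"] = False

def reenable(username: str) -> None:
    """Temporarily restore access to resources the account owns."""
    accounts[username]["enabled"] = True

deprovision("jsmith")
assert accounts["jsmith"]["owns"] == ["report.docx"]  # still reachable
reenable("jsmith")  # administrator can recover owned resources, then disable again
```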
An organization should adopt a formal procedure for requesting the creation, disablement, or deletion of user accounts. In addition, administrators should monitor account usage to ensure that accounts are active.
Provisioning and deprovisioning servers should be based on organizational need and performance statistics. To determine when a new server should be provisioned, administrators must monitor the current usage of the server resources. Once a predefined threshold has been reached, procedures should be put in place to ensure that new server resources are provisioned. When those resources are no longer needed, procedures should also be in place to deprovision the servers. Once again, monitoring is key.
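The threshold-driven decision described above can be expressed as a simple policy function. The 80%/30% utilization thresholds are hypothetical examples, not recommended values:

```python
# Hypothetical thresholds: provision above 80% utilization,
# deprovision below 30%, otherwise hold steady.
PROVISION_AT, DEPROVISION_AT = 0.80, 0.30

def provisioning_action(cpu_utilization: float) -> str:
    """Map a monitored utilization reading to a provisioning decision."""
    if cpu_utilization >= PROVISION_AT:
        return "provision"
    if cpu_utilization <= DEPROVISION_AT:
        return "deprovision"
    return "hold"

print(provisioning_action(0.85))  # "provision"
```

Real autoscaling policies also dampen the decision (for example, requiring the threshold to be breached for several consecutive readings) so that momentary spikes do not trigger churn.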
Virtual devices consume resources of the host machine. For example, the memory on a physical machine is shared among all the virtual devices that are deployed on that physical machine. Administrators should provision new virtual devices when organizational need demands. However, it is just as important that virtual devices be deprovisioned when they are no longer needed to free up the resources for other virtual devices.
Organizations often need a variety of applications. It is important to maintain the licenses for any commercial applications that are used. When an organization no longer needs applications, administrators must be notified to ensure that licenses are not renewed or that they are renewed at a lower level if usage has simply decreased.
Securing Virtual Environments, Services, Applications, Appliances, and Equipment
When an organization deploys virtual environments, administrators and security practitioners must ensure that the virtual environments are secured in the same manner as any physical deployments of that type. For example, a virtual Windows machine needs to have the same security controls as the host server, including antivirus/antimalware software, ACLs, operating system updates, and so on. This also applies to services, applications, appliances, and equipment. You should ensure that all of the security controls are deployed as spelled out in the organization’s security policies.
Design Considerations During Mergers, Acquisitions, and Demergers/Divestitures
When organizations merge, are acquired, or split, the enterprise design must be considered. In the case of mergers or acquisitions, each separate organization has its own resources, infrastructure, and model. As a security practitioner, it is important that you ensure that both organizations' structures are analyzed thoroughly before deciding how to merge them. For demergers, you will likely need to help determine how best to divide the resources. The security of data should always be a top concern.
Network Secure Segmentation and Delegation
An organization may need to segment its network to improve network performance, to protect certain traffic, or for a number of other reasons. Segmenting the enterprise network is usually achieved through the use of routers, switches, and firewalls. A network administrator may decide to implement VLANs using switches or deploy a demilitarized zone (DMZ) using firewalls. No matter how you choose to segment the network, you should ensure that the interfaces that connect the segments are as secure as possible. This may mean closing ports, implementing MAC filtering, and using other security controls. In a virtualized environment, you can implement separate physical trust zones. When the segments or zones are created, you can delegate separate administrators who are responsible for managing the different segments or zones.
Logical and Physical Deployment Diagrams of Relevant Devices
For the CASP exam, security practitioners must understand two main types of enterprise deployment diagrams: logical deployment diagrams and physical deployment diagrams. A logical deployment diagram shows the architecture, including the domain architecture (the existing domain hierarchy, names, and addressing scheme), server roles, and trust relationships. A physical deployment diagram shows the details of physical communication links (cable length, grade, and wiring paths); servers (computer name, IP address if static, server role, and domain membership); device locations (printers, hubs, switches, modems, routers, bridges, and proxies); communication links and the available bandwidth between sites; and the number of users, including mobile users, at each site. A logical diagram usually contains less information than a physical diagram. While you can often create a logical diagram from a physical diagram, it is nearly impossible to create a physical diagram from a logical one.
An example of a logical network diagram is shown in Figure 16-2.
Figure 16-2 Logical Network Diagram
As you can see, the logical diagram shows only a few of the servers in the network, the services they provide, their IP addresses, and their DNS names. The relationships between the different servers are shown by the arrows between them.
An example of a physical network diagram is shown in Figure 16-3.
Figure 16-3 Physical Network Diagram
A physical network diagram gives much more information than a logical one, including the cabling used, the devices on the network, the pertinent information for each server, and other connection information.
Secure Infrastructure Design
As part of the CASP exam, security practitioners must be able to analyze a scenario and decide on the best placement for devices, servers, and applications. To better understand this, it is necessary to understand the different network designs that can be used. Network designs may include demilitarized zones (DMZs), VLANs, virtual private networks (VPNs), and wireless networks. This section shows examples of how these areas look. It also discusses situations in which you may need to decide where to deploy certain devices.
A DMZ contains servers that must be accessed by the general public or partners over an Internet connection. DMZs can also be referred to as screened subnets. Placing servers on a DMZ protects the internal network from the traffic that the servers on the DMZ generate. Several examples of networks with DMZs are shown in Figure 16-4.
Figure 16-4 DMZ Examples
In DMZ deployments, you can configure the firewalls to allow or deny certain traffic based on a variety of settings, including IP address, MAC address, port number, or protocol. Often web servers and external-facing DNS servers are deployed on a DMZ, with database servers and internal DNS servers being deployed on the internal network. If this is the case, then it may be necessary to configure the appropriate rules on the firewall to allow the web server to communicate with the database server and allow the external-facing DNS server to communicate with the internal DNS servers. Remember that you can also configure access rules on routers. It is important that you deploy access rules on the appropriate devices. For example, if you deny certain types of traffic on the Internet-facing router, all of that type of traffic will be unable to leave or enter the DMZ or internal network. Always analyze where the rules should be applied before creating them.
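The rule-analysis advice above can be sketched as first-match rule evaluation, the model most firewalls use. The zone names, ports, and rule order here are illustrative assumptions, not a recommended policy:

```python
# A minimal sketch of first-match firewall rule evaluation.
# Zone names and ports are hypothetical examples.
RULES = [
    # (source, destination, port, action)
    ("dmz-web", "internal-db", 1433, "allow"),   # web server -> database server
    ("dmz-dns", "internal-dns", 53, "allow"),    # external DNS -> internal DNS
    ("any", "any", None, "deny"),                # implicit deny for everything else
]

def evaluate(source: str, dest: str, port: int) -> str:
    """Return the action of the first rule that matches the traffic."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if ((rule_src in (source, "any")) and
                (rule_dst in (dest, "any")) and
                (rule_port in (port, None))):
            return action
    return "deny"

print(evaluate("dmz-web", "internal-db", 1433))  # "allow"
print(evaluate("dmz-web", "internal-dns", 53))   # "deny": no rule permits this pair
```

Because the first match wins, rule order matters: the same rules with the deny-all entry listed first would block everything, which is why placement analysis must precede rule creation.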
A VLAN is a virtual network that is created using a switch. All computers and devices that are connected to a switch can be divided into separate VLANs, based on organizational needs. An example of a network with VLANs is shown in Figure 16-5.
Figure 16-5 VLAN Example
In this type of deployment, each switch can have several VLANs. A single VLAN can exist on a single switch or can span multiple switches. Configuring VLANs helps manage the traffic on the switch. If you have a legacy system that is not scheduled to be decommissioned for two years and requires the use of the standard Telnet protocol, moving the system to a secure VLAN would provide the security needed until the system can be decommissioned.
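The isolation property that makes a VLAN useful for that legacy Telnet system can be sketched as a port-to-VLAN mapping. The port names and VLAN IDs are hypothetical:

```python
# Hypothetical VLAN plan: the legacy Telnet system sits alone on VLAN 99,
# while general-purpose hosts share VLAN 10.
port_vlan = {"Gi0/1": 10, "Gi0/2": 10, "Gi0/3": 99}  # switch port -> VLAN ID

def same_broadcast_domain(port_a: str, port_b: str) -> bool:
    """Hosts communicate directly at layer 2 only within the same VLAN;
    traffic between VLANs must pass through a router or layer 3 switch,
    where ACLs can restrict it."""
    return port_vlan[port_a] == port_vlan[port_b]

print(same_broadcast_domain("Gi0/1", "Gi0/3"))  # False: legacy system is isolated
```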
A VPN allows external devices to access an internal network by creating a tunnel over the Internet. Traffic that passes through the VPN tunnel is encrypted and protected. An example of a network with a VPN is shown in Figure 16-6.
Figure 16-6 VPN Example
In a VPN deployment, only computers that have the VPN client and are able to authenticate will be able to connect to the internal resources through the VPN concentrator.
A wireless network allows devices to connect to the internal network through a wireless access point. An example of a network that includes a wireless access point is shown in Figure 16-7.
Figure 16-7 Wireless Network Example
In the deployment shown in Figure 16-7, some devices connect to the wired network, while others connect to the wireless network. The wireless network can be protected using a variety of mechanisms, including disabling service set identifier (SSID) broadcasts, enabling WPA2, and implementing MAC filtering. For some organizations, it may be necessary to implement more than one wireless access point. If this occurs and all the access points use the same 802.11 implementation, then the access points will need to be configured to use different channels within that implementation. In addition, it may be necessary to adjust the signal strength of the access points to limit the coverage area.
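For 2.4 GHz 802.11b/g deployments, the non-overlapping channels are 1, 6, and 11, so neighboring access points are assigned different ones. A minimal round-robin sketch (the AP names are hypothetical):

```python
# Non-overlapping 2.4 GHz channels for 802.11b/g.
NON_OVERLAPPING = [1, 6, 11]

def assign_channels(access_points: list) -> dict:
    """Round-robin channel assignment so adjacent APs differ. A real
    site survey would also account for AP placement, neighboring
    networks, and signal strength."""
    return {ap: NON_OVERLAPPING[i % len(NON_OVERLAPPING)]
            for i, ap in enumerate(access_points)}

print(assign_channels(["AP-1", "AP-2", "AP-3"]))
# {'AP-1': 1, 'AP-2': 6, 'AP-3': 11}
```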
Finally, when deciding where to place certain devices, you need to consider whether a device needs to be stored in a secured location. For example, routers, firewalls, switches, server racks, and servers are usually stored in rooms or data centers that have extra physical security controls in addition to the regular physical building security. Always consider the physical security needs when deploying any new devices.
Storage Integration (Security Considerations)
When integrating storage solutions into an enterprise, security practitioners should be involved in the design and deployment to ensure that security requirements are addressed.
The following are some of the security measures you should consider when integrating storage:
- Limit physical access to the storage solution.
- Create a private network to manage the storage solution.
- Implement ACLs for all data, paths, subnets, and networks.
- Implement ACLs at the port level, if possible.
- Implement multi-factor authentication.
Security practitioners should also ensure that the organization adopts appropriate storage security policies so that storage administrators treat the security of storage solutions as a priority.
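The ACL recommendations above can be illustrated with a minimal LUN-masking sketch: each logical unit (LUN) has an access list of initiator names, and any initiator not on the list is denied. The LUN names, iSCSI qualified names (IQNs), and function are hypothetical examples, not a real storage array's API:

```python
# Hypothetical LUN-masking table: LUN -> set of permitted initiators.
lun_acl = {
    "LUN0": {"iqn.2024-01.com.example:host-a"},
    "LUN1": {"iqn.2024-01.com.example:host-a",
             "iqn.2024-01.com.example:host-b"},
}

def access_allowed(initiator: str, lun: str) -> bool:
    """Default-deny check: the initiator must appear in the LUN's ACL."""
    return initiator in lun_acl.get(lun, set())
```

Note the default-deny behavior: an unknown LUN or an unlisted initiator always results in a denial, which is the posture the checklist above is aiming for.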
Enterprise Application Integration Enablers
Enterprise application integration enablers ensure that applications and services in an enterprise are able to communicate as needed. For the CASP exam, the primary concerns are understanding which enabler is needed in a particular situation or scenario and ensuring that the solution is deployed in the most secure manner possible. The solutions that you must understand include customer relationship management (CRM); enterprise resource planning (ERP); governance, risk, and compliance (GRC); enterprise service bus (ESB); service-oriented architecture (SOA); Directory Services; Domain Name System (DNS); configuration management database (CMDB); and content management systems (CMSs).
Customer relationship management (CRM) identifies customers and stores all customer-related data, particularly contact information and data on any direct contacts with customers. The security of CRM is vital to an organization. In most cases, access to the CRM is limited to sales and marketing personnel and management. If remote access to CRM is required, you should deploy a VPN or similar solution to ensure that the CRM data is protected.
Enterprise resource planning (ERP) collects, stores, manages, and interprets data from product planning, product cost, manufacturing or service delivery, marketing/sales, inventory management, shipping, payment, and any other business processes. ERP is accessed by personnel for reporting purposes. ERP should be deployed on a secured internal network or DMZ. When deploying ERP, you might face objections because some departments may not want to share their process information with other departments.
Governance, risk, and compliance (GRC) coordinates information and activity across these three areas to be more efficient, to enable information sharing and reporting, and to avoid waste. This integration improves the overall security posture of any organization. However, the information stored in GRC is tied closely to the organization’s security. Access to this system should be tightly controlled.
An enterprise service bus (ESB) implements communication between mutually interacting software applications in a service-oriented architecture (SOA). It allows SOAP, Java, .NET, and other applications to communicate. An ESB solution is usually deployed on a DMZ to allow communication with business partners.
ESB is the most suitable solution for providing an event-driven, standards-based, and secure software architecture.
Service-oriented architecture (SOA) uses software pieces to provide application functionality as services to other applications. A service is a single unit of functionality. Services are combined to provide the entire functionality needed. This architecture often intersects with web services.
Let’s look at an SOA scenario. Suppose a database team suggests deploying an SOA-based system across the enterprise. The chief information officer (CIO) decides to consult the security manager about the risk implications for adopting this architecture. The security manager should present to the CIO two concerns for the SOA system: Users and services are distributed, often over the Internet, and SOA abstracts legacy systems as services, such as web services, which are often exposed to outside threats.
Directory Services stores, organizes, and provides access to information in a computer operating system’s directory. With Directory Services, users can access a resource by using the resource’s name instead of its IP or MAC address. Most enterprises implement an internal Directory Services server that handles any internal requests. This internal server communicates with a root server on a public network or with an externally facing server that is protected by a firewall or other security device to obtain information on any resources that are not on the local enterprise network. Active Directory, DNS, and LDAP are examples of directory services.
Domain Name System (DNS) provides a hierarchical naming system for computers, services, and any resources connected to the Internet or a private network. You should enable Domain Name System Security Extensions (DNSSEC), which digitally signs DNS records so that resolvers can verify that the DNS data they receive is authentic and has not been altered in transit. Transaction Signature (TSIG) is a related cryptographic mechanism that uses a shared secret key to authenticate DNS transactions, such as zone transfers and dynamic updates. For example, a TSIG record can be used to validate a DNS client before the server accepts an automatic update to the client’s resource records when its IP address or hostname changes.
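A TSIG-protected zone transfer might be configured in BIND along the following lines. The key name, secret, zone name, and file paths are hypothetical placeholders; this is a sketch of the technique, not a production configuration:

```
key "xfer-key" {
    algorithm hmac-sha256;
    secret "PLACEHOLDER-BASE64-SECRET==";   // generate with tsig-keygen
};

zone "example.com" {
    type master;
    file "db.example.com";
    // Only secondaries presenting a valid TSIG signature may transfer the zone.
    allow-transfer { key "xfer-key"; };
};
```

Because the transfer is authorized by a cryptographic signature rather than a source IP address alone, a spoofed address is not sufficient to pull the zone.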
As a security measure, you can configure internal DNS servers to communicate only with root servers, which prevents them from communicating with any other external DNS servers.
The Start of Authority (SOA) record contains the information regarding a DNS zone’s authoritative server. A DNS record’s Time to Live (TTL) determines how long a DNS record can be cached before it needs to be refreshed. When a record’s TTL expires, the record is removed from the DNS cache. Poisoning the DNS cache involves inserting false records into a resolver’s cache. If you use a longer TTL, the resource record is refreshed less frequently and is therefore less likely to be poisoned.
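The TTL behavior described above can be sketched as a minimal cache: a record is served until its TTL expires, after which it is evicted and must be fetched again. The class and names are hypothetical; real resolvers also handle negative caching and per-record-type entries:

```python
import time

class DnsCache:
    """Minimal sketch of a TTL-respecting DNS cache."""

    def __init__(self):
        self._records = {}  # name -> (address, expiry_time)

    def put(self, name, address, ttl, now=None):
        now = time.time() if now is None else now
        self._records[name] = (address, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._records.get(name)
        if entry is None or now >= entry[1]:
            # TTL expired (or never cached): evict and force a fresh lookup.
            self._records.pop(name, None)
            return None
        return entry[0]
```

A longer TTL widens the window during which the cached answer is served without a new query, which is why it reduces the number of refresh opportunities an attacker could try to poison.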
Let’s look at a security issue that involves DNS. An IT administrator installs new DNS name servers that host the company mail exchanger (MX) records and resolve the web server’s public address. To secure the zone transfer between the DNS servers, the administrator uses only server ACLs. However, because ACLs filter on source IP address alone, any secondary DNS servers would still be susceptible to IP spoofing attacks.
Another scenario could occur when a security team determines that someone from outside the organization has obtained sensitive information about the internal organization by querying the company’s external DNS server. The security manager should address the problem by implementing a split DNS server, allowing the external DNS server to contain only information about domains that the outside world should be aware of, while the internal DNS server maintains authoritative records for internal systems.
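One common way to implement split DNS is with BIND views, which serve different zone data depending on who is asking. The network ranges, zone name, and file names below are hypothetical placeholders illustrating the structure:

```
view "internal" {
    match-clients { 10.0.0.0/8; };        // internal clients only
    zone "example.com" {
        type master;
        file "db.example.com.internal";   // full internal records
    };
};

view "external" {
    match-clients { any; };               // everyone else
    zone "example.com" {
        type master;
        file "db.example.com.external";   // public-facing records only
    };
};
```

Internal clients resolve the complete zone, while outside queriers see only the records the organization intends to publish, closing the information-leak path described in the scenario.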
A configuration management database (CMDB) keeps track of the state of assets, such as products, systems, software, facilities, and people, as they exist at specific points in time, as well as the relationships between such assets. The IT department typically uses CMDBs as data warehouses.
A content management system (CMS) publishes, edits, modifies, organizes, deletes, and maintains content from a central interface. This central interface allows users to quickly locate content. Because edits occur from this central location, it is easy for users to view the latest version of the content. Microsoft SharePoint is an example of a CMS.