In this sample chapter from CISSP Cert Guide, 4th Edition, learn how to identify and classify information and assets, review information and asset handling requirements, explore the data life cycle, and more.
This chapter covers the following topics:
Asset Security Concepts: Concepts discussed include asset and data policies, data quality, and data documentation and organization.
Identify and Classify Information and Assets: Classification topics discussed include data and asset classification, sensitivity and criticality, private sector classifications, and military and government classifications.
Information and Asset Handling Requirements: Topics include marking, labeling, storing, and destruction.
Provision Resources Securely: Topics include how to determine and document information and asset ownership, asset inventory, and asset management.
Data Life Cycle: Components include the data life cycle, databases, data audit, data roles, data collection, data location, data maintenance, data retention, data remanence, collection limitation, and data destruction.
Asset Retention: Retention concepts discussed include media, hardware, and personnel retention and asset retention terms.
Data Security Controls: Topics include data security, data states, data access and sharing, data storage and archiving, baselines, scoping and tailoring, standards selections, and data protection methods.
Assets are any entities that are valuable to an organization and include tangible and intangible assets. As mentioned in Chapter 1, “Security and Risk Management,” tangible assets include computers, facilities, supplies, and personnel. Intangible assets include intellectual property, data, and organizational reputation. All assets in an organization must be protected to ensure the organization’s future success. Although securing some assets is as easy as locking them in a safe, other assets require more advanced security measures. The most valuable asset of any organization is its data.
The Asset Security domain addresses a broad array of topics, including information and asset identification and classification, information and asset handling, information and asset ownership, asset inventory and asset management, the data life cycle, asset retention, and data security controls and compliance requirements. This domain carries an average weight of 10 percent of the exam, the lowest weight of all the domains.
A security professional must be concerned with all aspects of asset security. The most important factor in determining the controls used to ensure asset security is an asset’s value. Although some assets in the organization may be considered more important because they have greater value, you should ensure that no assets are forgotten. This chapter covers all the aspects of asset security that you, as an IT security professional, must understand.
Asset Security Concepts
Asset security concepts that you must understand include
Asset and data policies
Data quality
Data documentation and organization
Asset and Data Policies
As a security professional, you should ensure that, at a minimum, your organization implements a data policy that defines long-term goals for data management and asset policies that define long-term goals for each asset type. In some cases, each asset may need its own defined policy to ensure that it is properly administered. Business units must define asset and data policies for the assets and data they own, and those policies should be based on the organization's overall asset and data policies. Individual roles and responsibilities should be defined to ensure that personnel understand their job tasks as they relate to these policies.
After the overall policies are created, asset and data management practices and procedures should be documented to ensure that the day-to-day tasks related to assets and data are completed. In addition, the appropriate quality assurance and quality control procedures must be put into place for data quality to be ensured. Storage and backup procedures must be defined to ensure that assets and data can be restored.
As part of a data policy, any databases implemented within an organization should be carefully designed based on user requirements and the type of data to be stored. All databases should comply with the data policies that the organization creates, approves, and implements. Data policies should be strictly enforced.
Prior to establishing a data policy, you should consider several issues that can affect it. These issues include cost, liability, legal and regulatory requirements, privacy, sensitivity, and ownership.
The cost of any data management mechanism is usually the primary consideration of any organization. Often organizations do not implement a data policy because they think it is easier to allow data to be stored in whatever way each business unit or user desires. However, if an organization does not adopt formal data policies and procedures, data security issues can arise because of the different storage methods used. For example, suppose an organization's research department decides to implement a Microsoft SQL Server database to store all research data, but the organization does not have a data policy. If the database is implemented without a thorough understanding of the types of data that will be stored and the users' needs, the research department may end up with a database that is difficult to navigate and manage. In addition, the proper access control mechanisms may not be in place, with the result that users who should have only view access are able to edit the data.
Liability involves protecting the organization from legal issues and is directly affected by the legal and regulatory requirements that apply to the organization. Issues that can affect liability include asset or data misuse, data inaccuracy, data corruption, data breaches, and data loss or leakage.
Data privacy is determined as part of data analysis. Data classifications must be determined based on the value of the data to the organization. After the classifications are determined, controls should be put in place to ensure that the appropriate security measures are applied based on each classification. Privacy laws and regulations must also be considered.
Sensitive data is any data that could adversely affect an organization or individual if it were released to the public or obtained by attackers. When determining sensitivity, you should understand the types of threats that can occur, the vulnerability of the data, and the data type. For example, Social Security numbers are more sensitive than physical address data.
Data ownership is the final issue that you must consider as part of data policy design. This issue is particularly important if multiple organizations store their data within the same asset or database. One organization may want completely different security controls in place to protect its data. Understanding legal ownership of data is important to ensure that you design a data policy that takes into consideration the different requirements of multiple data owners. While this is most commonly a consideration when multiple organizations are involved, it can also be an issue with different business units in the same organization. For example, data from the human resources department has different owners and therefore different requirements than research department data.
Data Quality
Data quality is defined as data's fitness for use. The integrity leg of the security triad drives data quality. Data quality must be maintained throughout the data life cycle, including during data capture, data modification, data storage, data distribution, data usage, and data archiving. These stages map onto the three data states: data in use, data at rest, and data in transit. Security professionals must ensure that their organization adopts the appropriate quality control and quality assurance measures so that data quality does not suffer. Data quality is most often safeguarded by ensuring data integrity, which protects data from unintentional, unauthorized, or accidental changes. With data integrity, data is known to be good, and information can be trusted as being complete, consistent, and accurate. System integrity, by contrast, ensures that a system works as intended.
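Because data integrity underpins data quality, a common safeguard is to record a cryptographic digest of data that is known to be good and recompute it later; any change, accidental or malicious, produces a different digest. Here is a minimal sketch using Python's standard hashlib module (the record contents are purely illustrative):

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Record a digest while the data is known to be good...
record = b"customer_id=1042,balance=250.00"
stored_digest = digest(record)

# ...and recompute it later to confirm the data has not changed.
assert digest(record) == stored_digest                               # intact
assert digest(b"customer_id=1042,balance=999.00") != stored_digest   # altered
```

A real system would store the digests separately from the data (or use a keyed mechanism such as an HMAC) so that an attacker who alters the data cannot also recompute the stored digest.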
Security professionals should work to document data standards, processes, and procedures to monitor and control data quality. In addition, internal processes should be designed to periodically assess data quality. When data is stored in databases, quality control and assurance are easier to ensure using the internal data controls in the database. For example, you can configure a field to accept only a valid number, ensuring that only numbers can be input into that field. This is an example of input validation. Input validation can occur on both the client side (using regular expressions) and the server side (using code or controls in the database) to avoid SQL injection attacks.
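Both halves of this idea can be sketched briefly: a regular expression rejects anything that is not a number, and a parameterized query keeps whatever the user typed out of the SQL text entirely. This is a minimal illustration using Python's built-in sqlite3 module; the table and column names are assumptions, not anything from a real schema:

```python
import re
import sqlite3

# Illustrative in-memory database and table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (id INTEGER PRIMARY KEY, reading INTEGER)")

def insert_reading(raw_value: str) -> None:
    # Client-side-style check: a regular expression that accepts digits only.
    if not re.fullmatch(r"\d+", raw_value):
        raise ValueError(f"not a valid number: {raw_value!r}")
    # Server-side protection: a parameterized query. The driver binds the
    # value, so input such as "1; DROP TABLE samples" can never become SQL.
    conn.execute("INSERT INTO samples (reading) VALUES (?)", (int(raw_value),))

insert_reading("42")                         # accepted
try:
    insert_reading("1; DROP TABLE samples")  # rejected by input validation
except ValueError:
    pass
```

Either defense alone helps, but they address different layers: the regular expression enforces the data type, while the parameterized query makes injection impossible even for fields that legitimately contain arbitrary text.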
Data contamination occurs when errors are introduced into data. Data can become corrupt because of network or hash corruption, a lack of integrity policies, transmission errors, or weak encryption algorithms. Data errors can be reduced by implementing the appropriate quality control and assurance mechanisms. Data verification, an important part of the process, evaluates how complete and correct the data is and whether it complies with standards. Data verification can be carried out by the personnel responsible for entering the data. Data validation evaluates data after verification has occurred and tests it to ensure that data quality standards have been met. Data validation must be carried out by the personnel most familiar with the data.
Organizations should develop procedures and processes that keep two key data issues at the forefront: error prevention and error correction. Error prevention is provided at data entry, whereas error correction usually occurs during data verification and validation.
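The verification-then-validation sequence can be pictured as two passes over entered records: the first confirms each record is complete and correctly entered, and the second tests it against a data quality standard. This is a hypothetical sketch; the field names and the digits-only phone standard are assumptions for illustration:

```python
records = [
    {"name": "Ada Lopez", "phone": "5550142"},
    {"name": "",          "phone": "5550199"},   # incomplete entry
    {"name": "Sam Chen",  "phone": "555-0W23"},  # fails the quality standard
]

def verify(record: dict) -> bool:
    # Verification: is the record complete and correctly entered?
    return all(value.strip() for value in record.values())

def validate(record: dict) -> bool:
    # Validation: does the record meet the assumed data quality
    # standard that phone numbers contain digits only?
    return record["phone"].isdigit()

verified = [r for r in records if verify(r)]
accepted = [r for r in verified if validate(r)]
# Only the first record survives both passes.
```

Splitting the passes mirrors the division of labor in the text: data-entry personnel can run verification, while validation is applied by the people most familiar with the data and its standards.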
Data Documentation and Organization
Data documentation ensures that data is understood at its most basic level and can be properly organized into data sets. Data sets ensure that data is arranged and stored in a relational way so that the data can be used for multiple purposes. Data sets should be given unique, descriptive names that indicate their contents.
By documenting the data and organizing data sets, organizations can also ensure that duplicate data is not retained in multiple locations. For example, the sales department may capture all demographic information for all customers. However, the shipping department may also need access to this same demographic information to ensure that products are shipped to the correct address. In addition, the accounts receivable department will need access to customer demographic information for billing purposes. There is no need for each business unit to have a separate data set for this information. Identifying the customer demographic data set as being needed by multiple business units prevents duplication of effort across business units.
Within each data set, documentation must be created for each type of data. In the customer demographic data set example, customer name, address, and phone number are all collected. For each data type, individual parameters must be defined. Whereas an address may allow a mixture of numerals and characters, a phone number should allow only numerals. In addition, each data type may have a maximum length. Finally, it is important to document which data is required, meaning that it must be collected and entered. For example, an organization may decide that fax numbers are not required but phone numbers are. Remember that each of these decisions is best made by the personnel working most closely with the data.
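Field parameters like these can be captured as a small machine-readable specification that data-entry systems then enforce. The fields below come from the customer demographic example; the character patterns and maximum lengths are assumed values, not requirements from the text:

```python
import re

# Each entry documents one data type in the customer demographic data set:
# the allowed characters, a maximum length, and whether the field is required.
FIELD_SPECS = {
    "name":    {"pattern": r"[A-Za-z .'-]+",    "max_len": 60,  "required": True},
    "address": {"pattern": r"[A-Za-z0-9 .,-]+", "max_len": 120, "required": True},
    "phone":   {"pattern": r"\d+",              "max_len": 15,  "required": True},
    "fax":     {"pattern": r"\d+",              "max_len": 15,  "required": False},
}

def check_field(field: str, value: str) -> bool:
    """Check one value against its documented parameters."""
    spec = FIELD_SPECS[field]
    if not value:
        # An empty value is acceptable only for optional fields, such as fax.
        return not spec["required"]
    return len(value) <= spec["max_len"] and bool(re.fullmatch(spec["pattern"], value))
```

Keeping the specification in one place means the documentation and the enforcement cannot drift apart, and the personnel closest to the data can review the rules directly.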
After all the documentation is complete, the organization of the data must be mapped out. This map should include all interrelationships between the data sets, as well as which business units need access to each data set or subset of a data set.