- Defining Security Principles
- Security Management Planning
- Risk Management and Analysis
- Policies, Standards, Guidelines, and Procedures
- Examining Roles and Responsibilities
- Management Responsibility
- Understanding Protection Mechanisms
- Classifying Data
- Employment Policies and Practices
- Managing Change Control
- Security Awareness Training
Defining Security Principles
To understand how to manage an information security program, you must understand the basic principles. These principles are the building blocks, or primitives, for determining why information assets need protection.
Remembering that information is the most important of your organization's assets (second to human lives, of course), the first principles ask what is being protected, why it is being protected, and how access to it is controlled. The fundamental goal of your information security program is to answer these questions by determining the confidentiality of the information, how the data's integrity can be maintained, and in what manner its availability is governed. These three principles make up the CIA triad (see Figure 3.1).
Figure 3.1 Security's fundamental principles are confidentiality, integrity, and availability.
The CIA triad comprises all the principles on which every security program is based. Depending on the nature of the information assets, some of the principles might have varying degrees of importance in your environment.
Confidentiality determines the secrecy of the information asset. Determining confidentiality is not simply a matter of deciding whether information is secret. When considering confidentiality, managers determine the level of access in terms of how and where the data can be accessed. For information to be useful to the organization, it is classified by its degree of confidentiality.
To prevent attackers from gaining access to critical data, a user who is allowed access to confidential data might still be barred from accessing the service from an external access port. The level of confidentiality determines the level of access, which is controlled through various access control mechanisms.
Protections offered to confidential data are only as good as the security program itself. To maintain confidentiality, the security program must consider the consequences of an attacker monitoring the network to read the data. Although tools are available that can prevent the attacker from reading the data in this manner, safeguards should be in place at the points of transmission, such as by using encryption or physically safeguarding the network.
Another attack on confidentiality is the use of social engineering to access the data or obtain access. Social engineering is difficult to defend against because doing so requires a comprehensive and proactive security awareness program. Users should be educated about the problems and punishments that result when they intentionally or accidentally disclose information. This can include safeguarding usernames and passwords from being used by an attacker.
Cryptography is the study of how to scramble, or encrypt, information to prevent everyone but the intended recipient from being able to read it. Encryption implements cryptography by using mathematical formulas to scramble and unscramble the data. These formulas use an external piece of private data called a key to lock and unlock the data.
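The idea of a key locking and unlocking data can be shown with a toy sketch. This is purely illustrative, not a real cipher: a repeating XOR of the key over the data is trivially breakable, but it demonstrates that the same external key both scrambles and unscrambles the message.

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with the repeating key; applying it twice restores the original."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

message = b"payroll figures"
key = b"secret-key"          # the external piece of private data

ciphertext = xor_cipher(message, key)   # "lock" the data
recovered = xor_cipher(ciphertext, key)  # "unlock" it with the same key

assert ciphertext != message
assert recovered == message
```

Real encryption systems replace the XOR step with mathematically hardened algorithms, but the role of the key is the same.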
Cryptography can trace its roots back 4,000 years to ancient Egypt where funeral announcements were written using modified hieroglyphics to add to their mystery. Today, cryptography is used to keep data secret. For more information on cryptography, see Chapter 5, "Cryptography."
With data being the primary information asset, integrity provides the assurance that the data is accurate and reliable. Without integrity, the cost of collecting and maintaining the data cannot be justified. Therefore, policies and procedures should support ensuring that data can be trusted.
Mechanisms put in place to ensure the integrity of information should prevent attacks on the storage of that data (contamination) and on its transmission (interference). Data that is altered on the network between the storage and the user's workstation can be as untrustworthy as the attacker altering or deleting the data on the storage media. Protecting data involves both storage and network mechanisms.
Attackers can use many methods to contaminate data. Viruses are the most frequently reported in the media. However, an internal user, such as a programmer, can install a back door into the system or a logic bomb that can be used to attack the data. After an attack is launched, it might be difficult to stop, and it can thus affect the integrity of the data. Some of the protections that can be used to prevent these attacks are intrusion detection, encryption, and strict access controls.
Not all integrity attacks are malicious. Users can inadvertently store inaccurate or invalid data by incorrect data entry, an incorrect decision made in running programs, or not following procedures. They can also affect integrity through system configuration errors at their workstations or even by using the wrong programs to access the data. To prevent this, users should be taught about data integrity during their information security awareness training. Additionally, programs should be configured to test the integrity of the data before storing it in the system. In network environments, data can be encrypted to prevent its alteration.
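Testing the integrity of data before and after storage can be as simple as recording a cryptographic digest at write time and recomparing it at read time. A minimal sketch (the record format is an illustrative assumption):

```python
import hashlib

def store_with_digest(data: bytes) -> tuple[bytes, str]:
    """Return the data together with its SHA-256 digest, computed at write time."""
    return data, hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected_digest: str) -> bool:
    """Recompute the digest and compare; a mismatch means the data was altered."""
    return hashlib.sha256(data).hexdigest() == expected_digest

record, digest = store_with_digest(b"account=1042,balance=250.00")
assert verify_integrity(record, digest)                               # untouched data passes
assert not verify_integrity(b"account=1042,balance=999.00", digest)   # altered data fails
```

A digest detects accidental and malicious alteration alike, though protecting the stored digest itself still depends on access controls.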
Availability is the ability of the users to access an information asset. Information is of no use if it cannot be accessed. Systems should have sufficient capacity to satisfy user requests for access, and network architects should consider capacity as part of availability. Policies can be written to enforce this by specifying that procedures be created to prevent denial-of-service (DoS) attacks.
More than just attackers can affect system and network availability. The environment, weather, fire, electrical problems, and other factors can prevent systems and networks from functioning. To prevent these problems, your organization's physical security policies should specify various controls and procedures to help maintain availability.
Yet availability does not mean that data must be accessible immediately; not all data has to be available upon request. Some data can be stored on media that might require user or operator intervention to access. For example, if your organization collects gigabytes of data daily, you might not have the resources to store it all online. This data can be stored on an offline storage unit, such as a CD jukebox, that does not offer immediate access.
Privacy relates to all elements of the CIA triad. It considers which information can be shared with others (confidentiality), whether that information remains accurate and unaltered (integrity), and how it can be accessed (availability).
As an entity, privacy is probably the most watched and regulated area of information security. Laws, such as the U.S. Federal Privacy Act of 1974, provide statutes that limit the government's use of citizens' personal data. More recently, the Health Insurance Portability and Accountability Act (HIPAA) authorized the Department of Health and Human Services to set security and privacy standards covering the processing, storing, and transmitting of individuals' health information to prevent inadvertent or unauthorized use or disclosure.
Laws and regulations have been difficult to keep up-to-date as the technology moves forward. The federal government has been able to keep up by using directives and mandates within the executive branch. However, this has not helped private industry. Regulations, such as those mandated by the U.S. Federal Trade Commission (FTC), attempt to help, but the FTC lacks enforcement capabilities.
If not mandated by law or regulation, organizations should look at the privacy of their own information assets. Aside from having to be concerned about the privacy of employee information, an organization needs to be concerned about the disclosure of customer information that might not be regulated.
Information collected through contact, such as via the Internet, does not legally require a privacy statement, but the FTC says organizations should have one. That privacy statement should reflect how the data is handled and be available to the users whose information is being collected.
Monitoring raises other privacy concerns. Preventing the unauthorized disclosure of data might require monitoring data transmission between systems and users. One area of concern is the monitoring of email. Email monitoring can include content monitoring to watch for unauthorized disclosure of information. However, before doing so, an organization must ensure that policies are in place that state what might be monitored or disclosed.
Finally, security professionals introduce an additional problem to the privacy of information because of their nearly unlimited access to all resources. Although we would like to think that all professionals have integrity, some have other agendas or lack the knowledge to prevent accidental disclosure. Security professionals should be limited to the information that is necessary to perform their tasks. Policies can establish additional checks and balances to ensure the integrity of the data.
Information security is the process of managing access to resources. To allow a user, a program, or any other entity to gain access to the organization's information resources, you must identify the entity and verify that it is who it claims to be. The most common way to do this is through the process of identification and authentication.
The process of identification and authentication is usually a two-step process, although it can involve more than two steps. Identification provides the resource with some type of identifier of who is trying to gain access. Identifiers can be any public or private information that is tied directly to the entity. To identify users, the common practice is to assign the user a username. Typically, organizations use the user's name or employee identification number as a system identifier. There is no magic formula for assigning usernames; it is a matter of your preference and what is considered the best way of tracking users when information appears in log files.
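One common convention can be sketched as follows. The "first initial plus last name" scheme and the collision-numbering rule are hypothetical choices for illustration, not a prescribed standard:

```python
def assign_username(first: str, last: str, existing: set[str]) -> str:
    """Build a 'flast'-style identifier, appending a counter on collision
    (a hypothetical convention, not a mandated one)."""
    base = (first[0] + last).lower()
    candidate, n = base, 1
    while candidate in existing:
        n += 1
        candidate = f"{base}{n}"
    existing.add(candidate)
    return candidate

taken: set[str] = set()
assert assign_username("John", "Smith", taken) == "jsmith"
assert assign_username("Jane", "Smith", taken) == "jsmith2"  # collision handled
```

Whatever scheme you pick, the identifier should stay stable over time so that log entries remain traceable to the same person.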
Understand the Principle of Authentication
Authentication is a matter of what the entity knows, what they might have, or who the entity is. For strong authentication, use at least two of these principles.
The second part of the process is to authenticate the claimed identity. The following are the three general types of authentication:
What the entities know, such as a personal identification number (PIN) or password
What the entities have, such as an access card, a smart card, or a token generator
Who or what the entity is, which is usually identified through biometrics
When two or more of these general types are used together, the result is called strong authentication. For physical security, a user with an access card commonly must enter a PIN. For authentication to a system or network, a common method is to use a PIN or pass code with a token generator. Although biometrics is a way to identify who the entity is, another step is still necessary to strengthen the authentication.
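The strong-authentication rule — credentials from at least two distinct categories — can be sketched directly. The factor names below are illustrative placeholders:

```python
# Hypothetical factor names grouped by the three authentication types above.
KNOWLEDGE = {"password", "pin"}          # what the entity knows
POSSESSION = {"smart_card", "token"}     # what the entity has
INHERENCE = {"fingerprint", "iris"}      # who or what the entity is

def is_strong_authentication(presented: set[str]) -> bool:
    """Strong authentication requires factors from at least two distinct categories."""
    categories = sum(
        1 for factor_set in (KNOWLEDGE, POSSESSION, INHERENCE)
        if presented & factor_set
    )
    return categories >= 2

assert not is_strong_authentication({"password"})         # one factor only
assert is_strong_authentication({"pin", "token"})         # two categories: strong
assert not is_strong_authentication({"password", "pin"})  # same category twice: not strong
```

Note that two factors from the same category (a password plus a PIN) do not qualify, which is why biometrics alone still needs a second category.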
Of these methods, passwords and PINs are the most common forms of authentication. Although passwords are the most common part of the process, they also represent the weakest link. As a security manager, you must manage the process in a way that minimizes this weakness.
Users typically create passwords that are easily guessed. Common words or the names of spouses and children leave the password open to dictionary or social engineering attacks. To prevent these attacks, some organizations use a password generator to create passwords that cannot be cracked using typical attacks. The problem is that these passwords are usually not that memorable, which causes the users to write them down, leaving them open to another type of social engineering attack in which another user finds the documented password.
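The dictionary attacks described here can be approximated with a simple checker that also undoes common character substitutions, such as a 0 standing in for the letter o. The word list and substitution map below are small illustrative samples:

```python
# A minimal password-checker sketch; real tools use far larger dictionaries.
DICTIONARY = {"password", "secret", "welcome", "dragon"}
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "l", "3": "e", "@": "a", "$": "s"})

def is_guessable(password: str) -> bool:
    """Return True if the password matches a dictionary word,
    directly or after undoing common substitutions."""
    lowered = password.lower()
    normalized = lowered.translate(SUBSTITUTIONS)
    return lowered in DICTIONARY or normalized in DICTIONARY

assert is_guessable("Passw0rd")      # substituting "o" for "0" yields a dictionary word
assert not is_guessable("xk7#Qv9!")  # survives this dictionary check
```

A checker like this catches the obvious cases; it cannot, of course, detect a strong password that the user has written on a sticky note.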
Password management involves trying to create a balance between creating passwords that cannot be guessed and passwords users don't need to write down. Policies can mandate several strategies that can be effective in mitigating some of these problems. Following are some of the methods management should use when mitigating these problems:
Password generators: These are usually third-party products that can be used to create passwords out of random characters. Some products can be used to create memorable passwords using permutations of random or chosen words or phrases.
Password checkers: These are tools that check passwords for their probability of being guessed. They perform typical dictionary attacks and use information on the system to attempt to guess the password through social engineering. These checkers also use common permutations of these attacks, anticipating what a user might try. For example, users commonly use 0s in place of the letter o. The strength of the password is determined by how many attempts the tool needs to guess it.
Limiting login attempts: This can prevent attackers from using exhaustive attacks to log in to systems or networks. By setting a threshold for login failures, the user account can be locked. Some systems lock accounts for a period of time, whereas others require administrator intervention.
Challenge-response: These are also called cognitive passwords. They use random questions that the user has answered in advance, or a shared secret. When the user logs in, the system picks a random question that must be answered successfully to gain access. This method is commonly used on voice response systems (asking for, say, a social security number, account number, or ZIP code) and often requires the answer to more than one challenge.
Token devices: These are a form of one-time password authentication that satisfies the "what you have" scenario. Token devices come in two forms: synchronous and asynchronous. A synchronous token is time-based and generates a value that is used in authentication. The token value is valid for a set period of time before it changes and is based on a secret key held by both the token (usually a sealed device) and the server providing authentication services. An asynchronous token uses a challenge-response mechanism to determine whether the user is valid. After the user enters the identification value, the authentication server sends a challenge value. The user then enters that value into the token device, which then returns a value called a token. The user sends that value back to the server, which validates it to the username. Figure 3.2 demonstrates these steps.
Cryptographic keys: These combine the concepts of "something you have" and "something you know." Using public key cryptography, the user holds a private key that is used to sign a hash value sent to the authentication server. The server can then use the user's known public key to verify the signature on the hash. To strengthen the authentication process, the user can also be asked to enter a PIN or passphrase that is added to the hash.
Figure 3.2 Authentication using an asynchronous token device.
Using public key or asymmetric encryption technologies requires the use of a public key infrastructure (PKI) to manage the process.
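The asynchronous (challenge-response) token exchange shown in Figure 3.2 can be sketched with a keyed hash over a shared secret. Real token devices use vendor-specific algorithms; the HMAC construction, key, and response format here are illustrative assumptions:

```python
import hashlib
import hmac
import secrets

# Shared secret held by both the sealed token device and the
# authentication server (an illustrative per-user value).
SHARED_KEY = b"per-user-token-secret"

def token_response(challenge: bytes, key: bytes = SHARED_KEY) -> str:
    """Compute the one-time value the token device returns for a given challenge."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()[:8]

# Step 1: the server issues a random challenge after the user identifies himself.
challenge = secrets.token_bytes(16)
# Step 2: the user keys the challenge into the token, which computes the response.
response = token_response(challenge)
# Step 3: the server recomputes the value and compares it with what the user sent.
assert hmac.compare_digest(response, token_response(challenge))
```

Because each challenge is random, a captured response is useless for replay against a future login.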
Nonrepudiation is the ability to ensure that the originator of a communication or message is the true sender by guaranteeing the authenticity of his digital signature. Digital signatures are used not only to ensure that a message has been electronically signed by the person who purported to sign it, but also to ensure that a person cannot later deny that he furnished the signature. Remember, digital signatures require a certificate to generate the signature and a PKI to store the public key for when the message is verified.
One way to authenticate the digital signature is to verify it with the public key obtained from a trusted certification authority (CA). When used in a PKI, the CA stores the public key that can be used to verify the signature. However, digital signatures might not always guarantee nonrepudiation. One concern is the trust placed in the signature and the CA. For example, some commercial CA products do not verify the identity of the person buying the signature but trust only that his credit card is valid. With Pretty Good Privacy (PGP), you have to trust the signers of the user's certificate.
Regardless of how your organization tries to implement nonrepudiation, there will be some risk based on the trust of the information used for validation. Biometric verification can help in the process, but that means you must trust the certification process.
Accountability and Auditing
With the user authenticated to the system and network, most administrators use the various audit capabilities to track all system events. Systems and security administrators can use the audit records to
Produce usage reports
Detect intrusions or attacks
Keep a record of system activity for performance tuning
Create evidence for disciplinary actions or law enforcement
Accountability is created by logging events with information from the authenticated user, which might also include the date, time, network address, and other details that further identify the condition that caused the event. Events are audited through system and network facilities designed to monitor activity from the lowest levels. These facilities also have Application Programming Interfaces (APIs) that allow applications to audit pertinent event information.
Administrators can set up auditing to capture system events. However, if you set up auditing to capture everything, you will create logs that can take up all available disk space. Rather, you should set a parameter defining a threshold, or clipping level, for the events to be logged. Setting thresholds is typical in the configuration of intrusion detection systems (IDSs). An IDS tends to log many erroneous events, called false positives, and setting thresholds can cut down on the number of errors logged.
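A clipping level can be sketched as a counter that suppresses an event until it recurs often enough to matter. The threshold of five repetitions below is an arbitrary illustrative value:

```python
from collections import Counter

CLIPPING_LEVEL = 5          # illustrative threshold, tuned per environment
event_counts: Counter[str] = Counter()
audit_log: list[str] = []

def record_event(event: str) -> None:
    """Count every occurrence, but write to the audit log only once the
    event reaches the clipping level."""
    event_counts[event] += 1
    if event_counts[event] >= CLIPPING_LEVEL:
        audit_log.append(event)

for _ in range(4):
    record_event("login_failure:alice")   # below the threshold: counted, not logged
assert audit_log == []

record_event("login_failure:alice")       # fifth occurrence crosses the clipping level
assert audit_log == ["login_failure:alice"]
```

Tuning the clipping level is a trade-off: set it too high and real attacks hide below the threshold; too low and the log fills with false positives.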
The auditing of systems requires active monitoring and passive protections. Active monitoring requires administrators to watch the ongoing activities of users; one way this can be done is via keystroke monitoring. Passive monitoring is done by examining the audit data maintained by each system. Because the audit data is usually stored on the system, it should be protected from alteration and unauthorized access. These auditing principles are discussed in the following sections.
Keystroke monitoring is a type of audit that monitors how a user types. It watches how the user types individual words, commands, or other common tasks and creates a profile of that user's characteristics. The keystroke monitor can then detect whether someone other than the profiled user tries to use the system.
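A toy version of such profiling compares a user's typing rhythm, here reduced to the mean interval between keystrokes, against an enrolled baseline. Real systems model far richer features; the 25 ms tolerance is an arbitrary illustrative value:

```python
TOLERANCE_MS = 25.0   # illustrative tolerance, not a recommended setting

def mean_interval(timings_ms: list[float]) -> float:
    """Average time between keystrokes, in milliseconds."""
    return sum(timings_ms) / len(timings_ms)

def matches_profile(session_timings: list[float], profile_mean: float) -> bool:
    """Return True if the session's typing rhythm is close to the stored profile."""
    return abs(mean_interval(session_timings) - profile_mean) <= TOLERANCE_MS

profile = mean_interval([110.0, 95.0, 120.0, 105.0])        # built during enrollment
assert matches_profile([100.0, 115.0, 108.0], profile)      # same user's rhythm
assert not matches_profile([220.0, 240.0, 230.0], profile)  # a much slower typist
```

When a session falls outside the tolerance, the monitor can flag the account for review rather than lock it outright, since typing rhythm varies with fatigue and keyboard.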
The FBI has been looking at new ways of doing covert investigation of criminals on the Internet. One tool they use is called Magic Lantern. As a follow-up to the Carnivore program, the FBI covertly installs Magic Lantern on a targeted computer system to trap keystroke and mouse information. Magic Lantern has been used to break the encryption of a suspected criminal. As this is written, that case has yet to come to trial, but the constitutionality of the FBI using Magic Lantern will be a central question.
Another form of keystroke monitoring is the capture of what the user types. These types of keystroke monitors capture some of the basic user input events, allowing forensic analysis of what the user is doing. This is a more controversial form of auditing because it has been used by law enforcement in recent high-profile cases.
In either case, there are two problems with this type of auditing:
They generate a lot of data.
No clipping level can be set because of the nature of the data captured, so you must ensure there is enough storage for all the captured information.
Privacy issues are a concern in all types of monitoring, but especially with keystroke monitoring. Unless it is used by law enforcement with the proper authorization, you should ensure that your organization has the proper policies in place and that users have been notified of those policies. Otherwise, you run the risk of being accused of violating users' civil rights and liberties. Because this issue has not been resolved in the courts, you should not attempt such monitoring without the proper policies in place; you cannot predict how a court would rule if a monitored user challenged it.
Protecting Audit Data
There will come a time when your organization has to handle an incident. This incident can come from within your organization's network or from the Internet. The only way you will have to figure out how the incident occurred is through log analysis. However, the analysis of the logs can be only as successful as the integrity of the data.
Operating systems have many ways of maintaining the log data integrity, including the capability to store it across a network. Maintaining the integrity of the data is important for analysis. If the incident involves an attack, law enforcement can use the data gathered by the audits to investigate and prosecute the attacker. For the audit data to be used in legal proceedings, it must be proven that the integrity of the audit data has been maintained and there was no possibility for it to be altered. In the legal world, that is called proving the chain of custody. If the prosecutor cannot prove the chain of custody, the audit data cannot be used as evidence.
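One common way to demonstrate that logs have not been altered is to chain them with hashes: each entry's hash covers the previous hash, so changing any stored entry breaks every hash after it. A minimal sketch (the entry format is illustrative):

```python
import hashlib

def chain_entry(previous_hash: str, entry: str) -> str:
    """Hash this entry together with the previous hash, linking the log into a chain."""
    return hashlib.sha256((previous_hash + entry).encode()).hexdigest()

def verify_chain(entries: list[str], hashes: list[str]) -> bool:
    """Recompute the chain; any altered entry makes a hash mismatch."""
    prev = ""
    for entry, expected in zip(entries, hashes):
        prev = chain_entry(prev, entry)
        if prev != expected:
            return False
    return True

entries = ["09:01 alice login", "09:02 alice read /payroll", "09:05 alice logout"]
hashes: list[str] = []
prev = ""
for e in entries:                       # build the chain as events are logged
    prev = chain_entry(prev, e)
    hashes.append(prev)

assert verify_chain(entries, hashes)
entries[1] = "09:02 alice read /public"  # tamper with a stored entry
assert not verify_chain(entries, hashes)
```

Storing the hash chain (or just its final value) on a separate, write-protected system strengthens the chain-of-custody argument, because an attacker who alters the logs cannot also fix the remote copy.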
Law enforcement is not the only reason to protect audit data, but I put the emphasis on it because protection procedures that can pass that test will pass the others. The chain of custody becomes important in any situation where legal proceedings might be involved, such as firing an employee for violating policies. Audit data used in the decision can be subpoenaed if the employee sues your organization, and the same chain of custody rules apply.
When I talk to organizations about the condition of their security documentation, most admit that it is not up-to-date. Others say that it is too accessible because it details the controls and settings of various devices. In either case, documentation can become a weak link in the security chain. If documentation is not kept up-to-date, there might be no explanation of how the controls are configured to satisfy policies, which would make replacing them in an emergency situation difficult.
Making the documentation accessible can be a controversial issue. Some believe that the more open security is, the better it can be reviewed and hardened. Review is one thing, but some people could use this information for unscrupulous purposes. If the user who has access to the full description of the security controls is also a disgruntled employee or even someone engaging in industrial espionage, it might be in your organization's best interest to restrict access to security documentation.