What is personally identifiable information and why must it be handled with extreme care?

The Silent Killer

Will Gragido, John Pirc, in Cybercrime and Espionage, 2011

Family Educational Rights and Privacy Act (FERPA)

FERPA governs the handling of students' personally identifiable information (PII). This information ranges from transcripts and Social Security numbers (SSNs) to contact information and grades disclosed to an institution governed by FERPA. The Family Policy Compliance Office has the right to audit any school that must comply with FERPA, and a school found out of compliance can lose its federal funding. In terms of security controls, FERPA is fairly high level: it points out what types of sensitive data need to be protected but does not go into the details of the technologies needed to comply with the act. Although FERPA is intended to protect student information and confidentiality, it can be overruled by the U.S. Attorney General under the Patriot Act if a foreign student is suspected of, or engaged in, terrorist activities. There have been many cases in which hackers penetrated universities and stole student SSNs along with other data that falls under the PII umbrella. This has led some universities to change from tracking students by SSN to another numbering scheme. That is not a trivial task for most educational institutions, but it removes one more place from which someone could harvest your identity.


URL: https://www.sciencedirect.com/science/article/pii/B9781597496131000030

Compliance

Deborah Gonzalez, in Managing Online Risk, 2015

The Patient Safety and Quality Improvement Act

PSQIA focuses on the confidentiality of patient information by specifying how personally identifiable information becomes patient safety work product. Key sections of the act that relate to online, digital, mobile, and social media security and risk concerns include:

Section 921 defines key terms, including how information becomes patient safety work product.

Section 922 sets out the confidentiality and privilege protections for patient safety work product, how patient safety work product may be disclosed, and the penalties for disclosures in violation of the protections.

Section 923 describes the network of patient safety databases.

Section 924 outlines the requirements and processes for listing and delisting of patient safety organizations.


URL: https://www.sciencedirect.com/science/article/pii/B9780124200555000074

Evil Twin Attacks

Carl Timm, Richard Perez, in Seven Deadliest Social Network Attacks, 2010

Publisher Summary

Evil Twin attacks, or impersonation attacks, are a growing avenue for attackers. They allow an attacker to impersonate people and companies and to use the fake profile for financial gain, defamation, cyber-bullying, physical crimes, and gathering of personally identifiable information. Creating an Evil Twin account requires going to the Facebook site, filling in personal information, uploading a profile picture, and validating the account. When creating a new e-mail account, the account name must be unique. Once the account is created, several friends are added. This can be done through multiple methods, such as joining groups, sending requests to people from the same high school or the same employer, or sending requests to people who are friends with the victim on other sites but not this one. After creating an Evil Twin account, an attacker will usually try to get money from people, defame them, or simply gather information about other people and repeat the process. Users can protect their accounts through privacy settings on networking sites; Facebook, for example, has four privacy levels: Friends, Friends of Friends, Everyone, and Publicly Available Information (PAI).


URL: https://www.sciencedirect.com/science/article/pii/B9781597495455000045

Cloud as infrastructure for managing complex scalable business networks, privacy perspective

Abdussalam Ali, Igor Hawryszkiewycz, in The Cloud Security Ecosystem, 2015

3.3 Privacy and security issues in cloud computing

Privacy is a human right, and control over information owned by people is one form of privacy (Muniraman et al., 2007). The types of information that need to be protected, as identified by Muniraman et al. (2007), include:

Personally identifiable information: personal information used to identify people, such as their names, addresses, and birth dates.

Sensitive information: any information considered private, for example information about religion or race, and surveillance camera videos and images.

Usage data: information about habits and the devices used, observed through computer use; for example, habits and interests inferred from Internet browsing history.

Privacy and security are among the main challenges in cloud computing environments. Understanding this issue is an important step toward implementing privacy and security solutions for cloud services and applications when needed (Chatti, 2012; Muniraman et al., 2007).

Pearson (2009) lists the key privacy requirements for privacy solutions in the cloud (a toy sketch of minimization and usage limitation follows the list):

1. Transparency: anyone who wants to collect information about users should state what will be collected and what it will be used for, and users should be given the choice of whether this information may be used by others.

2. Minimization: only the information that is needed is collected and shared.

3. Accessibility: users or clients must be given access to their information to check its accuracy.

4. Security provision: security mechanisms should be implemented to safeguard private information from unauthorized access.

5. Limitation of usage: use of information should be limited to the purpose for which it was collected.
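To make the minimization and usage-limitation requirements concrete, here is a minimal, illustrative Python sketch of a collection layer that filters submitted attributes against a declared purpose. All names (the purpose-to-fields mapping, the field names, the `collect` helper) are hypothetical, not drawn from Pearson (2009).

```python
# Hypothetical sketch: enforcing minimization and purpose limitation.
# The purpose-to-fields mapping and field names are illustrative only.

ALLOWED_FIELDS = {
    "shipping": {"name", "street_address", "postcode"},
    "newsletter": {"email"},
}

def collect(purpose: str, submitted: dict) -> dict:
    """Keep only the fields needed for the declared purpose (minimization);
    reject collection for purposes that were never declared (limitation)."""
    if purpose not in ALLOWED_FIELDS:
        raise ValueError(f"undeclared purpose: {purpose}")
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in submitted.items() if k in allowed}

record = collect("shipping", {
    "name": "Alice",
    "street_address": "1 Main St",
    "postcode": "2000",
    "birth_date": "1960-01-01",   # not needed for shipping -> dropped
})
print(record)  # birth_date is never stored
```

Transparency and accessibility would sit around this layer (publishing `ALLOWED_FIELDS` to users, and exposing stored records for correction), but the filtering step above is where minimization is actually enforced.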

From the perspective of knowledge management (KM) and KM systems, one of the main private assets is the knowledge itself, whether tacit or explicit, owned by an individual, business, or organization. This is in addition to the other information about users, groups, and the relationships between them.

In designing such systems, privacy and security mechanisms and policies should be well defined and implemented. In the last section, we explain in more detail how our model supports privacy and security, based on a real scenario.


URL: https://www.sciencedirect.com/science/article/pii/B9780128015957000124

Privacy on the Internet

Marco Cremonini, ... Claudio Agostino Ardagna, in Computer and Information Security Handbook (Second Edition), 2013

Languages for Privacy-Aware Access Control and Privacy Preferences

The importance gained by privacy requirements has brought with it the definition of access control models enriched with the ability to support privacy requirements. These enhanced access control models encompass two privacy aspects: guaranteeing the desired level of privacy of information exchanged between parties by controlling access to services/resources, and controlling all secondary uses of information disclosed for the purpose of access control enforcement. Users requiring access to a server application then need to protect access to their personal data by specifying and evaluating privacy policies.

The most important proposal in this field is the Platform for Privacy Preferences Project (P3P), a World Wide Web Consortium (W3C) project aimed at protecting the privacy of users by addressing their need to verify that the privacy practices adopted by a service provider comply with their privacy requirements. The goal of P3P is twofold: (i) to allow Web sites to state their data-collection practices in a standardized, machine-readable way, and (ii) to provide users with a way to understand which data will be collected and how those data will be used. To this aim, P3P allows Web sites to declare their privacy practices in a standard, machine-readable XML format known as a P3P policy. A P3P policy contains the specification of the data it protects, the data recipients allowed to access the private data, the consequences of data release, the purposes of data collection, the data retention policy, and dispute resolution mechanisms. Supporting privacy preferences and policies in Web-based transactions allows users to automatically match server practices against their privacy preferences. Thus, users do not need to read the privacy policies at every site they interact with, yet they remain aware of the server's data-handling practices. The corresponding language that allows users to specify their preferences as a set of preference rules is the P3P Preference Exchange Language (APPEL) [42]. APPEL can be used by user agents to reach automated or semi-automated decisions regarding the acceptability of privacy policies from P3P-enabled Web sites. Unfortunately, experience with P3P and APPEL has shown that users can explicitly specify only what is unacceptable in a policy, and the APPEL syntax is cumbersome and error-prone.
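As a concrete illustration, the sketch below parses a heavily simplified P3P-style policy and checks one user preference against it. The element names loosely follow the P3P 1.0 vocabulary, but this is neither a conformant policy nor an APPEL evaluator; the blocked-purpose check stands in for a full preference-rule match.

```python
# Sketch only: a simplified P3P-style policy and a naive preference check.
# Real P3P policies and APPEL rule sets are considerably richer.
import xml.etree.ElementTree as ET

POLICY_XML = """
<POLICY>
  <STATEMENT>
    <PURPOSE><current/><telemarketing/></PURPOSE>
    <RECIPIENT><ours/></RECIPIENT>
    <RETENTION><stated-purpose/></RETENTION>
    <DATA-GROUP>
      <DATA ref="#user.home-info.telecom.telephone"/>
    </DATA-GROUP>
  </STATEMENT>
</POLICY>
"""

# Hypothetical user preference: never release data for telemarketing.
BLOCKED_PURPOSES = {"telemarketing"}

root = ET.fromstring(POLICY_XML)
for stmt in root.iter("STATEMENT"):
    purposes = {p.tag for p in stmt.find("PURPOSE")}
    data_refs = [d.get("ref") for d in stmt.iter("DATA")]
    if purposes & BLOCKED_PURPOSES:
        print("REJECT: policy uses", purposes & BLOCKED_PURPOSES, "for", data_refs)
    else:
        print("ACCEPT:", data_refs)
```

A user agent performing this matching automatically at each site is exactly the relief from reading every privacy policy that the paragraph above describes.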

Other approaches have focused on the definition of access control frameworks that integrate both policy evaluation and privacy functionality. A privacy-aware access control system has been defined by Ardagna et al. [43]. This framework allows the integration, evaluation, and enforcement of policies regulating access to services/data and the release of personally identifiable information, and it provides a mechanism to define constraints on the secondary use of personal data to protect users' privacy. In particular, the following types of privacy policies have been specified (a toy evaluation sketch follows the list):

Access control policies. They govern access/release of services/data managed by the party (as in traditional access control).

Release policies. They govern release of properties/credentials/personally identifiable information (PII) of the party and specify under which conditions this information can be disclosed.

Data handling policies. They define how personal information will be (or should be) dealt with at the receiving parties.
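A toy sketch of how these three policy types might be represented and evaluated. The class and rule names are ours, chosen for illustration; this is not the actual framework of Ardagna et al. [43].

```python
# Illustrative sketch of the three policy types; not the actual framework.
from dataclasses import dataclass, field

@dataclass
class Request:
    subject: str
    resource: str
    action: str

@dataclass
class PolicySet:
    # Access control: who may access which server-side resource.
    access_rules: dict = field(default_factory=dict)   # resource -> allowed subjects
    # Release: which PII the client agrees to disclose, and to whom.
    release_rules: dict = field(default_factory=dict)  # pii item -> allowed recipients
    # Data handling: constraints the recipient must honor after release.
    handling_rules: dict = field(default_factory=dict) # pii item -> obligations

    def authorize(self, req: Request) -> bool:
        return req.subject in self.access_rules.get(req.resource, set())

policies = PolicySet(
    access_rules={"/medical-records": {"dr_smith"}},
    release_rules={"insurance_number": {"hospital.example.org"}},
    handling_rules={"insurance_number": ["delete after 30 days", "no secondary use"]},
)

req = Request(subject="dr_smith", resource="/medical-records", action="read")
print(policies.authorize(req))                       # True: access policy satisfied
print(policies.handling_rules["insurance_number"])   # obligations travel with the data
```

The point of the separation is visible even in the toy version: the access decision is local, while release and handling rules describe constraints that must accompany the data to the receiving party.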

An important feature of this framework is its support for requests for certified data, issued and signed by trusted authorities, as well as uncertified data, signed by the owner itself. It also allows the definition of conditions that can be satisfied by means of zero-knowledge proofs [44] or based on the physical position of users [45].

Most of the research on security and privacy has focused on the server side of the problem, while symmetric approaches have been used and implemented on the client side to protect the privacy of users (privacy preference definition based on policies). In the last few years, however, some solutions for privacy protection that focus strictly on clients' needs have been defined.


URL: https://www.sciencedirect.com/science/article/pii/B9780123943972000428

How Cyber-Savvy are Older Mobile Device Users?

C. Chia, ... D. Fehrenbacher, in Mobile Security and Privacy, 2017

3 Findings and Discussion

Although literacy is one of the variables this study focuses on, we also present findings on the main threats to which elderly users of smart mobile devices are generally exposed. These findings are drawn from the 55 participants who completed the survey, which covered the five sections described in the survey design, and are presented in Tables 1 and 2.

Table 1. Participants' Responses on General Security (n = 55)

Questions on General Security | Participants' Responses
Do you use the “Remember me” feature to save your passwords, login credentials, or credit card information? (Select all that are applicable) | Of the seven participants who reportedly saved their passwords, two also saved login credentials.
Do you use any encryption software to protect information on your mobile device? | Although four responded “yes,” it is doubtful that participants knew what encryption software is; more likely they were referring to the password on their devices.
Have you “jailbroken” or “rooted” your mobile device(s) before? | Only one responded “yes.”

Table 2. Participants' Responses on Loss/Theft (n = 55)

Questions on Loss/Theft | Participants' Responses
In the past calendar year (01/01/2013 to 31/12/2013), has/have your mobile device(s) been lost or stolen? | Only two responded “yes.”

Eighteen of the 55 participants (~32.7%) reportedly experienced difficulty in understanding information while accessing their smart mobile devices. Of these 18 participants, 15 are literate in Chinese only, and five are literate in both Chinese and English but prefer to access their smart mobile devices in Chinese (e.g., they changed the language settings of their devices to Chinese); see Table 3.

Table 3. Participants' Responses on Unauthorized Access (n = 55)

Questions on Unauthorized Access | Participants' Responses
How likely are you to read up on information before you download an application? | See Fig. 3.
Have you installed any applications from nonreputable or unknown application providers? | Four responded “yes” and five responded “not sure.”
In the past calendar year (01/01/2013 to 31/12/2013), has your mobile device(s) been accessed without your permission? | Three responded “do not know” (n = 54).

The lack of English literacy, or a level of comfort that lies more with the Chinese language, indicates that these users may not be able to understand all or most of the features available on smart mobile devices, such as instant messaging, email, and information on websites or social networking sites. Such information is usually not translated into Chinese even if the user sets the language settings to Chinese.

65.5% (36) of the participants are literate in both English and Chinese, but nine of them preferred to complete the survey in Chinese (a total of 15 participants completed the survey in Chinese). This reflects that their level of comfort in reading lies more with Chinese. Users also responded that they had difficulty accessing their smart mobile devices even after changing the language settings.

First, we look at the 23 users who are either literate mainly in Chinese or regard Chinese as their preferred medium. Of those, 69.6% (16) have changed their language settings to Chinese. Among these 16 users, 68.8% (11) still experience difficulty in understanding information on their devices, and 9 of these 11 users still experience difficulty even after changing the language settings.

Among the difficulties these 11 users experience, the most common was understanding instructions while downloading mobile applications (81%). One user also reported difficulty understanding messages prompted while playing online games and accessing social networking sites, and requests from friends and unknown contacts to play games and join social networks, as well as a fear of leaking sensitive data and personally identifiable information (PII) such as bank account and credit card details or personal photos/videos.

The two remaining users, who did not have problems understanding instructions while downloading apps, reported difficulty in understanding advertisements and requests from friends and unknown contacts to play games and join social networks, as well as a fear of leaking sensitive data and PII.

The problem of understanding information on mobile devices is not unique to users who are literate mainly in Chinese or feel more comfortable using their devices in Chinese.

However, the problem lies less with understanding and probably more with not being sure how to respond, and with the possible consequences of responding. This indicates that as the features of smart mobile devices increase in function and variety, in the form of new apps and device functions, digital immigrants who use these features may experience more difficulty with their devices.

Recommendation 1: Smart mobile device and app designers need to consider ease of use in the design of devices and apps, particularly for first-time users and elderly users who are not familiar with a dynamically changing digital context.

Understanding the information about the various features on their smart mobile devices may be difficult for those who are not accustomed to reading in English. It may also deter them from reading up before downloading an app. More importantly, participants may be unaware that sensitive data and PII are retrieved when an app is downloaded. 38.2% (21) of the participants responded that they are unlikely to read up on information before downloading an app. Of these 21 participants, 52.4% (11) responded “very unlikely” and nine responded “somewhat unlikely” (Fig. 3). Eight other participants responded “neutral” to reading up before an app is downloaded.


Fig. 3. Likelihood of reading app information before downloading an app (n = 55).

Beyond the lack of awareness that downloading apps may expose sensitive data and PII, four participants reportedly installed apps from nonreputable or unknown application providers (e.g., third-party app stores). Five other participants were “not sure” whether their apps were installed from nonreputable or unknown providers.

A study conducted by Hewlett Packard Security Research (2013), for example, revealed that 90% of the 2107 mobile apps examined were vulnerable to attacks, 97% accessed sensitive data and PII, and 86% had privacy-related risks. Lack of binary code protection was identified as a potential vulnerability affecting 86% of the apps examined. Another major vulnerability, found in 75% of the apps examined, was weak or inappropriate implementation of cryptographic schemes to secure sensitive data stored by the apps on mobile devices. A study by D'Orazio and Choo (2015) revealed that, due to the inappropriate implementation of a cryptographic algorithm and the storage of sensitive data and PII in an unencrypted database in a widely used Australian government health care app, users' sensitive data and PII stored on the device could be compromised by an attacker. In related work, the authors also revealed vulnerabilities in four video-on-demand apps, one live TV app, and a security DRM protection module (D'Orazio and Choo, 2016).

Recommendation 2: Considering that older mobile device users may not be accustomed to reading very wordy information, easy-to-read manuals (both in hard/soft copies and audio/video formats) available in different languages (other than English) can be attached to the smart mobile device package upon purchase or made easily available online to inform users of possible cybercrime risks and to heighten their awareness of possible risky activities related to the use of their device or app.

A lack of understanding of app information, coupled with a low likelihood of reading up on information before downloading, and the installation of apps from nonreputable or unknown app providers, may put these users at risk of revealing their personal information. This indicates that awareness needs to be increased in these areas.

Linking the above concerns about the lack of understanding of app information and the low likelihood of reading up on information before downloading an app, we now turn our attention to the type of smart mobile device the participants own.

Thirty-five out of the 55 (63.6%) participants own an Android device. The permission-based method utilized by Android to determine an app's legitimacy has been shown to be insufficient at classifying malicious apps reliably. On the other hand, the review process used by Apple is more restrictive for developers, as each app is thoroughly analyzed for security issues before being released to the public (although there have been reports of potentially malicious apps getting past Apple's reviewers).

Some participants in our study expressed concern about encountering advertisements while using the apps they have downloaded. This points to in-app advertising, which is commonly seen in apps. The increase in requested permissions is thought to be related to in-app advertising, which requires additional resources for data mining (Shekhar et al., 2012). A 2012 study of 100,000 Android apps revealed that some mobile advertising libraries used by apps directly exposed personal information, and that some advertising libraries fetched and loaded dynamic code from the Internet unsafely. For example, five of the 100 identified ad libraries engaged in the unsafe practice of downloading suspicious payloads, which allowed the host app to be remotely controlled (Grace et al., 2012).

In addition, an Android device user must either accept all permissions required by an app in order to install it or cancel the installation. Some of these permissions may not be necessary, and granting all of them may pose a privacy risk to Android users. For example, in a 2014 study, seven Android social networking apps were examined (Do et al., 2014). It was discovered that the Facebook app requires the “read contacts” permission, which allows it to retrieve users' contact data, including contact numbers, contact addresses, and email addresses; the study regarded this as unnecessary. Both Facebook and Tango require permission to “read phone state,” which allows the app to access the device's phone number and the international mobile equipment identity (IMEI) of the device. As the IMEI serves as a unique identifier that is often used to locate the device, providing such information may be disadvantageous to the user.
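As a rough illustration of the kind of review a cautious user or auditor might automate, the sketch below parses a fragment of an AndroidManifest.xml and flags permissions of the kind discussed above. The manifest fragment and the “sensitive” set are our own illustrative choices, not an official Android classification.

```python
# Sketch: flag sensitive permissions declared in an Android manifest.
# The SENSITIVE set is illustrative, not an official classification.
import xml.etree.ElementTree as ET

MANIFEST = """
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.READ_CONTACTS"/>
  <uses-permission android:name="android.permission.READ_PHONE_STATE"/>
</manifest>
"""

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"
SENSITIVE = {
    "android.permission.READ_CONTACTS",    # contact data (see Do et al., 2014)
    "android.permission.READ_PHONE_STATE", # phone number, IMEI
}

root = ET.fromstring(MANIFEST)
for perm in root.iter("uses-permission"):
    name = perm.get(ANDROID_NS + "name")
    flag = "SENSITIVE" if name in SENSITIVE else "ok"
    print(f"{flag:9s} {name}")
```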

56.4% (31) of the participants in our study reported using the Facebook app, and 20 of the 31 (64.5%) have used the app on their Android devices. Nine of these 20 participants are literate mainly in Chinese. Given their difficulty understanding information when downloading an app, these participants may be at risk of granting more than the necessary permissions.

If this demand for permissions when installing apps on Android devices is combined with a lack of understanding of app information (in English) and a low likelihood of reading up on information before downloading an app, as seen with our participants, people may be exposed to a higher risk of revealing their sensitive data and PII without being aware of doing so.

Recommendation 3: While studies such as that of Do et al. (2014) suggested that permissions removal on Android devices can enhance users' privacy, we further suggest that flexible language settings would help users who have little or no literacy in English to remove permissions.

Leaving your belongings unattended, even temporarily out of sight, in a public place or work environment can be risky, especially when a smart mobile device contains a great deal of sensitive data and PII about the user, such as photos, login details for installed apps, corporate and personal email, and messages with people you know.

Even though most participants responded “very unlikely” or “somewhat unlikely,” 10.9% of the participants still responded “very likely” (3) or “somewhat likely” (3). For half of these participants, the likelihood of leaving their smart mobile devices unattended may be related to the nature of their work, such as outdoor work that requires them to travel from place to place, which raises the tendency to leave belongings unattended. Three of these participants, for example, are in the theatrical trade.

Users working in such environments have to be more wary of leaving their smart mobile devices unattended. If leaving a smart mobile device unattended is risky, not having password protection for your device or not knowing if your device has been accessed without your permission may further increase the risk. Among the six participants who are likely to leave their smart mobile devices unattended, half of them do not use a password or PIN to lock their devices. Although most of the other participants responded that their likelihood of leaving their devices unattended is low, 43.6% (24) of them do not lock their mobile devices. Three participants reported that they are “not sure” if their devices have been accessed without their permission.

Unauthorized access to smart mobile devices is a serious threat for any organization whose employees store sensitive data and/or credentials on their devices. It should by now be common knowledge that leaving a device unattended, especially with no locking mechanism in place, exposes any personal and corporate data stored on it. Even data that has been deleted from the device could potentially be retrieved using open-source and commercial forensic software (e.g., Micro Systemation XRY and Cellebrite UFED) (Tassone et al., 2013; Quick et al., 2013).

While Internet access via smart mobile devices is readily available today, connecting to public Wi-Fi networks may put users at risk of revealing sensitive data and PII. At Wi-Fi hotspots, a hacker on the local area network may steal such information by replicating the legitimate provider's login or registration webpage. Four of our participants responded “yes” and 10 responded “depends” when asked if they would connect to unknown Wi-Fi networks (see Table 4).

Table 4. Participants' Responses on Wi-Fi and Bluetooth Security (n = 55)

Questions on Wi-Fi and Bluetooth Security | Participants' Responses
Do you keep your mobile device(s)'s Wi-Fi switched on at all times? | 27 (49.1%) responded “yes.”
Would you connect to unknown Wi-Fi networks? | Four responded “yes” and 10 responded “depends.”
Do you keep your mobile device(s)'s Bluetooth switched on at all times? | Five responded “yes.”
Would you accept a Bluetooth pairing request from unknown sources? | Three responded “depends.”

This study also aims to test participants' awareness of phishing. Our findings suggest that a high proportion of the participants are unaware of phishing; see Fig. 4.


Fig. 4. Understanding of phishing (n = 55).

This trend stretches across all age groups and educational levels. However, owing to the difficulty of recruiting such participants, the sample may be biased with respect to whether literacy mainly in Chinese affects users' perception of phishing.

87.3% (48) of the participants are either unaware of phishing or do not sufficiently understand it. Eight of the fifteen participants who said they know what phishing is failed to identify some of the phishing examples set out in the survey, and 23 participants would perform one or more of the following actions:

Open SMS from unknown contact (20)

Open email (7)

Access instant messaging request (Facebook, MSN) (2)

Table 5 lists the questions on phishing, and Tables 6 and 7 list the 10 phishing examples set out in the survey and the participants' responses. Five examples are set in Chinese for participants who are literate mainly in Chinese, while the remaining five are the same as in the English survey. This tests whether participants who are literate mainly in Chinese would access the item without asking for advice, ignore it, or ask for advice from family/friends. Figures marked in bold indicate the number of participants who are aware of what phishing is.

Table 5. Participants' Responses on Phishing (n = 55)

Questions on Phishing | Participants' Responses
Will you access the following from an unknown contact? | 23 participants would access one or more of the following: open an SMS from an unknown contact, open an email, or accept an instant messaging request (Facebook, MSN).
Will you be able to detect phishing scams received on your mobile device? | See Fig. 4.

Table 6. Responses on Phishing for English Survey

Phishing Examples | P | L | N
Local bank phishing email (n = 39) | 18 | 5 | 16
Bank update SMS phishing (n = 39) | 21 | 3 | 15
Permission allowing app access to messages, personal information, network (n = 38)^a | 17 | 8 | 13
eBay phishing email (n = 39)^b | 20 | 4 | 15
Facebook phishing email (n = 40) | 16 | 4 | 20
Amazon phishing email (n = 40) | 19 | 1 | 20
PayPal phishing email (n = 40) | 21 | 5 | 14
Facebook request (n = 40) | 34 | 0 | 6
Facebook phishing email (n = 40) | 16 | 4 | 19
Bank phishing email (n = 40) | 21 | 5 | 14

P, phishing; L, legitimate; N, not sure.

^a The example here pertains to an app permission. Participants replied “yes,” “no,” or “not sure” to whether they felt this permission was necessary. The “no” option is counted under the “phishing/yes” column to indicate participants' awareness that this requirement is unnecessary.
^b The example question asks whether the email is legitimate; hence the “no” option is categorized under the “phishing/yes” column to indicate participants' awareness that this email is not legitimate.

Table 7. Responses on Phishing for Chinese Survey

Phishing Examples (n = 15) | P | L | N
QQ phishing email | 9 | 1 | 5
Reply to WeChat SMS phishing | 14 | 0 | 1
Phishing email requesting login | 14 | 1 | 0
Permission allowing app access to messages, personal information, network | 12 | 1 | 2
Phishing alert on downloading free antivirus software | 12 | 0 | 3
English phishing examples | I | W | S
Local bank phishing email | 8 | 0 | 7
PayPal phishing email | 11 | 0 | 4
eBay phishing email | 12 | 0 | 3
Facebook request | 14 | 0 | 1
Facebook phishing email | 11 | 1 | 3

P, phishing; L, legitimate; N, not sure; I, ignore; W, will access; S, seek advice.

We tested the correlation of phishing awareness with variables such as age and educational level. Of the seven participants who were able to recognize all the phishing examples, six have a university education and one a college education. By age, they are concentrated in the 45–50 group (4), followed by 51–55 (2) and 56–60 (1). One participant prefers to read in Chinese. However, as the number of participants able to identify phishing is small, larger-scale tests will be needed for more conclusive results.

Recent research indicates that by 2017, over 1 billion users globally will use their smart mobile devices for banking purposes. Cybercrime is heading toward the “post-PC” era, the era of smart mobile devices. It is important to note that while the term “phishing” may be unfamiliar to some, other users may have some understanding of the concept; that is, emails or websites that pretend to be from a trustworthy entity.

Recommendation 4: Given the findings from this survey and the increasing risk of cybercrime targeted at mobile devices, there is an urgent need to increase (elderly) users' awareness about phishing; see Section 4.

Beyond the survey responses, this study also compiled feedback from participants and from other elderly smart mobile device users whom we approached but who did not participate in the survey.

One user, who did not participate in the survey, responded that he has a “technology phobia” and knows only a few features on the phone, such as calling, messaging, and photos, which are mainly the functions of a feature phone. The user occasionally accesses Facebook using both his mobile device and his computer.

Most of the users are unsure how to use features such as surfing the Internet, playing online games, and social networking. An important factor that hinders their use of smart mobile devices is the touch screen, as they often have difficulty with the motor and sensory coordination it demands, such as timing a swipe or touch.

As digital immigrants, these seniors may have had longer exposure to feature phones than to smartphones, so they need time to use the latter effectively. It is worth noting that a Singapore company manufactures iNo Mobile, an elderly-friendly mobile phone with some models that support smart mobile device features (Dyeo et al., 2010). However, none of our participants own such a device.


URL: https://www.sciencedirect.com/science/article/pii/B9780128046296000043

Statutory and regulatory GRC

Leighton Johnson, in Security Controls Evaluation, Testing, and Assessment Handbook (Second Edition), 2020

Circulars

A-130, T-5—managing information as a strategic resource—July 2016

First revision of A-130 in 16 years

Three parts

Main

Focuses on planning and budgeting, governance, workforce development, IT investment management, privacy and information security, leveraging the evolving Internet, records management, and information management and access.

Appendix I

Provides the responsibilities for protecting Federal information resources, including the following:

The minimum requirements for Federal information security programs;

Federal agency responsibilities for the security of information and information systems; and

The requirements for Federal privacy programs, including specific responsibilities for privacy program management.

Acknowledges that the concepts of information security and privacy are inextricably linked.

Requires the application of risk management, information security, and privacy policies beginning with the IT acquisition process.

Places ultimate and full responsibility with agency heads.

Appendix II

Addresses the management of Personally Identifiable Information (PII).

The reporting and publication requirements of the Privacy Act of 1974 have been revised and reconstituted as OMB Circular A-108.

Establishes the requirement for a Senior Agency Official for Privacy (SAOP) at each agency.

Establishes a set of fair information practice principles (FIPPs).

Also requires agencies to:

Determine if information systems contain PII

Consider the sensitivity of PII and determine which privacy requirements may apply and any other necessary safeguards

Conduct Privacy Impact Assessments (PIAs) as required by the E-Government Act of 2002

Reduce their holdings of PII to the minimum necessary level for the proper performance of authorized agency functions

A-130, T-4, Appendix III—published in 2000

Security for Federal Information Systems

Requires Executive Branch Agencies:

Plan for Security

Ensure Appropriate Officials Are Assigned with Security Responsibility

Review Security Controls for Systems

Authorize system processing prior to operations and periodically thereafter (defined as every 3 years)

Defines “adequate security” as

“Security commensurate with the risk and magnitude of the harm resulting from the loss, misuse, or unauthorized access to or modification of information … provide appropriate confidentiality, integrity, and availability, through the use of cost-effective management, personnel, operational, and technical controls.”

Requires accreditation of federal information systems to operate, based on an assessment of management, operational, and technical controls

Defines two types of federal systems

Major Application (MA)

An application that requires special attention to security due to the risk and magnitude of the harm resulting from the loss, misuse, or unauthorized access to or modification of the information in the application.

All Federal applications require some level of protection. Certain applications, because of the information in them, however, require special management oversight and should be treated as major. Adequate security for other applications should be provided by security of the systems in which they operate.

General Support System (GSS)

An interconnected set of information resources under the same direct management control which shares common functionality. A system normally includes hardware, software, information, data, applications, communications, and people.

A system can be, for example, a local area network (LAN) including smart terminals that supports a branch office, an agency-wide backbone, a communications network, a departmental data processing center including its operating system and utilities, a tactical radio network, or a shared information processing service organization (IPSO), such as a service provider.

Remains a crucial component of the overall cybersecurity body of regulations. Last updated in 2000, it requires or specifies:

Risk-based approach to assess and react to threat and vulnerabilities

Security plans and identification and correction of deficiencies

Incident response capabilities

Interruption planning and continuity support

Technical controls consistent with NIST guidance

Periodic review of status and controls

Information sharing (MA only) and public access controls

Responsibility assignment

Periodic reporting of operational and security status


URL: https://www.sciencedirect.com/science/article/pii/B9780128184271000033

Online Privacy

Chiara Braghin, Marco Cremonini, in Computer and Information Security Handbook (Third Edition), 2017

One of the pillars of modern data protection and privacy management is the notion of control. Privacy as control of personal information is a foundational principle as powerful as the “right to be left alone” of Warren and Brandeis. Even more important for our chapter, the idea of providing citizens with the right to control the usage and dissemination of personal data is at the core of most online privacy initiatives, regulations, and proposals. Under the privacy-as-control theory, citizens of the digital world, customers of Internet merchants, and users of online services should be able to decide which PII to release, to whom, for which purposes, and for how long. It was Alan Westin who, in 1967, defined privacy as “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others” [39]. However, as the many developments in the decades since the days of the small Kodak cameras used to peek into people's private lives have demonstrated, the “right to be left alone” and the theory of privacy control are much more difficult to achieve and complex to analyze than initially believed. Westin himself recognized the intrinsically complex nature of privacy, writing in 2003 a historical overview of how the concept of privacy has changed, reflecting social and political events and issues over the last four decades [40].

Le Métayer and Lazaro examined the issue of control applied to privacy in more detail and introduced the distinction between structural/objective control and individual/subjective control [41]. The former relates to the notion of surveillance exercised by public or private organizations over citizens' lives and behavior. The latter describes the privacy approach that aims to let individuals freely define their own digital identity in a self-management fashion. The privacy-as-control approach falls in the individual/subjective category and has been connected both to a liberal vision of citizenship and to a market-oriented definition of privacy as a property right [42]. “Consent should be given by a clear affirmative action establishing a freely given, specific, informed and unambiguous indication of the data subject's agreement to personal data relating to him or her being processed,” states the new EU Data Protection Regulation [2]. Individual consent is the main pillar of the newest and probably strictest privacy regulation to date, as it was back in the 1970s after Alan Westin articulated his principles [39], which were used in developing the FIPPs [43]. The Individual Participation principle states: “Organizations should involve the individual in the process of using PII and, to the extent practicable, seek individual consent for the collection, use, dissemination, and maintenance of PII.” As sound as the principle of individual consent for data protection may be in theory, its practical application has demonstrated many intrinsic limits, in addition to the subterfuges adopted by those harvesting personal data to keep individuals unaware of their practices. Daniel Solove wrote that “Consent legitimizes nearly any form of collection, use, or disclosure of personal data” [44]. This stark declaration has several reasons, among them the cognitive limitations reported in much social science research, which make it extremely difficult for individuals to make rational choices about the costs and benefits of such a complex and often unclearly defined matter as personal privacy in the rich and interactive online environment. Given the number of parties collecting data, the potential secondary usages, and the number of online services we interact with, it is virtually impossible to make an informed, specific, and unambiguous choice in all cases. It would simply be overwhelming. On the other hand, current examples of informed consent are often blatantly ineffective. Take, for instance, the case of the so-called Cookie Law in the European Union. The ePrivacy directive [45] establishes that a website may use cookies (only session cookies are exempted) only after a user has explicitly given her consent. As a result, all European websites, starting in 2015, exhibit a banner with a long and mostly incomprehensible disclaimer and consent request to all users on their first visit. In some cases the consent must be given by clicking an “accept” button, but most of the time it can be given indirectly by just scrolling the page or clicking outside the banner. It is evident that in this case the presumed privacy control is just illusory.
The privacy-as-control approach based on informed consent runs into an unsolved and often neglected dilemma: if people must be fully informed, the choice becomes unmanageable in practice; if the choice is made simple (often oversimplified), people are asked to decide without understanding the matter. The problem of privacy as control does not scale well, and a solution is still missing today. Furthermore, privacy as control can even backfire in the case of advertising. One well-known problem that advertisers face is reactance: the emotional reaction of consumers who start behaving in the opposite way an advertisement intends when they perceive it as intrusive or coercive. Studies have discovered that one effective way for advertisers to mitigate the reactance effect is to improve the perception of privacy control, because, even if the actual control is partial, consumers' confidence increases and advertising becomes more effective [46]. The perception of control is, most of the time, the true artifact of privacy initiatives, not an effective control over one's own personal data.


URL: https://www.sciencedirect.com/science/article/pii/B9780128038437000521

Futures

Michael Raggo, Chet Hosmer, in Data Hiding, 2013

Steganography as a Countermeasure

One option in the defense of our systems is to turn the tables on those attacking them (insiders or outsiders). By utilizing the capabilities of steganography as a countermeasure, we can improve the attribution, pedigree, and provenance of corporate documents, proposals, intellectual property, and even databases that contain personally identifiable information (PII). In a simplified view, this would work as shown in Figure 12.4.


Figure 12.4. Steganography as a Countermeasure

An authorized user creates a document, briefing, spreadsheet, digital image, multimedia file, or other object. It is deemed that this object should include a provenance marker. The original object is sent to the Stego-Based Provenance Marker Server, which secretly embeds hidden markers throughout the object. The markers are embedded in such a way that they remain even when the object is modified or altered. This may sound similar to a watermark; however, the markers contain pedigree information (ownership, location, timestamp, description, confidentiality information, expiration, etc.). As the document, image, movie, or other digital object circulates throughout the organization, strategically placed security components can detect the markers and apply policy that determines distribution, release, access control, and integrity operations. Documents, images, and other objects that do not have provenance markers could then be scanned and marked based on their trustworthiness and handling requirements. Even host devices could determine (again based on policy) how digital objects with or without provenance markers are handled, quarantined, or processed. This is possible because the hidden markers do not affect the usefulness of the object; in other words, they do not affect the quality of the image, multimedia file, document, or database, as they are nonintrusive to normal use.
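A heavily simplified sketch of the embedding idea: hide a short provenance marker in the least-significant bits of raw pixel bytes. Real provenance marking as described above would need to be robust to editing and re-encoding, which plain LSB embedding is not; this only shows the basic mechanics, and all function names here are ours.

```python
# Toy LSB steganography: embed/extract a marker in raw byte data.
# Real provenance markers must survive modification; plain LSB does not.

def embed(carrier: bytearray, marker: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in marker for i in range(7, -1, -1)]
    assert len(bits) <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite least-significant bit
    return out

def extract(carrier: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in carrier[:n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

pixels = bytearray(range(256))                # stand-in for raw image data
marker = b"owner=acme;ts=2013-01-01"          # pedigree info, as in the text
stego = embed(pixels, marker)
print(extract(stego, len(marker)))            # b'owner=acme;ts=2013-01-01'
```

Because only the lowest bit of each byte changes, the carrier remains visually and functionally unchanged, which is the nonintrusiveness property the paragraph describes.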

By examining the usefulness of steganography for such confidentiality, integrity, and trust applications, you increase the overall confidentiality, integrity, and availability of your cyber infrastructure. Many of today's cybersecurity mechanisms rely on passive detection of threats, a method that is becoming more difficult as network and processing speeds increase and as the number of devices and the diversity of network traffic evolve. We must provide these security mechanisms with a priori, secure information that will improve their efficiency.


URL: https://www.sciencedirect.com/science/article/pii/B9781597497435000122

Extending cloud-based applications with mobile opportunistic networks

S. Pal, W. Moreira, in Pervasive Computing, 2016

3.4 Data Storage Protection

In a traditional on-premise application deployment model, sensitive information is stored within a secure boundary based on an organization's policy, fixed security measures, and protocols. In mobile opportunistic/cloud-based networks, there is no fixed communication infrastructure. Users must cope with the inherent uncertainty of contact opportunities, which makes them rely on locally available infrastructure while hoping for the secure handling of their data (Li et al., 2015b). Efficiently protecting users' data in such decentralized environments, while storing that data locally on a device, is therefore especially challenging.

To address this, traditional encryption-based security mechanisms can be employed, but a growing concern in mobile-cloud networks is the level of control required over a cloud service provider (CSP), and how a provider can prove itself trustworthy with respect to clients' encrypted data in storage when the provider itself holds the corresponding encryption keys (Grobauer et al., 2011). It must therefore be ensured that cloud-managed user data are protected from vulnerable service providers via encryption in the data storage (Van Dijk et al., 2012).
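One standard mitigation for the key-custody concern is client-side encryption: the key never leaves the user's device and the provider stores only ciphertext. A minimal sketch, assuming the third-party `cryptography` package is available:

```python
# Sketch: client-side encryption so the CSP never sees keys or plaintext.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # stays on the user's device
f = Fernet(key)

ciphertext = f.encrypt(b"blood type: O-, insurance no: 12345")
upload_to_cloud = ciphertext       # provider stores opaque bytes only

plaintext = f.decrypt(upload_to_cloud)   # only the key holder can do this
print(plaintext)
```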

Moreover, support for dynamically concurrent access control must be provided given the user’s high mobility (users access information in different locations from different devices). To address this, a mechanism that supports dynamic access control and employs fault-tolerant and data integrity schemes to guarantee proper handling of the user data should be implemented (Bowers et al., 2011).

An alternative mechanism for ensuring data integrity in a mobile opportunistic/cloud-based network is to employ a third-party auditor (TPA), which checks the integrity of data held in online storage (Wang et al., 2009). The use of a TPA eliminates the direct involvement of clients, which is important for achieving the economic and performance advantages of cloud-based solutions. This solution also supports data dynamics via the most general forms of data operations, for example block modification, insertion, and deletion, and further protects user privacy by reinforcing data integrity.
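A highly simplified sketch of the auditing idea: the client keeps per-block HMAC tags, and an auditor can spot-check random blocks without the client being involved in each check. Real TPA schemes such as that of Wang et al. (2009) use homomorphic authenticators so the auditor never sees the data; this toy version lacks that property.

```python
# Toy third-party audit: per-block HMAC tags, random spot checks.
# Real schemes (e.g., Wang et al., 2009) avoid revealing data to the auditor.
import hmac, hashlib, os, random

BLOCK = 1024
key = os.urandom(32)                       # client's tagging key

data = os.urandom(10 * BLOCK)              # file stored in the cloud
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
tags = [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def audit(stored_blocks, tags, key, samples=3):
    """Auditor challenges a few random blocks and verifies their tags."""
    for i in random.sample(range(len(tags)), samples):
        tag = hmac.new(key, stored_blocks[i], hashlib.sha256).digest()
        if not hmac.compare_digest(tag, tags[i]):
            return False
    return True

print(audit(blocks, tags, key))              # True: storage is intact
blocks[4] = b"\x00" * BLOCK                  # simulate corruption
print(audit(blocks, tags, key, samples=10))  # False once block 4 is sampled
```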

Along similar lines, privacy-preserving data mining mechanisms can be used to secure sensitive data (Verykios et al., 2004). The major purpose of this data mining technique is to selectively identify patterns for making predictions about data stored in a data center.

However, within the context of mobile opportunistic/cloud-based networks, it is difficult to derive a pattern for stored data due to the lack of a fixed infrastructure or routing protocol for data storage or forwarding. These networks face the challenge that mining a user's PII raises various privacy concerns and creates potential security risks to the system. To this end, an anonymization algorithm for privacy-preserving data mining based on the generalization technique can be employed (Mohammed et al., 2011). This noninteractive anonymization algorithm provides a better classification analysis of stored data in a privacy-preserving manner.
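The flavor of generalization-based anonymization can be shown in a few lines: quasi-identifiers are coarsened until individual records are no longer unique. This toy sketch is ours and is far simpler than the algorithm of Mohammed et al. (2011); the record fields are invented for illustration.

```python
# Toy generalization: coarsen quasi-identifiers before mining/sharing.
# Far simpler than real anonymization algorithms (e.g., Mohammed et al., 2011).

records = [
    {"age": 34, "postcode": "2007", "diagnosis": "flu"},
    {"age": 36, "postcode": "2009", "diagnosis": "asthma"},
    {"age": 52, "postcode": "2148", "diagnosis": "flu"},
]

def generalize(rec):
    low = (rec["age"] // 10) * 10
    return {
        "age": f"{low}-{low + 9}",              # exact age -> 10-year band
        "postcode": rec["postcode"][:2] + "**", # truncate postcode
        "diagnosis": rec["diagnosis"],          # kept for analysis
    }

for rec in records:
    print(generalize(rec))
# The first two records now share the same quasi-identifier values
# ('30-39', '20**'), so neither is uniquely identifiable, while the
# diagnosis column remains usable for classification analysis.
```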

On the other hand, a growing concern in mobile opportunistic/cloud-based networks is the ability to process large amounts of data (i.e., Big Data (Che et al., 2013)) using resource-constrained mobile devices (Qi and Zong, 2012). Big Data mining extends data mining techniques to innovative approaches at large scale, approaches that must keep in mind the increased value of a user's PII. When the volume of data in such networks increases, the concern is the availability of network communications to satisfy the bandwidth required for data processing, which may introduce security and privacy issues into the system (Xu et al., 2014).

From the security point of view, a malicious insider can extract a user's private information and use it to violate data integrity through unwanted applications (e.g., modification of certain parts of the data). Moreover, in Big Data mining this problem grows with the large volume and velocity of the data streams (Michael and Miller, 2013). Focusing on these issues in a mobile opportunistic/cloud-based network, a major security challenge is how to protect such large-scale data and conduct integrity assessments on it when seamless network communication may not always be available. A security-monitoring management scheme may be employed to track, log, and record such malicious activity (Marchal et al., 2014). By detecting unwanted data manipulation, this scheme allows further data loss to be prevented, mitigating potential damage in the network.

Additionally, in the context of data mining, privacy risks (e.g., disclosure of a user's private data or distortion of sensitive information) may arise from the possible exposure of a user's PII to a malicious network environment while data are being collected, stored, and analyzed (Sagiroglu and Sinanc, 2013). In mobile opportunistic/cloud-based networks, this privacy issue may emerge from the potential risk of losing a user's personal information during the storage and manipulation of such data by the data mining process. Secure multiparty computation techniques can be employed (Hongbing et al., 2015) to help filter out malicious users during data communication by verifying the authenticity of the various data elements.

Another concern relates to the privacy and security risks that arise from data leakage. To address such leakage, a technique that breaks sensitive data down into insignificant fragments can be employed (Anjum and Umar, 2000). This ensures that no single fragment contains all of the significant information by itself. Redundantly distributing such fragments across various systems mitigates the data leakage problem.
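One simple realization of this fragmentation idea is XOR-based secret splitting: each fragment alone is indistinguishable from random noise, and only combining all of them recovers the data. This is a generic illustration of ours, not necessarily the scheme of Anjum and Umar (2000).

```python
# Toy XOR secret splitting: no single fragment reveals anything.
# A generic illustration; not necessarily the scheme of Anjum and Umar (2000).
import os

def xor_all(frags):
    acc = bytes(len(frags[0]))
    for f in frags:
        acc = bytes(a ^ b for a, b in zip(acc, f))
    return acc

def split(secret: bytes, n: int):
    """Return n fragments; any n-1 of them look like random noise."""
    frags = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = bytes(b ^ acc for b, acc in zip(secret, xor_all(frags)))
    return frags + [last]

def combine(frags):
    return xor_all(frags)

frags = split(b"SSN 078-05-1120", 3)   # store each on a different server
print(combine(frags))                  # b'SSN 078-05-1120'
print(frags[0])                        # random-looking bytes
```

Storing each fragment with a different provider means a breach of any single store leaks nothing, at the cost of needing all stores available for reconstruction; schemes with redundancy trade this off differently.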

Data leakage may also result from the way data flows through these networks. This can be addressed through strong network traffic encryption based on Secure Sockets Layer (SSL) and Transport Layer Security (TLS) (Ordean and Giurgiu, 2010). Furthermore, security mechanisms based on distributed cryptography, for example the high-availability and integrity layer (HAIL) (Bowers et al., 2009), can further prevent data leakage by allowing a set of servers to prove to a client that a stored file is intact and retrievable. In mobile opportunistic/cloud-based networks, HAIL can also prevent data leakage while managing file integrity and availability across a collection of independent storage services.
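For the data-in-transit point, Python's standard library exposes TLS directly. A minimal sketch of opening a certificate-verified TLS connection (the hostname is illustrative):

```python
# Sketch: encrypt data in transit with TLS and verify the server certificate.
import socket
import ssl

ctx = ssl.create_default_context()     # verifies certs and hostname by default

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        print("negotiated:", tls.version())   # e.g. 'TLSv1.2' or 'TLSv1.3'
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.recv(128))
```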

Moreover, from the data storage point of view, data backup is critical for facilitating recovery in the case of connection failures. New privacy issues may arise here, for example potential data loss when data is backed up with a third-party user within a malicious wireless network environment (Subashini and Kavitha, 2011). Managing a privacy-aware data backup mechanism for users in a mobile opportunistic/cloud-based network is challenging because there are no contemporaneous paths available between any pair of nodes at a given time. Defense mechanisms such as strong security schemes for data storage are therefore needed. Such mechanisms may use attribute-based encryption techniques to protect a user's data as a way to mitigate data backup issues (Zhou and Huang, 2013). Within the context of mobile opportunistic/cloud-based networks, this allows data backup while preventing potential disclosure of a user's sensitive information.


URL: https://www.sciencedirect.com/science/article/pii/B9780128036631000140

What is personal identity information?

Personally identifiable information (PII) is defined as any representation of information that permits the identity of an individual to whom the information applies to be reasonably inferred by either direct or indirect means.

What are two examples of personally identifiable information?

Personal identification numbers: Social Security number (SSN), passport number, driver's license number, taxpayer identification number, patient identification number, financial account number, or credit card number. Personal address information: street address or email address. Personal telephone numbers.

Is all personally identifiable information confidential?

Not all data should be protected in the same way. Organizations must apply appropriate safeguards to protect the confidentiality of PII based on how they categorize PII into confidentiality impact levels. Some PII does not need to be protected at all.

What is PII data, and how should this type of data be handled?

Personally identifiable information (PII) is any type of data that can be used to identify someone, from their name and address to their phone number, passport information, and Social Security number. This information is frequently a target for identity thieves, especially over the Internet.
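In practice, careful handling often starts with never letting raw PII reach logs or analytics. A small illustrative sketch of masking before output; the regular expressions are deliberately simplistic and US-centric, for demonstration only.

```python
# Sketch: mask common PII patterns before data is logged or shared.
# The patterns are deliberately simplistic; real detection needs more care.
import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(mask_pii("Contact Jane at jane@example.com or 555-867-5309, SSN 078-05-1120."))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED], SSN [SSN REDACTED].
```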