Data Privacy
You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and related inferences should only be used for necessary functions, and you should be protected by ethical review and use prohibitions. You and your communities should be free from unchecked surveillance; surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access.
This section provides a brief summary of the problems that the principle seeks to address and protect against, including illustrative examples.
Data privacy is a foundational and cross-cutting principle required for achieving all others in this framework. Surveillance and data collection, sharing, use, and reuse now sit at the foundation of business models across many industries, with more and more companies tracking the behavior of the American public, building individual profiles based on this data, and using this granular-level information as input into automated systems that further track, profile, and impact the American public. Government agencies, particularly law enforcement agencies, also use and help develop a variety of technologies that enhance and expand surveillance capabilities, which similarly collect data used as input into other automated systems that directly impact people’s lives. Federal law has not grown to address the expanding scale of private data collection, or of the ability of governments at all levels to access that data and leverage the means of private collection.
Meanwhile, members of the American public are often unable to access their personal data or make critical decisions about its collection and use. Data brokers frequently collect consumer data from numerous sources without consumers’ permission or knowledge.[i] Moreover, there is a risk that inaccurate and faulty data can be used to make decisions about people’s lives, such as whether they will qualify for a loan or get a job. Use of surveillance technologies has increased in schools and workplaces, and, when coupled with consequential management and evaluation decisions, it is leading to mental health harms such as lowered self-confidence, anxiety, depression, and a reduced ability to use analytical reasoning.[ii] Documented patterns show that personal data is being aggregated by data brokers to profile communities in harmful ways.[iii] The impact of all this data harvesting is corrosive, breeding distrust, anxiety, and other mental health problems; chilling speech, protest, and worker organizing; and threatening our democratic process.[iv] The American public should be protected from these growing risks.
Increasingly, some companies are taking these concerns seriously and integrating mechanisms to protect consumer privacy into their products by design and by default, including by minimizing the data they collect, communicating collection and use clearly, and improving security practices. Federal government surveillance and other collection and use of data are governed by legal protections that help to protect civil liberties and provide for limits on data retention in some cases. Many states have also enacted consumer data privacy protection regimes to address some of these harms.
However, these are not yet standard practices, and the United States lacks a comprehensive statutory or regulatory framework governing the rights of the public when it comes to personal data. While a patchwork of laws exists to guide the collection and use of personal data in specific contexts, including health, employment, education, and credit, it can be unclear how these laws apply in other contexts and in an increasingly automated society. Additional protections would assure the American public that the automated systems they use are not monitoring their activities, collecting information on their lives, or otherwise surveilling them without context-specific consent or legal authority.
- An insurer might collect data from a person’s social media presence as part of deciding what life insurance rates they should be offered.[v]
- A data broker harvested large amounts of personal data and then suffered a breach, exposing hundreds of thousands of people to potential identity theft.[vi]
- A local public housing authority installed a facial recognition system at the entrance to housing complexes to assist law enforcement with identifying individuals viewed via camera when police reports are filed, leading both residents of the housing complexes and nonresidents to have videos of themselves sent to the local police department and made available for scanning by its facial recognition software.[vii]
- Companies use surveillance software to track employee discussions about union activity and use the resulting data to surveil individual employees and surreptitiously intervene in discussions.[viii]
The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.
Traditional terms of service—the block of text that the public is accustomed to clicking through when using a website or digital app—are not an adequate mechanism for protecting privacy. The American public should be protected via built-in privacy protections, data minimization, use and collection limitations, and transparency, in addition to being entitled to clear mechanisms to control access to and use of their data—including their metadata—in a proactive, informed, and ongoing way. Any automated system collecting, using, sharing, or storing personal data should meet these expectations.
Protect privacy by design and by default
- Privacy by design and by default. Automated systems should be designed and built with privacy protected by default. Privacy risks should be assessed throughout the development life cycle, including privacy risks from reidentification, and appropriate technical and policy mitigation measures should be implemented. This includes potential harms to those who are not users of the automated system, but who may be harmed by inferred data, purposeful privacy violations, or community surveillance or other community harms. Data collection should be minimized and clearly communicated to the people whose data is collected. Data should only be collected or used for the purposes of training or testing machine learning models if such collection and use is legal and consistent with the expectations of the people whose data is collected. User experience research should be conducted to confirm that people understand what data is being collected about them and how it will be used, and that this collection matches their expectations and desires.
- Data collection and use-case scope limits. Data collection should be limited in scope, with specific, narrow identified goals, to avoid “mission creep.” Anticipated data collection should be determined to be strictly necessary to the identified goals and should be minimized as much as possible. Data collected based on these identified goals and for a specific context should not be used in a different context without assessing for new privacy risks and implementing appropriate mitigation measures, which may include express consent. Clear timelines for data retention should be established, with data deleted as soon as possible in accordance with legal or policy-based limitations. Determined data retention timelines should be documented and justified; a minimal sketch of how purpose and retention limits might be enforced appears after this list.
- Risk identification and mitigation. Entities that collect, use, share, or store sensitive data should attempt to proactively identify harms and seek to manage them so as to avoid, mitigate, and respond appropriately to identified risks. Appropriate responses include determining not to process data when the privacy risks outweigh the benefits or implementing measures to mitigate acceptable risks. Appropriate responses do not include sharing or transferring the privacy risks to users via notice or consent requests where users could not reasonably be expected to understand the risks without further support.
- Privacy-preserving security. Entities creating, using, or governing automated systems should follow privacy and security best practices designed to ensure data and metadata do not leak beyond the specific consented use case. Best practices could include using privacy-enhancing cryptography or other types of privacy-enhancing technologies or fine-grained permissions and access control mechanisms, along with conventional system security protocols.
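To make the expectations above concrete, the following is a minimal sketch of how an automated system might enforce purpose limitation and documented retention timelines at the point of use. The record format, policy table, and purpose names are illustrative assumptions, not requirements of this framework.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class CollectedRecord:
    subject_id: str
    purpose: str          # the specific, narrow goal identified at collection
    collected_on: date
    retention_days: int   # documented, justified retention timeline for this record


# Documented retention limits (in days) per identified collection purpose -- illustrative values.
RETENTION_POLICY = {
    "appointment_scheduling": 365,
    "fraud_detection": 90,
}


def may_use(record: CollectedRecord, requested_purpose: str, today: date) -> bool:
    """Allow use only for the originally identified purpose and within the retention window."""
    if requested_purpose != record.purpose:
        # A different context requires a new privacy risk assessment (and possibly express consent).
        return False
    age_days = (today - record.collected_on).days
    return age_days <= min(record.retention_days, RETENTION_POLICY.get(record.purpose, 0))


record = CollectedRecord("user-123", "appointment_scheduling", date(2022, 1, 1), 365)
print(may_use(record, "appointment_scheduling", date(2022, 6, 1)))  # True: same purpose, within window
print(may_use(record, "advertising", date(2022, 6, 1)))             # False: different context
```

A production system would additionally need to log these decisions, justify the retention values, and trigger deletion once the retention window lapses.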
Protect the public from unchecked surveillance
- Heightened oversight of surveillance. Surveillance or monitoring systems should be subject to heightened oversight that includes, at a minimum, assessment of potential harms during design (before deployment) and in an ongoing manner, to ensure that the American public’s rights, opportunities, and access are protected. This assessment should be done before deployment and should give special attention to ensure there is not algorithmic discrimination, especially based on community membership, when deployed in a specific real-world context. Such assessment should then be reaffirmed in an ongoing manner as long as the system is in use.
- Limited and proportionate surveillance. Surveillance should be avoided unless it is strictly necessary to achieve a legitimate purpose and it is proportionate to the need. Designers, developers, and deployers of surveillance systems should use the least invasive means of monitoring available and restrict monitoring to the minimum number of subjects possible. To the greatest extent possible consistent with law enforcement and national security needs, individuals subject to monitoring should be provided with clear and specific notice before it occurs and be informed about how the data gathered through surveillance will be used.
- Scope limits on surveillance to protect rights and democratic values. Civil liberties and civil rights must not be limited by the threat of surveillance or harassment facilitated or aided by an automated system. Surveillance systems should not be used to monitor the exercise of democratic rights, such as voting, privacy, peaceful assembly, speech, or association, in a way that limits the exercise of civil rights or civil liberties. Information about or algorithmically-determined assumptions related to identity should be carefully limited if used to target or guide surveillance systems in order to avoid algorithmic discrimination; such identity-related information includes group characteristics or affiliations, geographic designations, location-based and association-based inferences, social networks, and biometrics. Continuous surveillance and monitoring systems should not be used in physical or digital workplaces (regardless of employment status), public educational institutions, and public accommodations. Continuous surveillance and monitoring systems should not be used in a way that has the effect of limiting access to critical resources or services or suppressing the exercise of rights, even where the organization is not under a particular duty to protect those rights.
Provide the public with mechanisms for appropriate and meaningful consent, access, and control over their data
- Use-specific consent. Consent practices should not allow for abusive surveillance practices. Where data collectors or automated systems seek consent, they should seek it for specific, narrow use contexts, for specific time durations, and for use by specific entities. Consent should not extend if any of these conditions change; consent should be re-acquired before using data if the use case changes, a time limit elapses, or data is transferred to another entity (including being shared or sold). Consent requested should be limited in scope and should not request consent beyond what is required. Refusal to provide consent should be allowed, without adverse effects, to the greatest extent possible based on the needs of the use case.
- Brief and direct consent requests. When seeking consent from users, short, plain-language consent requests should be used so that users understand for what use contexts, time span, and entities they are providing data and metadata consent. User experience research should be performed to ensure these consent requests meet performance standards for readability and comprehension. This includes ensuring that consent requests are accessible to users with disabilities and are available in the language(s) and reading level appropriate for the audience. User experience design choices that intentionally obfuscate or manipulate user choice (i.e., “dark patterns”) should not be used.
- Data access and correction. People whose data is collected, used, shared, or stored by automated systems should be able to access data and metadata about themselves, know who has access to this data, and be able to correct it if necessary. Entities should receive consent before sharing data with other entities and should keep records of what data is shared and with whom.
- Consent withdrawal and data deletion. Entities should allow (to the extent legally permissible) withdrawal of data access consent, resulting in the deletion of user data and metadata and the timely removal of their data from any systems (e.g., machine learning models) derived from that data.[ix]
- Automated system support. Entities designing, developing, and deploying automated systems should establish and maintain the capabilities that will allow individuals to use their own automated systems to help them make consent, access, and control decisions in a complex data ecosystem. Capabilities include machine-readable data, standardized data formats, metadata or tags for expressing data processing permissions and preferences and data provenance and lineage, context of use and access-specific tags, and training models for assessing privacy risk. A minimal sketch of a machine-readable, use-specific consent record appears after this list.
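As one illustration of the machine-readable permission metadata described above, the sketch below encodes a use-specific consent grant scoped to a specific entity, use context, and time span, and checks whether a proposed use is still covered. The schema and field names are hypothetical and for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentGrant:
    subject_id: str
    entity: str           # the specific entity granted access
    use_context: str      # the specific, narrow use consented to
    expires_at: datetime  # consent lapses after this time


def use_is_covered(grant: ConsentGrant, entity: str, use_context: str, now: datetime) -> bool:
    """Consent covers a use only for the named entity, context, and time span.

    Any change -- a new entity (including sharing or sale), a new context, or an
    elapsed time limit -- means consent must be re-acquired before the data is used.
    """
    return (
        grant.entity == entity
        and grant.use_context == use_context
        and now < grant.expires_at
    )


grant = ConsentGrant(
    subject_id="user-123",
    entity="example-clinic",
    use_context="appointment-reminders",
    expires_at=datetime(2023, 1, 1, tzinfo=timezone.utc),
)
now = datetime(2022, 10, 1, tzinfo=timezone.utc)
print(use_is_covered(grant, "example-clinic", "appointment-reminders", now))  # True
print(use_is_covered(grant, "data-broker", "appointment-reminders", now))     # False: new entity
```

A scheme like this lets a person's own tools answer "does my consent cover this use?" automatically, rather than relying on notice-and-choice text.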
Demonstrate that data privacy and user control are protected
- Independent evaluation. As described in the section on Safe and Effective Systems, entities should allow independent evaluation of the claims made regarding data policies. These independent evaluations should be made public whenever possible. Care will need to be taken to balance individual privacy with evaluation data access needs.
- Reporting. When members of the public wish to know what data about them is being used in a system, the entity responsible for the development of the system should respond quickly with a report on the data it has collected or stored about them. Such a report should be machine-readable, understandable by most users, and include, to the greatest extent allowable under law, any data and metadata about them or collected from them, when and how their data and metadata were collected, the specific ways that data or metadata are being used, who has access to their data and metadata, and what time limitations apply to these data. In cases where a user login is not available, identity verification may need to be performed before providing such a report to ensure user privacy. Additionally, summary reporting should be proactively made public with general information about how people’s data and metadata is used, accessed, and stored. Summary reporting should include the results of any surveillance pre-deployment assessment, including disparity assessment in the real-world deployment context, the specific identified goals of any data collection, and the assessment done to ensure only the minimum required data is collected. It should also include documentation about the scope limit assessments, including data retention timelines and associated justification, and an assessment of the impact of surveillance or data collection on rights, opportunities, and access. Where possible, this assessment of the impact of surveillance should be done by an independent party. Reporting should be provided in a clear and machine-readable manner; a minimal sketch of what such a machine-readable report might contain appears directly below.
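The following sketch shows one possible shape for the machine-readable report described above, answering what data an entity holds about a person, when and how it was collected, how it is used, who can access it, and what time limits apply. The field names and example values are assumptions made for illustration; actual report contents would be governed by the expectations above and applicable law.

```python
import json
from dataclasses import dataclass, asdict
from typing import List


@dataclass
class DataUseReport:
    subject_id: str
    data_collected: List[str]      # data and metadata held about the person
    collected_when_how: List[str]  # when and how each item was collected
    specific_uses: List[str]       # the specific ways the data and metadata are being used
    who_has_access: List[str]      # entities with access to the data and metadata
    retention_limit_days: int      # time limitation that applies to the data


report = DataUseReport(
    subject_id="user-123",
    data_collected=["email address", "appointment history"],
    collected_when_how=["2022-03-01, account signup form", "ongoing, scheduling system"],
    specific_uses=["appointment reminders"],
    who_has_access=["example-clinic scheduling staff"],
    retention_limit_days=365,
)

# Machine-readable output; the same content can also be rendered in plain language for the user.
print(json.dumps(asdict(report), indent=2))
```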
Some domains, including health, employment, education, criminal justice, and personal finance, have long been singled out as sensitive domains deserving of enhanced data protections. This is due to the intimate nature of these domains as well as the inability of individuals to opt out of these domains in any meaningful way, and the historical discrimination that has often accompanied the use of data about these domains.[x] Domains understood by the public to be sensitive also change over time, including because of technological developments. Tracking and monitoring technologies, personal tracking devices, and our extensive data footprints are used and misused more than ever before; as such, the protections afforded by current legal guidelines may be inadequate. The American public deserves assurances that data related to such sensitive domains is protected and used appropriately and only in narrowly defined contexts with clear benefits to the individual and/or society.
To this end, automated systems that collect, use, share, or store data related to these sensitive domains should meet additional expectations. Data and metadata are sensitive if they pertain to an individual in a sensitive domain (defined below); are generated by technologies used in a sensitive domain; can be used to infer data from a sensitive domain or sensitive data about an individual (such as disability-related data, genomic data, biometric data, behavioral data, geolocation data, data related to interaction with the criminal justice system, relationship history and legal status such as custody and divorce information, and home, work, or school environmental data); or have the reasonable potential to be used in ways that are likely to expose individuals to meaningful harm, such as a loss of privacy or financial harm due to identity theft. Data and metadata generated by or about those who are not yet legal adults is also sensitive, even if not related to a sensitive domain. Such data includes, but is not limited to, numerical, text, image, audio, or video data. “Sensitive domains” are those in which activities being conducted can cause material harms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil liberties and civil rights. Domains that have historically been singled out as deserving of enhanced data protections or where such enhanced protections are reasonably expected by the public include, but are not limited to, health, family planning and care, employment, education, criminal justice, and personal finance. In the context of this framework, such domains are considered sensitive whether or not the specifics of a system context would necessitate coverage under existing law, and domains and data that are considered sensitive are understood to change over time based on societal norms and context.
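As a rough illustration of how the sensitivity criteria above could be operationalized in an automated system, the sketch below flags a data item as sensitive if it pertains to a sensitive domain, can be used to infer sensitive data, pertains to a minor, or carries a reasonable potential for meaningful harm. The fields and the domain list are simplified assumptions, not an authoritative encoding of the definition.

```python
from dataclasses import dataclass

# Simplified, non-exhaustive list of domains treated as sensitive in this framework.
SENSITIVE_DOMAINS = {
    "health", "family planning and care", "employment",
    "education", "criminal justice", "personal finance",
}


@dataclass
class DataItem:
    domain: str                      # domain the data pertains to or was generated in
    can_infer_sensitive: bool        # can be used to infer sensitive data about an individual
    pertains_to_minor: bool          # generated by or about someone who is not yet a legal adult
    meaningful_harm_potential: bool  # reasonable potential to expose the individual to meaningful harm


def is_sensitive(item: DataItem) -> bool:
    """Flag data as sensitive if any of the criteria above apply."""
    return (
        item.domain in SENSITIVE_DOMAINS
        or item.can_infer_sensitive
        or item.pertains_to_minor
        or item.meaningful_harm_potential
    )


# A retail purchase history that can reveal a health condition is treated as sensitive.
print(is_sensitive(DataItem("retail", can_infer_sensitive=True,
                            pertains_to_minor=False, meaningful_harm_potential=False)))  # True
```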
- Continuous positive airway pressure machines gather data for medical purposes, such as diagnosing sleep apnea, and send usage data to a patient’s insurance company, which may subsequently deny coverage for the device based on usage data. Patients were not aware that the data would be used in this way or monitored by anyone other than their doctor.[xi]
- A department store company used predictive analytics applied to collected consumer data to determine that a teenage girl was pregnant, and sent maternity clothing ads and other baby-related advertisements to her house, revealing to her father that she was pregnant.[xii]
- School audio surveillance systems monitor student conversations to detect potential “stress indicators” as a warning of potential violence.[xiii] Online proctoring systems claim to detect if a student is cheating on an exam using biometric markers.[xiv] These systems have the potential to limit student freedom to express a range of emotions at school and may inappropriately flag as cheating students with disabilities who need accommodations or who use screen readers or dictation software.[xv]
- Location data, acquired from a data broker, can be used to identify people who visit abortion clinics.[xvi]
- Companies collect student data such as demographic information, free or reduced lunch status, whether they’ve used drugs, or whether they’ve expressed interest in LGBTQI+ groups, and then use that data to forecast student success.[xvii] Parents and education experts have expressed concern about collection of such sensitive data without express parental consent, the lack of transparency in how such data is being used, and the potential for resulting discriminatory impacts.
- Many employers transfer employee data to third party job verification services. This information is then used by potential future employers, banks, or landlords. In one case, a former employee alleged that a company supplied false data about her job title which resulted in a job offer being revoked.[xviii]
What should be expected of automated systems?
In addition to the privacy expectations above for general non-sensitive data, any system collecting, using, sharing, or storing sensitive data should meet the expectations below. Depending on the technological use case and based on an ethical assessment, consent for sensitive data may need to be acquired from a guardian and/or child.
Provide enhanced protections for data related to sensitive domains
- Necessary functions only. Sensitive data should only be used for functions strictly necessary for that domain or for functions that are required for administrative reasons (e.g., school attendance records), unless consent is acquired, if appropriate, and the additional expectations in this section are met. Consent for non-necessary functions should be optional, i.e., should not be required, incentivized, or coerced in order to receive opportunities or access to services. In cases where data is provided to an entity (e.g., health insurance company) in order to facilitate payment for such a need, that data should only be used for that purpose.
- Ethical review and use prohibitions. Any use of sensitive data or decision process based in part on sensitive data that might limit rights, opportunities, or access, whether the decision is automated or not, should go through a thorough ethical review and monitoring, both in advance and by periodic review (e.g., via an independent ethics committee or similarly robust process). In some cases, this ethical review may determine that data should not be used or shared for specific uses even with consent. Some novel uses of automated systems in this context, where the algorithm is dynamically developing and where the science behind the use case is not well established, may also count as human subject experimentation, and require special review under organizational compliance bodies applying medical, scientific, and academic human subject experimentation ethics rules and governance procedures.
- Data quality. In sensitive domains, entities should be especially careful to maintain the quality of data to avoid adverse consequences arising from decision-making based on flawed or inaccurate data. Such care is necessary in a fragmented, complex data ecosystem and for datasets that have limited access such as for fraud prevention and law enforcement. It should not be left solely to individuals to carry the burden of reviewing and correcting data. Entities should conduct regular, independent audits and take prompt corrective measures to maintain accurate, timely, and complete data.
- Limit access to sensitive data and derived data. Sensitive data and derived data should not be sold, shared, or made public as part of data brokerage or other agreements. Sensitive data includes data that can be used to infer sensitive information; even systems that are not directly marketed as sensitive domain technologies are expected to keep sensitive data private. Access to such data should be limited based on necessity and based on a principle of local control, such that those individuals closest to the data subject have more access while those who are less proximate do not (e.g., a teacher has access to their students’ daily progress data while a superintendent does not); a minimal sketch of such proximity-based access limits appears after this list.
- Reporting. In addition to the reporting on data privacy (as listed above for non-sensitive data), entities developing technologies related to a sensitive domain and those collecting, using, storing, or sharing sensitive data should, whenever appropriate, regularly provide public reports describing: any data security lapses or breaches that resulted in sensitive data leaks; the number, type, and outcomes of ethical pre-reviews undertaken; a description of any data sold, shared, or made public, and how that data was assessed to determine it did not present a sensitive data risk; and ongoing risk identification and management procedures, and any mitigation added based on these procedures. Reporting should be provided in a clear and machine-readable manner.
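As a minimal sketch of the proximity-based (“local control”) access limits described above, the example below grants access to sensitive records only to roles at or closer than a documented proximity threshold, so a teacher can view a student’s daily progress while a superintendent cannot. Role names, record kinds, and thresholds are hypothetical assumptions for illustration.

```python
# Lower number = closer to the data subject; role names and thresholds are illustrative.
ROLE_PROXIMITY = {
    "teacher": 1,
    "principal": 2,
    "superintendent": 3,
}

# Most distant role allowed to view each kind of sensitive record.
MAX_PROXIMITY_FOR_DATA = {
    "daily_progress": 1,      # only the student's own teacher
    "attendance_summary": 2,  # teacher or principal
}


def may_view(role: str, data_kind: str) -> bool:
    """Allow access only to roles at or closer than the data kind's proximity limit."""
    return ROLE_PROXIMITY.get(role, 99) <= MAX_PROXIMITY_FOR_DATA.get(data_kind, 0)


print(may_view("teacher", "daily_progress"))         # True: closest to the student
print(may_view("superintendent", "daily_progress"))  # False: less proximate role
```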
Real-life examples of how these principles can become reality, through laws, policies, and practical technical and sociotechnical approaches to protecting rights, opportunities, and access.
- The Privacy Act of 1974 requires privacy protections for personal information in federal records systems, including limits on data retention, and also provides individuals a general right to access and correct their data. Among other things, the Privacy Act limits the storage of individual information in federal systems of records, illustrating the principle of limiting the scope of data retention. Under the Privacy Act, federal agencies may only retain data about an individual that is “relevant and necessary” to accomplish an agency’s statutory purpose or to comply with an Executive Order of the President. The law allows for individuals to be able to access any of their individual information stored in a federal system of records, if not included under one of the systems of records exempted pursuant to the Privacy Act. In these cases, federal agencies must provide a method for an individual to determine if their personal information is stored in a particular system of records, and must provide procedures for an individual to contest the contents of a record about them. Further, the Privacy Act allows for a cause of action for an individual to seek legal relief if a federal agency does not comply with the Privacy Act’s requirements. Among other things, a court may order a federal agency to amend or correct an individual’s information in its records or award monetary damages if an inaccurate, irrelevant, untimely, or incomplete record results in an adverse determination about an individual’s “qualifications, character, rights, … opportunities…, or benefits.”
- NIST’s Privacy Framework provides a comprehensive, detailed and actionable approach for organizations to manage privacy risks. The NIST Framework gives organizations ways to identify and communicate their privacy risks and goals to support ethical decision-making in system, product, and service design or deployment, as well as the measures they are taking to demonstrate compliance with applicable laws or regulations. It has been voluntarily adopted by organizations across many different sectors around the world.[xix]
- A school board’s attempt to surveil public school students—undertaken without adequate community input—sparked a state-wide biometrics moratorium.[xx] Reacting to a plan in the city of Lockport, New York, the state’s legislature banned the use of facial recognition systems and other “biometric identifying technology” in schools until July 1, 2022.[xxi] The law additionally requires that a report on the privacy, civil rights, and civil liberties implications of the use of such technologies be issued before biometric identification technologies can be used in New York schools.
- Federal law requires employers, and any consultants they may retain, to report the costs of surveilling employees in the context of a labor dispute, providing a transparency mechanism to help protect worker organizing. Employers engaging in workplace surveillance “where an object thereof, directly or indirectly, is […] to obtain information concerning the activities of employees or a labor organization in connection with a labor dispute” must report expenditures relating to this surveillance to the Department of Labor Office of Labor-Management Standards, and consultants who employers retain for these purposes must also file reports regarding their activities.[xxii]
- Privacy choices on smartphones show that when technologies are well designed, privacy and data agency can be meaningful and not overwhelming. These choices—such as contextual, timely alerts about location tracking—are brief, direct, and use-specific. Many of the expectations listed here for privacy by design and use-specific consent mirror those distributed to developers as best practices when developing for smartphones,[xxiii] such as being transparent about how user data will be used, asking for app permissions during their use so that the use-context will be clear to users, and ensuring that the app will still work if users deny (or later revoke) some permissions.
[i] See, e.g., the 2014 Federal Trade Commission report “Data Brokers: A Call for Transparency and Accountability”. https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf
[ii] See, e.g., Nir Kshetri. School surveillance of students via laptops may do more harm than good. The Conversation. Jan. 21, 2022. https://theconversation.com/school-surveillance-of-students-via-laptops-may-do-more-harm-than-good-170983; Matt Scherer. Warning: Bossware May be Hazardous to Your Health. Center for Democracy & Technology Report. https://cdt.org/wp-content/uploads/2021/07/2021-07-29-Warning-Bossware-May-Be-Hazardous-To-Your-Health-Final.pdf; Human Impact Partners and WWRC. The Public Health Crisis Hidden in Amazon Warehouses. HIP and WWRC report. Jan. 2021. https://humanimpact.org/wp-content/uploads/2021/01/The-Public-Health-Crisis-Hidden-In-Amazon-Warehouses-HIP-WWRC-01-21.pdf; Drew Harwell. Contract lawyers face a growing invasion of surveillance programs that monitor their work. The Washington Post. Nov. 11, 2021. https://www.washingtonpost.com/technology/2021/11/11/lawyer-facial-recognition-monitoring/; Virginia Doellgast and Sean O’Brady. Making Call Center Jobs Better: The Relationship between Management Practices and Worker Stress. A Report for the CWA. June 2020. https://hdl.handle.net/1813/74307
[iii] See, e.g., Federal Trade Commission. Data Brokers: A Call for Transparency and Accountability. May 2014. https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf; Cathy O’Neil. Weapons of Math Destruction. Penguin Books. 2017. https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction
[iv] See, e.g., Rachel Levinson-Waldman, Harsha Panduranga, and Faiza Patel. Social Media Surveillance by the U.S. Government. Brennan Center for Justice. Jan. 7, 2022. https://www.brennancenter.org/our-work/research-reports/social-media-surveillance-us-government; Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Public Affairs. 2019.
[v] Angela Chen. Why the Future of Life Insurance May Depend on Your Online Presence. The Verge. Feb. 7, 2019. https://www.theverge.com/2019/2/7/18211890/social-media-life-insurance-new-york-algorithms-big-data-discrimination-online-records
[vi] See, e.g., Scott Ikeda. Major Data Broker Exposes 235 Million Social Media Profiles in Data Leak: Info Appears to Have Been Scraped Without Permission. CPO Magazine. Aug. 28, 2020. https://www.cpomagazine.com/cyber-security/major-data-broker-exposes-235-million-social-media-profiles-in-data-leak/; Lily Hay Newman. 1.2 Billion Records Found Exposed Online in a Single Server. WIRED. Nov. 22, 2019. https://www.wired.com/story/billion-records-exposed-online/
[vii] Lola Fadulu. Facial Recognition Technology in Public Housing Prompts Backlash. New York Times. Sept. 24, 2019. https://www.nytimes.com/2019/09/24/us/politics/facial-recognition-technology-housing.html
[viii] Jo Constantz. ‘They Were Spying On Us’: Amazon, Walmart, Use Surveillance Technology to Bust Unions. Newsweek. Dec. 13, 2021. https://www.newsweek.com/they-were-spying-us-amazon-walmart-use-surveillance-technology-bust-unions-1658603
[ix] See, e.g., enforcement actions by the FTC against the photo storage app Everalbum (https://www.ftc.gov/legal-library/browse/cases-proceedings/192-3172-everalbum-inc-matter), and against Weight Watchers and their subsidiary Kurbo (https://www.ftc.gov/legal-library/browse/cases-proceedings/1923228-weight-watchersww)
[x] See, e.g., HIPAA, Pub. L. 104-191 (1996); Fair Debt Collection Practices Act (FDCPA), Pub. L. 95-109 (1977); Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232g); Children’s Online Privacy Protection Act of 1998 (15 U.S.C. §§ 6501–6505); and Confidential Information Protection and Statistical Efficiency Act (CIPSEA) (116 Stat. 2899)
[xi] Marshall Allen. You Snooze, You Lose: Insurers Make The Old Adage Literally True. ProPublica. Nov. 21, 2018. https://www.propublica.org/article/you-snooze-you-lose-insurers-make-the-old-adage-literally-true
[xii] Charles Duhigg. How Companies Learn Your Secrets. The New York Times. Feb. 16, 2012. https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html
[xiii] Jack Gillum and Jeff Kao. Aggression Detectors: The Unproven, Invasive Surveillance Technology Schools are Using to Monitor Students. ProPublica. Jun. 25, 2019. https://features.propublica.org/aggression-detector/the-unproven-invasive-surveillance-technology-schools-are-using-to-monitor-students/
[xiv] Drew Harwell. Cheating-detection companies made millions during the pandemic. Now students are fighting back. Washington Post. Nov. 12, 2020. https://www.washingtonpost.com/technology/2020/11/12/test-monitoring-student-revolt/
[xv] See, e.g., Heather Morrison. Virtual Testing Puts Disabled Students at a Disadvantage. Government Technology. May 24, 2022. https://www.govtech.com/education/k-12/virtual-testing-puts-disabled-students-at-a-disadvantage; Lydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford. Ableism And Disability Discrimination In New Surveillance Technologies: How new surveillance technologies in education, policing, health care, and the workplace disproportionately harm disabled people. Center for Democracy and Technology Report. May 24, 2022. https://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how-new-surveillance-technologies-in-education-policing-health-care-and-the-workplace-disproportionately-harm-disabled-people/
[xvi] See., e.g., Sam Sabin. Digital surveillance in a post-Roe world. Politico. May 5, 2022. https://www.politico.com/newsletters/digital-future-daily/2022/05/05/digital-surveillance-in-a-post-roe-world-00030459; Federal Trade Commission. FTC Sues Kochava for Selling Data that Tracks People at Reproductive Health Clinics, Places of Worship, and Other Sensitive Locations. Aug. 29, 2022. https://www.ftc.gov/news-events/news/press-releases/2022/08/ftc-sues-kochava-selling-data-tracks-people-reproductive-health-clinics-places-worship-other
[xvii] Todd Feathers. This Private Equity Firm Is Amassing Companies That Collect Data on America’s Children. The Markup. Jan. 11, 2022. https://themarkup.org/machine-learning/2022/01/11/this-private-equity-firm-is-amassing-companies-that-collect-data-on-americas-children
[xviii] Reed Albergotti. Every employee who leaves Apple becomes an ‘associate’: In job databases used by employers to verify resume information, every former Apple employee’s title gets erased and replaced with a generic title. The Washington Post. Feb. 10, 2022. https://www.washingtonpost.com/technology/2022/02/10/apple-associate/
[xix] National Institute of Standards and Technology. Privacy Framework Perspectives and Success Stories. Accessed May 2, 2022. https://www.nist.gov/privacy-framework/getting-started-0/perspectives-and-success-stories
[xx] ACLU of New York. What You Need to Know About New York’s Temporary Ban on Facial Recognition in Schools. Accessed May 2, 2022. https://www.nyclu.org/en/publications/what-you-need-know-about-new-yorks-temporary-ban-facial-recognition-schools
[xxi] New York State Assembly. Amendment to Education Law. Enacted Dec. 22, 2020. https://nyassembly.gov/leg/?default_fld=&leg_video=&bn=S05140&term=2019&Summary=Y&Text=Y
[xxii] U.S. Department of Labor. Labor-Management Reporting and Disclosure Act of 1959, As Amended. https://www.dol.gov/agencies/olms/laws/labor-management-reporting-and-disclosure-act (Section 203). See also: U.S. Department of Labor. Form LM-10. OLMS Fact Sheet. Accessed May 2, 2022. https://www.dol.gov/sites/dolgov/files/OLMS/regs/compliance/LM-10_factsheet.pdf
[xxiii] See, e.g., Apple. Protecting the User’s Privacy. Accessed May 2, 2022. https://developer.apple.com/documentation/uikit/protecting_the_user_s_privacy; Google Developers. Design for Safety: Android is secure by default and private by design. Accessed May 3, 2022. https://developer.android.com/design-for-safety