Non-Human Identity (NHI) and PLCs

Introduction: PLCs in Everyday Automation

When people consider robots, many think of the humanoid replacements seen on TV, with digitized faces and funny walking gaits. Most don’t consider how robotics are already part of nearly every aspect of everyday life. The backbone of controls for much of our infrastructure, from critical systems such as water treatment, transportation, and turbines to non-critical systems such as automated warehouses and similar robotics applications, is the humble Programmable Logic Controller (PLC).

To illustrate this, consider something as simple as buying a bottle of water at your local market. That bottle has traveled through multiple automated networks before reaching the shelf. It was likely stored in an automated warehouse, where PLCs controlled its journey from storage to a truck, merging with other beverage products into a perfectly stacked, mixed pallet. This entire process is orchestrated by a network of PLCs ensuring seamless movement.

PLCs are the backbone of millions of automation services. To illustrate their ubiquity and importance, we will follow a PLC through an automated warehouse to understand how something so common can be so critical.

What Does a PLC Do?

As mentioned above, PLC stands for Programmable Logic Controller, but that name alone isn’t enough for a lay person to understand what one does. A PLC is, in fact, a computer used for industrial automation.

It is designed to repeat a set operation or process over and over again, while collecting vital information from connected systems such as sensors, SCADA (Supervisory Control and Data Acquisition) systems, and HMIs (Human Machine Interfaces).

Based on this input, the PLC determines the appropriate response, such as activating motors for conveyors, lifts, and other components within an automated warehouse. The PLC operates in a repeating cycle: it detects the state of everything it is connected to, follows its programmed logic, and sends output back to the connected devices. In our example, that means turning the motors for conveyors, lifts, and other parts of the warehouse system on or off, while also broadcasting status to the larger system.1
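That repeating detect-decide-output cycle can be sketched in plain code. This is a minimal illustration, not real PLC ladder logic; the sensor and motor names (`box_at_entry`, `jam_downstream`, `conveyor_motor`) are invented for a hypothetical conveyor segment.

```python
# A minimal sketch of one PLC scan cycle for a hypothetical conveyor
# segment: read inputs, evaluate logic, write outputs. Real PLCs repeat
# this cycle every few milliseconds.

def scan_cycle(inputs: dict) -> dict:
    outputs = {}
    # Logic: run the conveyor motor only when a box is detected at the
    # entry sensor and the downstream zone is clear (no jam).
    outputs["conveyor_motor"] = inputs["box_at_entry"] and not inputs["jam_downstream"]
    # Broadcast a status word back to the wider system (SCADA/HMI).
    outputs["status"] = "RUNNING" if outputs["conveyor_motor"] else "IDLE"
    return outputs

print(scan_cycle({"box_at_entry": True, "jam_downstream": False}))
# → {'conveyor_motor': True, 'status': 'RUNNING'}
```

The point of the sketch is the shape of the loop: inputs in, deterministic logic, outputs and status out, forever.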

PLC in its “Natural Environment”

Before diving into how PLCs identify and communicate, let’s explore an environment in which they operate. Consider an automated warehouse as an amusement park ride for your bottle of water. Products start at different bays and ride conveyors that climb, twist, merge, and more, all to end at the loading dock; PLCs manage every transition.

Automated warehouses function as massive logistical hubs, moving thousands of products daily through an intricate system of conveyors, sorters, and palletizers. These warehouses are structured into zones, each managed by dedicated PLCs that control specific, individually identified (NHI) conveyors, motors, actuators, and more. Here are some human-readable examples of identifiers:

  1. PLC: (PLC001, PLC002, etc.) – Each PLC must be uniquely identified and identifiable for communication and control to happen.
  2. BAY: (B001, B002, etc.) – Where the product waits to be “picked” before the PLC releases it onto a conveyor.
  3. Release Conveyor: (RC001, RC002, etc.) – Moves products out of storage.
  4. Merge Conveyor: (MC001, MC002, etc.) – Multiple conveyors merge into a single conveyor, monitored by sensor controllers (e.g., Raspberry Pi devices).
  5. Divert Conveyor: (D001, D002, etc.) – Splits products from one conveyor onto two or more conveyors.
  6. Sequence Check: (SC001, SC002, etc.) – Found at intersections to verify proper order.
  7. Palletizer Merge: We will call the conveyor that delivers product to the palletizer zone “PM”; within the palletizer there are multiple zones of its own.
  8. Palletization & Patterns – Different automated systems use different palletization or loading patterns. (We will not go into the specifics of any individual warehouse’s pattern system.)
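To make the naming scheme concrete, here is a sketch of how those identifiers might be held in a device registry. The topology (which PLC controls which bay and conveyor) is invented for illustration; only the ID formats come from the list above.

```python
# Hypothetical identifier registry using the naming scheme above
# (PLC001, B001, RC001, ...). The zone topology is invented.

registry = {
    "PLC001": {"type": "plc", "zone": "storage", "controls": ["B001", "RC001"]},
    "B001":   {"type": "bay", "upstream_of": "RC001"},
    "RC001":  {"type": "release_conveyor", "feeds": "MC001"},
    "MC001":  {"type": "merge_conveyor", "sensor_controller": "RPI001"},
    "SC001":  {"type": "sequence_check", "watches": "MC001"},
}

def devices_controlled_by(plc_id: str) -> list:
    """Every identifier a given PLC must be able to recognize and address."""
    return registry[plc_id].get("controls", [])

print(devices_controlled_by("PLC001"))  # → ['B001', 'RC001']
```

Every entry is a non-human identity: the PLC can only act on a device it can name and resolve.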

At each stage, PLCs ensure that products follow the correct path, communicating with sensors and applying their programmed logic to maintain order. At many PLC locations there may be another controller working in tandem: a sensor controller, often a Raspberry Pi. This card-sized controller is used specifically to capture data from sensors, lasers, and similar devices, identifying a container as it rolls through a sequence check. It must have its own identification, and be able to communicate with, and be recognized by, the PLC. Now that we have a high-level overview of the warehouse flow, we can explore how PLCs identify and communicate with each other within this environment.

PLC Identity and Non-Human Identity (NHI)

For PLCs to function in an automated system, they must be able to recognize and communicate with all the devices around them – motors, actuators, and even secondary controllers like the Raspberry Pi. To do this, they rely on a set of unique identifiers known as Non-Human Identity (NHI). These identifiers allow PLCs to track and communicate with every connected device in real time, enabling the automation of operations.

Some of the key NHI mechanisms used in Industrial Automation include:

  • IP or MAC Addresses – Common in modern Ethernet-based networks.
  • Industrial Protocols – Such as Ethernet/IP, Modbus TCP/IP, and Profinet.
  • Legacy Network Identifiers – Older systems use Profibus, CANopen, and DeviceNet, which assign Node IDs instead of IP addresses, enabling PLCs to communicate with different machines.
  • Memory Addresses & Tags – PLCs store references to connected devices, ensuring recognition even after hardware replacements.
  • Routing Tables & Network Maps – Define communication pathways in complex systems.
  • Raspberry Pi Running Node-RED – Fetches data from an Allen-Bradley PLC using Modbus TCP/IP as a quick SCADA alternative, and can in some instances form a sub-network within a PLC network.
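A controls engineer often has to model several of these mechanisms at once. The sketch below shows one possible way to hold a mixed address table, with an IP for a modern PLC, a MAC for a simple sensor controller, and a Profibus-style node ID for a legacy segment. All addresses and device names here are invented for illustration.

```python
# Hedged sketch of a mixed NHI lookup table: modern devices addressed by
# IP, simple devices by MAC, and a legacy segment by numeric node ID.
# All addresses below are invented.

nhi_table = [
    {"id": "PLC001",  "kind": "ip",      "addr": "10.0.10.11"},
    {"id": "RPI001",  "kind": "mac",     "addr": "b8:27:eb:12:34:56"},
    {"id": "PLC_OLD", "kind": "node_id", "addr": 7},  # legacy Profibus-style node
]

def resolve(device_id: str):
    """Look up how to reach a device, whatever addressing scheme it uses."""
    for entry in nhi_table:
        if entry["id"] == device_id:
            return entry["kind"], entry["addr"]
    raise KeyError(f"unknown device: {device_id}")

print(resolve("PLC_OLD"))  # → ('node_id', 7)
```

The table makes the mixed-age problem visible: three identity schemes, one network, and no common way to authenticate any of them.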

In the warehouse, these identifiers allow different zones to work in tandem. When a product is released from storage, the release conveyor’s PLC communicates with merge and divert PLCs, ensuring proper sequencing for palletization. If anything goes wrong – like a PLC not recognizing a product’s assigned path – it will trigger a fault, forcing human workers to intervene and correct it. Even a single miscommunication can create delays that ripple through the entire warehouse.

Mixed-Age Systems and Heartbeat Identification

Industrial automation systems change, adapt, and evolve over time. As facilities upgrade, they often end up with mixed-age systems, where legacy PLCs must coexist with modern networked controllers and machines2. In such environments, older PLCs often rely on heartbeat signals: simple, periodic pings that confirm a device is online. If a heartbeat is lost, the system assumes failure and may trigger emergency shutdowns.

While this mechanism ensures safety, it also presents a risk: heartbeat ID spoofing could allow an unauthorized device to mimic a PLC’s presence, potentially disrupting warehouse operations. (We’ll discuss this in more depth below.)
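The weakness is easy to see in code. This sketch assumes a hypothetical 5-second timeout; note that the monitor trusts whatever device ID arrives in the ping, so a spoofed ID is indistinguishable from the real device.

```python
# Sketch of heartbeat-based liveness tracking with an assumed 5-second
# timeout. There is no authentication: record_heartbeat() accepts any
# claimed ID, which is exactly the spoofing risk discussed above.

HEARTBEAT_TIMEOUT = 5.0  # seconds (assumed value)

last_seen: dict = {}

def record_heartbeat(device_id: str, now: float) -> None:
    last_seen[device_id] = now  # no identity check beyond the claimed ID

def is_alive(device_id: str, now: float) -> bool:
    ts = last_seen.get(device_id)
    return ts is not None and (now - ts) <= HEARTBEAT_TIMEOUT

record_heartbeat("PLC_OLD", now=100.0)
print(is_alive("PLC_OLD", now=103.0))  # → True
print(is_alive("PLC_OLD", now=106.0))  # → False (assumed failed; may trigger shutdown)
```

A rogue host that keeps calling `record_heartbeat("PLC_OLD", ...)` keeps the system convinced the legacy PLC is healthy, even after the real device is gone.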

Multiple PLC Networks and Leader-Follower Configurations

In automated warehouses, PLCs do not operate in isolation; they are part of segmented networks. To manage complexity, PLCs are often grouped into leader-follower configurations, where a leader PLC oversees several subordinate controllers. This structure:

  • Reduces network congestion by centralizing decision-making.
  • Ensures coordinated actions across multiple warehouse zones.
  • Helps isolate faults—if a follower PLC fails, the leader can reroute operations or trigger alerts.
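A leader’s bookkeeping can be sketched in a few lines. This is an illustrative model only, with invented follower names; real leader PLCs coordinate over industrial protocols, not Python objects.

```python
# Minimal leader-follower sketch: the leader centralizes status for its
# zone so faults can be isolated to specific followers. Names are invented.

class LeaderPLC:
    def __init__(self, followers):
        self.status = {f: "OK" for f in followers}

    def report(self, follower, state):
        """A follower reports its state; the leader records it."""
        self.status[follower] = state

    def faulted_followers(self):
        """Followers the leader would reroute around or alert on."""
        return [f for f, s in self.status.items() if s != "OK"]

leader = LeaderPLC(["PLC002", "PLC003", "PLC004"])
leader.report("PLC003", "FAULT")
print(leader.faulted_followers())  # → ['PLC003']
```

Centralizing this decision in the leader is what reduces network chatter: followers report up one hop instead of broadcasting to every peer.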

When a PLC broadcasts an error, its effects ripple through the system. The leader PLC broadcasts the error, and upstream PLCs must determine whether it will impact their operations. If an issue is detected, they halt to prevent further errors. Meanwhile, downstream PLCs continue running until a sequence check further along the process detects a missing product. At that point, the downstream PLC registers the failure and alerts its own upstream systems, triggering a secondary shutdown.

This cascading effect can halt sections of the warehouse or, in extreme cases, bring the entire facility to a standstill if warehouse staff and Controls Engineers do not quickly identify and resolve the originating PLC failure.

For example, consider a merge-zone PLC detecting a sequencing error. The PLC immediately notifies its leader PLC, which then signals upstream systems to pause product flow. By stopping movement before the issue spreads further, the system minimizes disruption and reduces downtime.
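The upstream-halt rule from that example can be sketched with the zones as a simple chain. The zone names and linear topology are illustrative; real warehouses branch and merge.

```python
# Sketch of the cascading-halt behavior: zones form an upstream→downstream
# chain, and a fault pauses the faulted zone plus everything feeding it,
# while downstream zones keep running. Zone names are illustrative.

chain = ["BAY", "RELEASE", "MERGE", "DIVERT", "PALLETIZER"]  # upstream → downstream

def zones_to_halt(fault_zone: str) -> list:
    """On a fault, halt the faulted zone and every zone upstream of it."""
    i = chain.index(fault_zone)
    return chain[: i + 1]

print(zones_to_halt("MERGE"))  # → ['BAY', 'RELEASE', 'MERGE']
```

Downstream zones (`DIVERT`, `PALLETIZER` here) are deliberately left running; they only stop later, when a sequence check notices the missing product, which is the secondary shutdown described above.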

The Interconnected Nature of PLCs

The ability of PLCs to recognize and communicate with each other and partner systems is what keeps an automated system running smoothly. But as warehouses grow more complex, integrating mixed age networks, external controllers, and industrial IoT devices, the question of identity becomes just as important as function. Without strong Non-Human Identity (NHI) mechanisms, PLCs cannot securely authenticate the machines they interact with, leaving gaps for errors and exploitation.

In the next section, we will explore some mechanisms PLCs use to establish identity. From IP/MAC addressing to legacy network identifiers, each method plays a role in ensuring that every PLC, sensor, and actuator knows its place in the system. These identities and identity methods allow PLCs to interact reliably, but they come with limitations and challenges.

Key Non-Human Identity Methods in Automated Warehouses

Here we explore some of the top uses and vulnerabilities of the Non-Human Identity of automated warehouses, specifically as they relate to the PLC.

IP or MAC Address-Based Identification

When properly set up, PLCs rely on IP or MAC addresses for network communication and identification. In most warehouse environments, leader PLCs may use multiple identifiers for redundancy and protection, while subordinate PLCs may be identified by their MAC address for simplicity.

While MAC spoofing doesn’t get much news coverage, it does happen; a 2016 MAC spoofing attack cost millions of dollars3. In an industrial setting, even if a malicious actor is successfully blocked from traveling laterally or upstream by network segmentation, we have seen how a single PLC error can cascade and affect the whole system. Strong segmentation may not be enough to prevent disruptions.

And recall, the PLC is not only communicating with other PLCs, but also with actuators and other devices such as sensors and interfaces. Without a good and regular inventory of all connected devices, the impact of an identity failure can cascade across the entire system.
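A basic inventory check is just set arithmetic: compare the devices the system expects against the devices actually observed on the network. The MAC addresses below are invented; a real audit would pull the expected list from the PLC project configuration and the observed list from a network scan.

```python
# Sketch of a device-inventory audit. Addresses are invented for
# illustration; real inputs would come from engineering configs and scans.

expected = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "aa:bb:cc:00:00:03"}
observed = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:03", "aa:bb:cc:00:00:99"}

missing = expected - observed   # devices offline, failed, or replaced
unknown = observed - expected   # possible rogue or undocumented devices

print(sorted(missing))  # → ['aa:bb:cc:00:00:02']
print(sorted(unknown))  # → ['aa:bb:cc:00:00:99']
```

Either set being non-empty is worth investigating: `missing` may mean a failed or swapped device, while `unknown` may mean an undocumented addition or an intruder.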

Industry Protocols (Ethernet/IP, Modbus TCP/IP, Profinet, etc)

Industrial control systems were originally designed with isolation in mind, and isolation was considered secure when Operational Technology (OT) networks were separate from IT infrastructure. However, as automation environments have become more interconnected, these once-closed networks are now discoverable and face security risks.

Many industry-standard protocols, including Ethernet/IP, Modbus TCP/IP, and Profinet, were developed assuming that the network was closed and secure. These protocols were designed without encryption or authentication mechanisms4,5, making them inherently insecure for communication over modern networks.

This introduces a path to access and capture MAC addresses, verification protocols, or other operational information, widening the door for attack. Security add-ons exist, but the core issue remains: these protocols were not designed with cybersecurity in mind, leaving critical systems vulnerable.
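The point is visible in the wire format itself. A Modbus TCP request consists of a small MBAP header (transaction ID, protocol ID, length, unit ID) followed by the function code and its parameters; there is no field anywhere for credentials or a signature. The sketch below builds a standard Read Holding Registers (function 0x03) request using only the standard library.

```python
import struct

# Build a Modbus TCP "Read Holding Registers" request (function 0x03).
# The MBAP header is: transaction ID (2 bytes), protocol ID (2 bytes,
# always 0), length (2 bytes), unit ID (1 byte) — and nothing else.
# There is no authentication field, so any host that can reach the port
# can issue reads and writes.

def read_holding_registers(tx_id: int, unit: int, start: int, count: int) -> bytes:
    pdu = struct.pack(">BHH", 0x03, start, count)          # function + start + quantity
    mbap = struct.pack(">HHHB", tx_id, 0, len(pdu) + 1, unit)
    return mbap + pdu

frame = read_holding_registers(tx_id=1, unit=1, start=0, count=2)
print(frame.hex())  # → '000100000006010300000002'
```

Twelve bytes, fully forgeable by anyone on the network segment, which is why add-ons like CIP Security and network-level controls exist in the first place.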

Legacy Network Identifiers

Recall the “mixed-age systems” discussed above? Older PLCs may not be fully compatible with newer PLCs, even within the same brand. When a facility upgrades, it is very unlikely to replace all existing hardware and components; instead, legacy products that still work remain in place, sometimes segmented by network, or even using “heartbeat” protocols, where an older PLC broadcasts a heartbeat (ping) as “proof of life.”

The problem: this heartbeat/follower protocol lacks any NHI identifiers at all, opening another avenue for entry and disruption. Combined with an unencrypted network protocol, a threat actor may be able to map older network segments, identify vulnerable devices, and make plans accordingly.

Claroty Team82 has demonstrated the risk in multiple ways; one of the most interesting involves leveraging a legacy PLC to access SCADA systems. The fastest way to achieve this? Trigger a fault in a legacy PLC, so that an engineer uses SCADA or an HMI to review it, at which point the attacker gains access to the engineer’s SCADA and more6. For older devices that use a heartbeat as their identifier, the bar to access is pretty low.

Protect the Non-Human Identity – Protect the System

By now, it should be clear just how deeply PLCs are embedded in modern life. They don’t just move your bottle of water from storage to shipment; they quietly control much of the world’s infrastructure, from manufacturing and logistics to water plants and critical utilities.

A warehouse shutdown is inconvenient, but what happens when a PLC error does more than stop operations? What if instead of halting a system, it mistakenly activates equipment? What if a disrupted PLC logic sequence sends the wrong command at the wrong time?

Can you imagine an entire pallet of water falling from 7 stories of a warehouse bay? Who was working there at the time, how were they affected? Now, take that same failure, and apply it to a water treatment plant. What happens when a gate controlling chemical flow opens too early or too late?

Non-Human Identity in industrial automation is established through control systems, MAC and IP addresses, industrial protocols, and authentication mechanisms that help machines communicate with their intended counterparts. As automation networks grow more complex and interconnected, protecting these identity structures becomes critical. If a PLC’s identity is spoofed, or compromised, the consequences could ripple far beyond a single warehouse, impacting safety, security, and infrastructure at a much larger scale.


  1. DO Supply, Explaining HMI, SCADA, and PLCs, What They Do, and How They Work Together ↩︎
  2. The Robot Report, Automated Warehouse, Overcoming Common Software Implementation Challenges (p. 8) ↩︎
  3. Secure W2, MAC Spoofing Attacks Explained: A Technical Overview ↩︎
  4. Veridify Security, OT Security: Cybersecurity for Modbus ↩︎
  5. ODVA, Overview of CIP Security ↩︎
  6. Claroty Team82, Evil PLC Attack: Using a Controller as Predator Rather than Prey ↩︎

References

Allen-Bradley. (2005). Rockwell Automation | Literature | Documents | ag-um008. Retrieved from Rockwell Automation: https://literature.rockwellautomation.com/idc/groups/literature/documents/um/ag-um008_-en-p.pdf

DO Supply. (2019, February 4). Explaining HMI, SCADA, and PLCs, What They Do, and How They Work Together. Retrieved from DO Supply: https://www.dosupply.com/tech/2019/02/04/explaining-hmi-scada-and-plcs-what-they-do-and-how-they-work-together/

Hughes, C. (2025, February 20). Understanding OWASP’s Top 10 List of non-human identity critical risks. Retrieved from CSO: https://www.csoonline.com/article/3828216/understanding-owasps-top-10-list-of-non-human-identity-critical-risks.html

ODVA. (n.d.). ODVA | Technology Standards | Distinct CIP Services. Retrieved from ODVA: https://www.odva.org/wp-content/uploads/2023/07/PUB00319R2_CIP-Security-At-a-Glance.pdf

Panduit. (2022, October). Panduit | Markets | Documents | Infrastructure Warehouse Automation. Retrieved from Panduit: https://www.panduit.com/content/dam/panduit/en/website/solutions/markets/documents/infrastructure-warehouse-automation-cpcb261.pdf

Project, O. W. (2025). OWASP Non-Human Identities Top10. Retrieved from OWASP: https://owasp.org/www-project-non-human-identities-top-10/2025/

Rockwell Automation. (2024, June). Rockwell Automation | Literature | PlantPAx Distributed Control System Configuration and Implementation. Retrieved from Rockwell Automation: https://literature.rockwellautomation.com/idc/groups/literature/documents/um/proces-um100_-en-p.pdf

Secure W2. (2025). MAC Spoofing Attacks Explained: A Technical Overview. Retrieved from Secure W2: https://www.securew2.com/blog/how-do-mac-spoofing-attacks-work

Sharon Brizinov, M. S. (2022, August 13). Claroty | Team82 | Evil PLC Attack: Using a Controller as Predator Rather than Prey. Retrieved from Claroty: https://claroty.com/team82/research/evil-plc-attack-using-a-controller-as-predator-rather-than-prey

Tecsys. (2024). infohub Tecsys | Resources | e-book | Improving Warehouse Operations with Low Code Application Platforms. Retrieved from Tecsys: https://infohub.tecsys.com/resources/e-book/improving-warehouse-operations-with-low-code-application-platforms

The Robot Report. (2024). Automated Warehouse | Overcoming Common Software Implementation Challenges. WTWH Media LLC.

Veridify Security. (n.d.). OT Security: Cybersecurity for Modbus. Retrieved from Veridify Security: https://www.veridify.com/ot-security-cybersecurity-for-modbus/

Dual Hat – NSA and CYBERCOM

Reasons for the Dual Hat, reasons against, and why the solution is complicated.

The National Security Agency (NSA) and U.S. Cyber Command (CYBERCOM) are both part of the U.S. Department of Defense, with a single leader overseeing both agencies. CYBERCOM operates under Title 10, governing military operations, while the NSA operates under Title 50, governing intelligence activities. While their missions are distinct, they frequently intersect in cyber operations.

Intelligence Gathering: Strategic vs. Operational

Intelligence gathering often overlaps with operational activities when identifying threat actors. The methods and tactics used may be inherently operational or offensive, blurring the distinction between intelligence and military operations.

Intelligence has a history of intersecting with military action, as seen in the DoD Law of War Manual. Item 16.1.2.1 lists cyber operations actions such as advance force operations, reconnaissance, and gathering of intelligence,1 identifying intelligence as a distinct act.

Splitting the NSA and U.S. Cyber Command would not change how cyber intelligence is gathered but could increase costs, create duplicative efforts and reduce efficiency.

To Split or Not

Post-Gathering: What to Do with the Intelligence?

The NSA’s directive to share intelligence with relevant agencies contrasts with CYBERCOM’s mission to disrupt and impose costs on adversaries. This divergence creates a conflict – who decides how the intelligence is used? For instance, if CYBERCOM wants to gather long-term intelligence or develop countermeasures without disclosure, it could clash with the NSA’s responsibility to share the data.

Splitting the NSA and U.S. Cyber Command would not change how cyber intelligence is gathered, would likely increase costs, and reduce operational efficiency. Maintaining the current dual-hat structure, however, may continue to create conflicts between the agencies’ differing missions, potentially complicating intelligence priorities.

Ultimately, the decision to split or consolidate involves weighing the trade-off between efficiency and resolving mission conflicts.


  1. DOD Law of War Manual, Updated July 2023, Office of General Counsel, Department of Defense ↩︎

References

Department of Defense. (2023, July). Office of General Counsel | Department of Defense | Treaty Documents > DoD Law of War Manual. Retrieved from Office of General Counsel | Department of Defense: https://ogc.osd.mil/Portals/99/Law%20of%20War%202023/DOD-LAW-OF-WAR-MANUAL-JUNE-2015-UPDATED-JULY%202023.pdf

Garamone, J. (2023, March 8). Cyber Command, NSA Successes Point Way to Future. Retrieved from U.S. Department of Defense: https://www.defense.gov/News/News-Stories/Article/Article/3322765/cyber-command-nsa-successes-point-way-to-future/

House.Gov. (2025). TITLE 10 / Subtitle A / PART I / CHAPTER 6 / §167b. Retrieved from uscode.house.gov: https://uscode.house.gov/view.xhtml?req=granuleid:USC-prelim-title10-section167b&num=0&edition=prelim

Maryuama, J. A. (2020, December 24). Split Up NSA and Cybercom. Retrieved from Defense One: https://www.defenseone.com/ideas/2020/12/split-nsa-and-cybercom/171033/

National Security Agency. (n.d.). About NSA/CSS Mission. Retrieved from NSA.gov: https://www.nsa.gov/about/mission-values/

Office of the Director of National Intelligence. (n.d.). Rev Book – 1947 National Security Act. Retrieved from Office of the Director of National Intelligence: https://www.dni.gov/index.php/ic-legal-reference-book/national-security-act-of-1947

Schoka, A. (2019, April 3). Cyber Command, The NSA, and Operating in Cyberspace: Time To End The Dual Hat. Retrieved from War On The Rocks: https://warontherocks.com/2019/04/cyber-command-the-nsa-and-operating-in-cyberspace-time-to-end-the-dual-hat/

Swaney, R. (2023, September 11). Why Keep the Cybercom and NSA’s Dual Hat Arrangement. Retrieved from Security Intelligence: https://securityintelligence.com/articles/why-keep-cybercom-and-nsas-dual-hat-arrangement/

FCRA v GDPR – USA’s Scattered Privacy Protections

In this post, I will explore a bit of the USA’s scattered privacy protections as compared to the GDPR. It is important to note that the United States doesn’t have individual privacy protections within the Constitution, nor has Congress considered it enough of a priority to develop such. Due to this, the laws regarding cyber and the laws regarding your privacy protect you in a scattershot fashion, using existing laws. One such law is the FCRA, or Fair Credit Reporting Act.

How does the FCRA compare with the GDPR

Privacy Protection

When comparing the Fair Credit Reporting Act (FCRA) to the General Data Protection Regulation (GDPR), one must first recognize that the FCRA is about banking and credit reporting, not about privacy. In contrast, the GDPR identifies privacy as a human right and is a regulation specifically about the privacy of individuals.

FCRA Purpose

The purpose of the FCRA is to protect the banking system and prevent impact on “… the efficiency of the banking system… [and] continued functioning of the banking system.”1 The FCRA doesn’t identify persons as the data subject; instead it defines a person to be “…any individual, partnership, corporation, trust, estate, cooperative, association, government or governmental subdivision”. The definitions continue, clarifying “…’consumer’ means an individual.”2

The FCRA is about the appropriate passage of reports to and from the banking system, specifically regarding the creditworthiness of consumers. It places some limits on what a report can contain and the approved reasons for transmission. In this limited scope, it has impacts on privacy, and it does allow data subjects to review and dispute.

GDPR Purpose

Compare that definition to the GDPR, where it is “designed to protect the fundamental rights and freedoms of natural persons…” and in the Definitions section, “…’personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified…”3

The GDPR is about the rights of the data subject, including the right of access, the right to rectification, the right to erasure or to restrict processing, and the right not to be subject to automated decision-making.

EU Credit Measuring and Privacy

The different EU nations measure and manage credit differently, but they are obligated to protect privacy in accordance with the EU GDPR.

This paper recognizes that the EU has a law regarding credit rating agencies; however, Regulation (EC) No 1060/2009 on Credit Rating Agencies is not about individuals or persons, and is not relevant to credit scores or reports as discussed in this paper.4

EU and Individual Persons Credit – Different in Each Country

The EU is made up of 27 member states, representing varied credit monitoring methodologies and laws different from the United States. For example, Germany has the SCHUFA, which holds data on persons over 18; all persons start with a score of 100 and can then have points deducted based on debts. Spain’s Risk Management Center tracks debts and can maintain customer lists from loans for up to six years5, and France is entirely dependent on relationships with banks6; an individual must open an account with a bank to build a relationship, and banks don’t share customer information with other banks.

Individual credit-ranking systems in the EU are managed within each country and don’t travel well between countries; many of them are vastly different from any credit reporting methodology in the USA. The laws governing these different credit methodologies vary, but all must comply with the GDPR regarding privacy.

There have been consequences when an EU nation-state’s banking/credit/loan system did not comply with the GDPR; the Court of Justice of the European Union (CJEU) delivered a judgment regarding “automated decision making” within the GDPR, finding that credit scoring by the SCHUFA constitutes automated decision making and profiling7. The decision was disputed, but held in 2024.

Regulatory Limitations & Consent Mechanisms

Regulatory

The FCRA provides instances where the data subject has input over the information within a report, as well as rules over its sharing.

The FCRA defines legal limits on material in reports, including the exclusion of material over seven years old and much health care information8. The data subject has the right to dispute the accuracy of a report9; it falls to the credit monitoring institution to “reinvestigate” the information for veracity, including reaching out to the source of the disputed information to review the dispute10.

While both the GDPR and FCRA provide the right to dispute and correct, there are some dramatic differences.

GDPR Article 5(1) lists the provisions that govern the capture of data on an individual, how long it can be held, and how it can be used. Data collected by a business within the GDPR zone must be anonymized and used for the purpose of the transaction, and the company must have a data disposal plan. Further, the information should not be stored and used for purposes outside the initial scope of the transaction for which it was gathered, or be subject to automated decision making.

Under both laws, the data subject has the right to file a lawsuit against a credit bureau/controller for inaccurate information if they have filed for corrections, the corrective actions prescribed by law were not met, and the data subject then suffered material harm as a result. In addition, within the GDPR the right to sue extends over a wider range; individuals have the right to sue the controller, and even the legal compliance organizations within the EU, for failure to enforce the rules of the GDPR.11 Lastly, within the GDPR the right to sue is not limited to material harm.

Level of Protection – FCRA Limitations

The level of protection offered by the Fair Credit Reporting Act (FCRA) is limited in scope, as defined by the law itself. The rules within the FCRA apply to, and are limited to, credit reporting entities and the data shared by them with other entities. Transactions outside the credit reporting market are not within the scope of the law and thus not protected; e.g., a user’s search engine data, purchases made at an online retailer, or millions of other potentially out-of-scope transactions.

Harm Definitions and Treatments

The FCRA defines harm, and the ability to sue for harm, as measurable material harm. For example, FCRA §616 lists civil liability for noncompliance, with damages limited to match the material harm.

The GDPR, by contrast, allows suit from “Any person who has suffered material or non-material damage…”12; non-material damage can include intangible effects like mental duress.

Accountability Measures

Within the FCRA, accountability is much more about the banking system and information flows between industry sources than about the individuals about whom the reports are made. Within the GDPR, there are enforcement measures at all levels. Governing bodies have enforcement levers (similar to the FCRA), but the data subject has many more enforcement levers, and greater potential financial recovery, due to the vastly different definition and scope of what counts as “harm.”

Consent

FCRA Default Data Collection and Distribution on the Data Subject

Consent is a tricky concept in the flow of information within the FCRA. Under FCRA rules, a credit monitoring company may be asked for, and provide, a report on the data subject; where the consumer of the report is an entity that has legal permission to request the report, both entities are engaging in this transaction in an informed manner. We can clearly see a request and reply between two entities in what could be considered an informed data flow; yet the data subject may not be part of this communication flow.

Within the FCRA, it is allowable for parties to get credit reports from reporting agencies to establish a consumer’s eligibility for credit, insurance, employment, and other purposes13, which includes things like court orders, credit transactions, insurance, licenses or government requirements by law, and, notably, whenever the requester “otherwise has a legitimate business need14”. FCRA §604(c) allows for acquiring a consumer report for a transaction not initiated by the consumer15. Interested parties can get a credit report on selected individuals not only without the individual’s direct consent, but even without the data subject’s awareness. This takes the data subject of the report right out of the equation. By being able to collect a report on a subject while excluding the data subject from participation in the transaction, the credit report is removed from any contextual integrity heuristic with the data subject.

A data subject has two ways to control the flow of their information. The first: if they are informed of a fraud alert, the consumer can request an “extended” alert16, which begins a five-year period in which the agency must “exclude the consumer from any list of consumers prepared by the consumer reporting agency and provided to any third party… as part of a transaction that was not initiated by the consumer.”17

The second method is for the consumer to enact a freeze18 on their credit, which prohibits a reporting agency from sharing a report on the data subject with any entity requesting one. It then becomes the responsibility of the consumer to turn the freeze on, turn it off, or temporarily suspend it when initiating a transaction where the data subject approves sharing a credit report.

By using the alert or freeze lever, the data subject inserts themselves into the communication flow between the credit monitoring company and the entities who receive the reports, making those transactions require the participation and consent of the data subject.

The limitation remains that this is specific to credit reporting, and leaves out any other sort of data collection or distribution on the data subject.

Consent GDPR- Privacy by Default

Where it falls entirely upon the data subject to initiate controls for consent within the FCRA, the GDPR instead clearly protects the data subject by changing the dynamic: privacy is the default, and consent must be established for data collection. Data on a data subject must not be processed beyond the purpose of the initial transaction, and those purposes must be listed, made in clear language, and transparent19. A data subject can grant consent, change their mind, and revoke consent. The largest and most defining point here is that within the GDPR, privacy is the default, and consent from the data subject is required for any variation. This applies across the board for all transactions, and is not limited to banking or credit monitoring.

Contextual Approach to Privacy Protection

In the above review and comparison of the FCRA and GDPR, we have lightly touched on some key principles and differences between the two laws, and noted a difference in the contextual approach to privacy.

FCRA and a Contextual Approach

When considering the FCRA, if only looking at the information flows between the credit monitoring company and the recipient of credit reports, we see a clearly defined information flow. The material being asked for and provided matches, and falls within expected norms between those two entities. Where this information flow breaks down is that the material being provided is about a data subject, the data subject doesn’t ask for the report to be made, and the transaction between the reporting agency and the consumer of the report may fall entirely outside the knowledge of the data subject.

Consent of the data subject for the collection of the material within a credit report is not even considered, and the data protections in the FCRA are limited specifically to transactions regarding credit reporting and monitoring. Within those limitations, the FCRA does offer the data subject some default protection regarding their health care information20. This protection could be considered, within a contextual approach, as a natural limit on the information flow.

While privacy is not directly considered in most of the FCRA, there are actions a data subject can take that put them into the communication flow, like alerts and freezes. The data subject becomes a participant in all credit report communication flows, and the provided information transfer would thus be considered within context.

GDPR and a Contextual Approach

The GDPR is built with a contextual approach, as can be seen in several of the recitals and directives within the document. For example, “Personal data shall be collected for specified, explicit and legitimate purposes, and not further processed in a manner that is incompatible with those purposes.”21 If looking at contextual integrity as a privacy heuristic, then the entirety of Article 5, Recital 1 could be considered a method to define, in law, what acceptable information flows are expected to be within an individual’s privacy rights and controllers’ responsibilities with regard to the rights of the data subject.

Preferences

The contextual approach makes the GDPR a much more active and supportive privacy law. The GDPR recognizes privacy as a human right and concern, and defines individuals as natural persons.

The FCRA is limited in scope by being specific to credit reporting. Today’s world of data gathering is far past credit monitoring, and using the FCRA as a privacy tool is like using a fly swatter to stop the rain. In today’s data landscape, the data gathered on people is much larger, drawn from more sources, aggregated, and used for automated decision making, far past the scope of the FCRA.

The most useful tools in the FCRA are also within the GDPR: the right to dispute, and even to file suit. However, the scope of protections is completely different, in part because the FCRA is about banking stability, whereas the GDPR is about a person’s information and how far that information should be allowed to go, how long it should linger, and even a person’s right to be forgotten22. The GDPR itself recognizes that individuals’ privacy “… must be considered in relation to its function in society and be balanced against other fundamental rights…23”.

While some argue that the data is already out, I would counter that simply because a boat has already taken on water doesn’t discount the need for patching it.

A person’s ability to lead a productive and participatory life safely in an open and free society can depend on data not being exposed. Freedom of expression, freedom of movement, and the ability to participate in society can depend on the expiration of information.

Under the FCRA, a consumer who went bankrupt, or had a lien that defaulted, can count on that information expiring (being removed) from their report in seven years. However, for data outside of credit reporting (a newspaper, web shopping history, app tracking, and more), there is no right to be forgotten; this can haunt people moving forward. If an individual has to move to become safe from persecution, be it from an institution or an individual, there are no protections under the FCRA. Under the GDPR, an individual is protected under both circumstances, and their ability to participate in society is not hampered by data following them indefinitely.

Per GDPR Recital 2, Respect of the fundamental Rights and Freedoms:

“… This Regulation is intended to contribute to the accomplishment of an area of freedom, security and justice and of an economic union, to economic and social progress, to the strengthening and the convergence of the economies within the internal market, and to the well-being of natural persons.”24 [italics added for emphasis]


Footnotes

  1. FCRA §602 Congressional findings and statement of purpose [15 U.S.C. §1681] ↩︎
  2. From the Fair Credit Reporting Act, Definitions §603(b) and §603(c) [15 U.S.C.§1681a] ↩︎
  3. From GDPR, Chapter 1, Article 4, Recital 1 ↩︎
  4. Regulation (EC) No 1060/2009 of the European Parliament and of the Council of 16 September 2009 on credit rating agencies, Article 2(2)(a). This regulation provides high-level guidance and directives for the banking sector, specific to investing and credit ratings within and across banks; it was formed from needs identified after the collapse of the banking markets. ↩︎
  5. Chase.com Do other countries have credit scores? ↩︎
  6. finmasters What Countries Have Credit Scores and How Do They Work? ↩︎
  7. Case C-634/21 – SCHUFA, where the court held that the automated scoring by SCHUFA (the German credit agency), used heavily in loan applications, conflicted with the GDPR under Article 15(1)(h) and Article 22. ↩︎
  8. FCRA §605(a) Information Excluded from Consumer Reports ↩︎
  9. FCRA §609(c) Summary of Rights to Obtain and Dispute Information ↩︎
  10. FCRA, §611 Procedure in case of disputed accuracy [15 U.S.C. § 1681i] ↩︎
  11. GDPR, Chapter 8, Articles 77, 78, and 79 ↩︎
  12. GDPR, Chapter 8, Article 82, Recital 1. This link leads to an easily searched GDPR maintained by the Horizon 2020 Framework Programme of the European Union. ↩︎
  13. FCRA §604 lists the permissible purposes of consumer reports. ↩︎
  14. FCRA §604(a)(3)(F) ↩︎
  15. Page 72, FCRA 615(d) calls to 604(c)(1)(B)[§1681b] ↩︎
  16. FCRA §605A(b) Extended Alerts ↩︎
  17. FCRA §605A(b)(1)(B) ↩︎
  18. FCRA §605(i) National Security Freeze ↩︎
  19. GDPR Chapter 2, Article 5, 6, and 7 ↩︎
  20. FCRA §603(d)(3) Restriction on sharing of medical information and §604(g) Protection of Medical Information ↩︎
  21. GDPR Chapter 2, Article 5, Recital 1(b) ↩︎
  22. GDPR Chapter 3, Article 17 ↩︎
  23. GDPR Chapter 1, Article 1, Recital 4 ↩︎
  24. GDPR, Chapter 1, Article 1, Recital 2 ↩︎

References

108th Congress (2003-2004). (2003, December 4). H.R.2622 – Fair and Accurate Credit Transactions Act of 2003. Retrieved from Congress.Gov: https://www.congress.gov/bill/108th-congress/house-bill/2622/text

Consumer Financial Protection Bureau. (n.d.). § 1022.1 Purpose, scope, and model forms and disclosures. Retrieved from CFPB, Consumer Financial Protection Bureau: https://www.consumerfinance.gov/rules-policy/regulations/1022/1/

Consumer Financial Protection Bureau. (n.d.). Appendix K to Part 1022 – Summary of Consumer Rights. Retrieved from CFPB, Consumer Financial Protection Bureau: https://www.consumerfinance.gov/rules-policy/regulations/1022/k/

European Parliament, Council of the European Union. (1995, October 24). Directive – 95/46 – EN – Data Protection Directive – EUR-Lex. Retrieved from EUR-Lex | Access to European Union Law: https://eur-lex.europa.eu/eli/dir/1995/46/oj

European Parliament, Council of the European Union. (2000, June 8). Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’). Retrieved from EUR-Lex | Access to European Union Law: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32000L0031

European Parliament, Council of the European Union. (2016, April 05). General Data Protection Regulation (Document 32016R0679) | Regulation – 2016/679 – EN – gdpr – EUR-Lex. Retrieved from EUR-Lex | Access to European Union Law: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679

European Parliament, Council of the European Union. (n.d.). GDPR.EU – General Data Protection Regulation (GDPR). Retrieved from GDPR.EU: https://gdpr.eu

European Securities and Markets Authority. (2022, October 28). Guidelines on the Scope of the CRA Regulation. Retrieved from ESMA | European Securities and Markets Authority: https://www.esma.europa.eu/sites/default/files/library/esma80-196-6345_guidelines_on_the_scope_of_the_cra_regulation.pdf

Federal Trade Commission. (2023, May). Fair Credit Reporting Act. Retrieved from Federal Trade Commission: https://www.ftc.gov/system/files/ftc_gov/pdf/fcra-may2023-508.pdf

Gesley, J. (2024, 01 10). European Union: Court of Justice Rules Credit Scoring Constitutes ‘Automated Individual Decision-Making’ under GDPR. Retrieved from Library of Congress: https://www.loc.gov/item/global-legal-monitor/2024-01-10/european-union-court-of-justice-rules-credit-scoring-constitutes-automated-individual-decision-making-under-gdpr/

Legal Information Institute. (n.d.). Cornell Law School, Legal Information Institute, LII > U.S. Code > Title 22 > Chapter 78. Retrieved from Legal Information Institute, Cornell Law School: https://www.law.cornell.edu/uscode/text/22/chapter-78

Karst, K. L. (1966, Spring). The Files: Legal Controls Over the Accuracy and Accessibility of Stored Personal Data. Retrieved from Duke Law – Law and Contemporary Problems: https://scholarship.law.duke.edu/lcp/vol31/iss2/8/

Legal Information Institute. (n.d.). Cornell Law School, Legal Information Institute, LII >U.S.Code>Title 15>Chapter 41>Subchapter III. § 1681b. Retrieved from Legal Information Institute, Cornell Law School: https://www.law.cornell.edu/uscode/text/15/1681b

Legal Information Institute. (n.d.). Cornell Law School, Legal Information Institute, LII>U.S.Code>Title 11. Retrieved from Legal Information Institute, Cornell Law School: https://www.law.cornell.edu/uscode/text/11

Official Journal of the European Union. (2024, 01 09). Consolidated Version of the Treaty on the Functioning of the European Union. Retrieved from EUR-Lex | Access to European Law: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:12016E/TXT&qid=1732727796448

Caution – In Cyber Regulation

It is interesting to discuss caution in cyber regulation. While caution is an integral part of the regulatory process, we currently see an incautious trend of dismantling regulations that were established with expert knowledge, deliberation, and care.

Cautious step 1: Initiation and Decision for an Agency

Building a regulatory agency requires that multiple branches of government recognize the need for expertise in creating rules ensuring public safety and security.

Article II, §2, Clause 2[1] states that the President, “by and with the advice and consent of the Senate, shall appoint … all other Officers of the United States, whose Appointments are not herein otherwise provided for….” Agency formation is a careful, deliberate, and cautious process.

Cautious step 2: Designing & Approving, Laws to develop an Agency

Once the need for an agency is recognized, Congress must pass laws directing the agency’s actions and scope on the subject[2]. Making a law is inherently cautious, involving committee reviews, debates, and votes. Only after approval by both chambers can the law(s) be submitted to the President for signature.

Cautious step 3: Procedural Guidance Upon Agencies

An Agency’s scope is defined by the law(s) Congress passed to establish it. The Administrative Procedure Act (APA) structures how agencies operate, including rules and guidelines for process and procedure. Agencies must publicly share their actions, methods, and processes in the Federal Register.[3] The allowances for secrecy are defined[4], and the participation of the public is built into the procedure in General Notice §4(a)(b)(c)(d).

Caution is expressed in deliberation and methodology, to develop the greatest understanding of the rule to be made. These processes apply to any regulatory rule, allowing for cool minds and diverse input, and are no different for cyber.

Once a rule is proposed, it is often challenged in court by industries and others seeking to block or modify it. Clearly, the craft of drafting and enacting any regulation is designed with care and caution.

Lack of Caution?

There is an area where caution is lacking. The judiciary risks dismantling regulations beyond its scope of understanding, and neglecting its duty of review in favor of deregulation. The increasing reliance on the Major Questions Doctrine suggests that Congress should draft more specific laws. This ignores the initial cautious step, where Congress recognized that expertise on these matters lay outside its purview. This troubling lack of caution in regulation raises concerns about our agencies’ ability to be effective, and about the potential risks posed by insufficient protections against cyber threats.


[1] Constitution Annotated, on the congress.gov site, has not only the full text of the constitution, but as seen in the link, a break down of sections and relevance in current exploration.

[2] A Guide to the Rulemaking Process, Prepared by the Office of the Federal Register. What gives agencies the authority to issue regulations.

[3] 5 U.S.C § § 551-559, Administrative Procedure. An easier to read description specifically to rule making can be found on the Cornell Law School LII site.

[4] Administrative Procedure Act PDF – Public Information §3 (1)(2), Rule Making §4 (1)(2)


Reference material list can be found here.

Universal Opt Out & Global Privacy Controls

What is the significance of UOO and GPC in the context of digital privacy and consumer rights?

Universal Opt Out (Mechanism) (UOO(M)) is not configured per website, but is a standardized signal sent from a browser to all visited websites. Universal Opt Out Mechanisms include GPC and will likely include similar technologies in the future.

Global Privacy Control (GPC)1 is a browser setting indicating a user’s preferences regarding the collection, distribution, and sale of the user’s data. It is a signal sent over HTTP(S) and exposed through the DOM (Document Object Model) (GitHub, 2024). It is specific to web browsers and HTTP protocols, meaning it is for internet browsers and does not apply to IoT or other methods of data collection. GPC must be flagged in each browser used; if a user surfs with GPC on in Firefox, but later that day visits the same site in another browser, the new browser will also need to be set to the user’s preferences.
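To make the mechanics concrete, here is a minimal Python sketch of how a server might detect the GPC opt-out; the function name and dictionary shape are illustrative assumptions, not from any official SDK. Per the GPC proposal, participating browsers send a `Sec-GPC: 1` request header, and the same preference is exposed to page scripts as `navigator.globalPrivacyControl`.

```python
def gpc_opt_out(headers: dict) -> bool:
    """Return True if the request carries the GPC opt-out signal.

    Browsers with GPC enabled send the `Sec-GPC: 1` request header
    on every request; HTTP header names are case-insensitive, so the
    lookup is normalized first.
    """
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1"

# A GPC-enabled browser's request is treated as an opt-out:
assert gpc_opt_out({"Sec-GPC": "1", "User-Agent": "Firefox"})
# Absence of the header simply means no opt-out signal was sent:
assert not gpc_opt_out({"User-Agent": "Chrome"})
```

As the section notes, this covers only the HTTP surface: a preference expressed in one browser says nothing about requests arriving from another browser or device.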

The future of UOOM will likely include other mechanisms and services, and expand past just HTTP. UOOM has room to grow to encompass multiple signals: GPC for HTTP(S), and other mechanisms for mobile devices, IoT, perhaps even ISPs. As the IoT and information flows continue to grow, so too will the need for toolsets and regulations.

Legal & Regulatory Framework

One of the key components in many of the US laws is the narrowing of the term processing. For example, Colorado’s new law allows users to opt out of processing “to advertising and sale…”2 (Rule 1.01, CCR 904-3) (Colorado Attorney General, 2021). California also focuses on the “Consumers’ Right to Opt Out of Sale or Sharing…”3 (California Privacy Protection Agency, 2020). The proposed New York law in the assembly focuses on targeted advertising, sale, and profiling4 (New York Assembly, 2024).
Interestingly, California, Colorado, and the GDPR (EU) all recognize and use the GPC HTTP signal in their laws, and New York’s proposal requires the acceptance of any type of opt-out signal from multiple types of devices (leaving the door open for new UOOMs).

Support

Focusing on the California Privacy Rights Act is a good place to start because California is the most populous state in the union and represents the largest tech industry.

The California AG’s lawsuit against Sephora proved that the state is willing to enforce those rules.

The mandate for opting out seems clear on the surface, yet different entities define “sale” differently, and the suit against Sephora helped clarify that a sale doesn’t have to include a financial transaction. In California law, sale of data means making it available “to a third party for monetary or other valuable consideration”5 (like rewards programs, or supplying it to a service provider). A browser with that signal turned on has opted out of collection, distribution, and sale of the user’s data, and the responsibility of the data collector (in this case Sephora) does not stop at the point of receiving the signal. The collector must not share or distribute the data, and must make clear to service providers that the user has opted out and the data is not available, should not be collected, and cannot be part of the transaction.6 (Office of Attorney General, San Francisco Superior Court, 2022)

Do Consumers Have Control of Their Data?

Sadly, no; UOOM and GPC are not the end game. They are the very beginning, and necessary to start the conversation about opting out of data collection and sale.

Currently the UOOM and GPC are specific to HTTP, and browser driven. A regular person may surf using Chrome (where GPC isn’t default and requires an add-on) or Firefox (where GPC is default in private browsing), but if they switch to Edge, or their phone, the GPC flag may not be there.

From watching videos of the Colorado AG and other law officials discussing GPC7, there are also misunderstandings and misconceptions about how a user is identified on the web, with some arguing that the user’s data isn’t collected until passing a sign-in wall. Faulty understanding of the technology can lead to faulty assumptions and make enforcement impossible; for example, if the people drafting or enforcing the law don’t understand or agree on an identifier, how can protection be enacted and enforced?

For consumers, it offers an incomplete version of privacy. The opt-out selection can be removed when you dump your cache, and you have to set it again. GPC doesn’t carry across browsers or devices. Even if the company knows it’s you, and you have signed in and opted out of tracking in Firefox, if you log in using another device you are not sending the opt-out signal. How companies should treat a user who has opted out but then navigated using a different tool has not been settled, and is not part of the laws.

Privacy settings on HTTP(S) are a great starting point, and it is exciting to be moving in the right direction. However, GPC reflects only a small fraction of the consumer data that is tracked and monetized. Consider the report by the FTC in October of 2021 regarding the privacy practices of six of our major Internet Service Providers (Federal Trade Commission, 2021).

What Are Some Conflicts Between UOOM and Convenience?

Access to Information Friction Points

Currently, because UOOM is not required across all states, nor adopted across all platforms, there are still sites that will prevent viewing if you don’t allow their cookies. In those instances, individuals can be blocked from information.

Companies that can make money with your data without needing to sell it won’t feel any impact. But smaller companies may find acquiring data for their projects more difficult. Will the price for the sale of data go up (from ISPs, or other data sources) when there is less competition? Will this make the market less competitive and harder for younger startups and innovation?

Privacy V. Convenience

As for privacy v. convenience, there isn’t much to say. This is an initial step to grant some controls and reduce the transmission of some data. Data continues to be collected from non-flagged browsers and non-HTTP sources.

The convenience of the selection is a great first step, and a distinct improvement over opting out at each site. However, the GPC and its limitations need to be explained more clearly in the support documentation of the different browsers.

Example: WaPo

The Washington Post appears to have used and accepted Universal Opt Out as a marketing tool. It is listed on the GPC site, yet the WP privacy documents make clear that it will segregate and disregard the GPC if your IP or any other information indicates you are in a location where GPC is not required by law.

The WP looks good on the GPC Founding Organizations page while actively striving to do the bare minimum. The WP also strongly encourages the use of its apps by limiting browser functionality on mobile devices, while its Privacy Policy8 makes clear it gathers data on “…sites, mobile and tablet apps and other online products and services…”9 (Washington Post, 2024).

Using Firefox private browsing (where GPC is automatic), I navigated from the Privacy Statement to the Your Privacy Choices page, where it is evident that the GPC opt-out flag is received. That same page indicates that if you don’t reside in the states where it is enforced, your privacy choice may be reset. Whether they do this or not is unclear, but given their verbiage, and the amount of time spent writing these documents, it is likely that the user’s location triggers automation to allow tracking and selling outside of the areas where GPC is required by law.

Monetizing data appears to be important enough to make these marketing decisions.

Increase Awareness

Currently, only people who already care search out and find information about privacy.

Awareness increases when there are pushes on legislation through links and mentions in the news media. I don’t know how to make it “sexy”, but perhaps early education and exercises could increase awareness among the young and their parents/caregivers.

Support Materials & Website Improvements

There are basic absences on all of the sites regarding privacy and GPC, such as:

  • Simplified explanations,
  • Quick start guides,
  • Why some cookies are necessary, and
  • What a third party is, and why it matters.

Essentially, to get the interest and information out, advocates must fight the noise of endless information pollution. If the Colorado or California AG had influencer contacts, that could be a point to leverage.

However, there is nothing to leverage if simplified support materials are not available. If they leveraged an influencer now and directed traffic to their websites, any campaign would fail, because the information provided is poorly developed for lay persons and isn’t available in multiple languages.

The closest I can get to marketing, is to suggest: Simplify, sexify, amplify.

Future of UOO & Privacy-Enhancing Tech

The GPC as a UOOM tool is a fantastic start. I would hope it is only a start, and that privacy advocates and technologists will work together to explore the other areas that need addressing. In fact, starting small, like the GPC, may be exactly the right start, if advocates can amplify the discussion of its value and create stories of success. Those same stories can then be leveraged to ease the progression and deployment of the next tool. I suspect it is easiest to develop the laws and tools in this process from smallest to largest: from HTTP(S) to mobile to IoT, tracking across devices, and eventually to IP. This enables the defining of terms that can then be used in the next stage, and allows the time and space for measurement of success. Once we have some established rules and mechanisms for privacy rights, we can explore what that means with regard to AI. We cannot establish rules around AI specific to privacy rights prior to having some rules about privacy rights.

However, I do hope that the process has already begun; inertia is a battle that is regularly lost.

Policy Recommendations

I think one of the key components that must be addressed to enhance UOOM is to incorporate the right to be forgotten into the rule making. While it is within the GDPR, it is completely absent from the US laws being developed and enacted.

The US laws define the legal gathering and use of data to include “publicly available information.”

Consider the draft of the American Privacy Rights Act of 202410, which states that “publicly available information” is excluded from covered data, §2(9)(B)(iii) (Senate & House of Representatives, 2024).

It defines Publicly Available Information to mean any information that “… has been lawfully made available to the general public…” §2(32)(A).

Yet consider the Supreme Court decision in DOJ v. Reporters Comm. for Freedom of the Press, 489 U.S. 749 (1989) (U.S. Supreme Court, 1989).

Page 763 states:

“…To begin with, both the common law and the literal understandings of privacy encompass the individual’s control of information concerning his or her person. In an organized society, there are few facts that are not at one time or another divulged to another. [SCOTUS Footnote 14] Thus, the extent of the protection accorded a privacy right at common law rested in part on the degree of dissemination of the allegedly private fact and the extent to which the passage of time rendered it private. [ SCOTUS Footnote 15] According to Webster’s initial definition, information may be classified as “private” if it is “intended for or restricted to the use of a particular person or group or class of persons: not freely available to the public.”11

This would mean that just because information was public once upon a time does not mean it is public now. The footnotes are very interesting and tie nicely with the contextual integrity heuristic: selective disclosure and fixing limits upon publicity. Just because there is information about an individual attending university, it does not follow that it should be shared with that individual’s shopping service 30 years later.


Footnotes

  1. GPC Signal Definition defining a signal transmitted over HTTP and through the DOM, GitHub, March 22, 2024 ↩︎
  2. Rule 1.01 CCR 904-3   ↩︎
  3. California Consumer Privacy Act of 2018, Amended in 2020, § 1798.120 ↩︎
  4. New York State Assembly. (2024) Bill S00365: An Act to Enact the New York Privacy Act § 1102.2 ↩︎
  5. California Consumer Privacy Act of 2018, Amended in 2020, § 1798.140(ad)(1) ↩︎
  6. Filed Judgement – Office of the Attorney General, San Francisco County Superior Court, Aug 24, 2022 – the judgment & Sephora Settlement. Section 6 offers some clarity on the definition of Sale. Laymen’s terms of the same can be found at the same site, with the Press Release, Settlement Announcement, August 24, 2022. ↩︎
  7. Video list provided at the end of this document. Includes presentations by law offices discussing the Colorado and the California Privacy laws. ↩︎
  8. Washington Post Privacy Policy ↩︎
  9. Italics added for emphasis ↩︎
  10. 2024 American Privacy Rights Act (APRA) ↩︎
  11. DOJ v. Reporters Comm. For Free Press, 489 U.S. 749 (1989) pg -763 through 764 ↩︎

Videos

AG Colorado- Data Privacy and GPC Webinar Colorado office of Attorney General, Phil Weiser AG

CPRA Session 5 Universal Opt Outs and Global Privacy Control Sheri Porath Rockwell, California’s Lawyers Association, and Stacy Grey, Director of Legal Research and Analysis at Privacy Forum. Guest Speakers Dr. Rob van Eijk, EU managing Director, Future of Privacy Forum, and Tanvi Vyas, Principal Engineer at Mozilla

TEDx – Data Privacy and Consent | Fred Cate Fred Cate, VP for research at Indiana University, Distinguished Professor of Law at Indiana University Maurer School of Law, and Senior Fellow of the Center for Applied Cybersecurity Research.

Lessons Learned from California on Global Privacy Control Donna Frazier, SR VP of Privacy Initiatives at BBB National Programs and Jason Cronk, Chair and founder of the Institute of Operational Privacy Design.

Tools Approachable to Small & Mid-Sized Businesses

MS CRS: Information Systems Security Engineering

Review CISA List of Tools and Services

I looked for cybersecurity tools that would be most useful and approachable to a small or mid-sized company, specifically regarding protection of the internal network, intellectual property, workflows, etc. Areas to keep in mind include technical requirements, coding skill levels, surface area monitoring, information sharing, and initiation costs. Examples used in this document are from the CISA list of Cybersecurity Best Practices Services.

Some of the areas of importance to a small business include:

  • Is it a service or a tool?
  • Surface area monitoring including passwords
  • Scan for weaknesses regularly
  • Does it require coding or not (and what languages is it compatible with)
  • Updated information sharing
  • Latest vulnerability tables; how many and which ones
  • Knowledge Bases, Help files, Initiation videos, etc.

Services

There are many services out there that enable a company to outsource its security. This paper discusses tools, removing services from review.

Tools

There appeared to be three main categories of tools:

  1. Code as Security (within a development pipeline),
  2. Customizable suites that require coding literacy, and
  3. Customizable Identity and Access Management (IAM) tools, that require a high level of technical literacy but do not require full coding literacy (at least at start).

Code as Security

The first category, Code as Security, comprises the tools that require coding skill, knowledge, and understanding. This subset of tools helps within the development pipeline, but does not provide coverage for the business as a whole. For example, tools like Google OSS-Fuzz are useful to a company that has a development team, perhaps sells SaaS, and keeps coders within the IT or security team. OSS-Fuzz and similar Security as Code tools are handy within the development pipeline, but don’t represent a full coverage or protection suite.
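To illustrate the idea behind fuzzing tools like OSS-Fuzz, here is a toy Python sketch; it is not a substitute for those tools, and the parser, function names, and planted bug are all hypothetical. A fuzzer throws large volumes of random input at a target and records inputs that trigger unexpected failures:

```python
import random

def parse_record(data: bytes):
    """Hypothetical parser under test: expects b'key=value'."""
    key, sep, value = data.partition(b"=")
    if not sep:
        raise ValueError("missing '='")  # documented rejection
    return key, value[0]  # planted bug: IndexError when value is empty (b"k=")

def fuzz(target, runs=20000, seed=0):
    """Feed random byte strings to `target`, collecting crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            target(blob)
        except ValueError:
            pass              # expected, documented failure mode
        except Exception:     # anything else is a bug candidate
            crashes.append(blob)
    return crashes

crashes = fuzz(parse_record)
```

Any crashing input necessarily ends in `b"="`, pointing straight at the empty-value bug; real fuzzers such as OSS-Fuzz layer coverage guidance and corpus management on top of this brute-force core.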

Customizable Suite of Security Tools Requiring Coding

The second category, customizable suites of security tools, requires development-level personnel; the amount of command line and other coding knowledge required is high. Using Grype as an example: it would require an internal dev team to establish it, create the dashboards, and manage it. This sort of tool requires keeping a portion of developers available for monitoring and updating, not just the dashboard and metrics tracking, but also to watch and maintain the software itself. Many of these tools are available on GitHub, Bitbucket, or other repository systems. Constant review and tracking of source files and updates would be necessary, as well as monitoring different boards for the latest risks to verify the chosen tool is keeping up to date. If a company is going to establish a security team for this, it then has to watch the tool’s development itself, to ensure the tool remains safe and that use of the tool stays current with the source code. Selecting this type of tool likely requires a full-time CySec officer and team.

Cloud Protection Suites & Identity Access Management

Cloud protection suites that include Identity and Access Management (IAM) tools are our third tool category. These are larger protection suites, often provided by the cloud provider. Microsoft Entra ID (formerly Azure Active Directory), Google Security Command Center, and AWS IAM fall within this category.

These tool sets require a good understanding of technology, but do not require a team of coders and developers to manage them (at least to start). These tools have the ability to build the reports and graphics required to convey complex data upstream, and have enough technical power to input workflows, track exposure and surface area, run odd-behavior analytics, and maintain constant monitoring of the known surface area within that environment.

These larger tool sets, which include Identity and Access Management (IAM), are an accessible starting point for many small to mid-sized companies. The dashboards that come with these tools can help identify areas of exposure that may require looking for add-ons. Each of the above-mentioned tool sets has a marketplace for additional functionality, including third-party vendors.

Of the three tool sets mentioned, we will more fully explore Google Security Command Center (SCC), because it has the easiest and simplest point of entry for a small to mid-sized company that may not have developed access management or cybersecurity previously. Third-party compatibility as a deciding factor will not be explored here.

Entra, AWS, and SCC tool sets have similar abilities and set up requirements at the small to intermediate business level.

Google Security Command Center (SCC)

Google Security Command Center is a cloud-based security platform that monitors the attack surface area and alerts the operator to threats, weaknesses, incorrect configurations, and more. It is built with the ability to prioritize, or "threat-level identify," the threats. SCC allows the operator to select a threat and view what it is, why it is a threat, and the recommended mitigations and/or solutions.
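The triage idea behind that prioritization can be sketched in a few lines. This is a hypothetical illustration only: the finding structure and severity labels below are invented for the example and do not match the real SCC API schema.

```python
# Hypothetical sketch of severity-based triage, similar in spirit to how
# SCC ranks findings. The finding dicts are invented for illustration.
SEVERITY_RANK = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}

def prioritize(findings):
    """Return findings ordered most-urgent-first by severity."""
    return sorted(findings, key=lambda f: SEVERITY_RANK.get(f["severity"], 4))

findings = [
    {"name": "open-firewall", "severity": "HIGH"},
    {"name": "stale-api-key", "severity": "CRITICAL"},
    {"name": "public-bucket", "severity": "MEDIUM"},
]
for f in prioritize(findings):
    print(f["severity"], f["name"])
```

The point is not the sorting itself, but that the operator sees the most urgent items first instead of a flat, unranked list.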

Setup

Google Security Command Center is the most approachable service of the three mentioned above, and it has some of the best introductory materials to help small to medium companies accomplish the initial lift required to take that first step into cybersecurity.

GCP -> IAM Permissions

The initial setup of Google Security Command Center requires setting up Google IAM, from within the Google Cloud Platform -> IAM page.

Even the IAM setup requires five roles within the Google Cloud Platform -> IAM permissions page[i]. The operator setting up the SCC will need to establish the organization and select the services.
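A simple pre-flight check makes this concrete: before starting setup, verify that the operator actually holds every required role. The role names below are placeholders invented for the sketch; consult Google's SCC documentation for the actual required roles.

```python
# Pre-flight sketch: confirm the operator holds every role needed before
# SCC setup begins. Role names here are illustrative placeholders, not
# the authoritative list from Google's documentation.
REQUIRED_ROLES = {
    "roles/securitycenter.admin",                 # illustrative
    "roles/resourcemanager.organizationViewer",   # illustrative
}

def missing_roles(granted):
    """Return the required roles the operator does not yet hold."""
    return sorted(REQUIRED_ROLES - set(granted))

granted = ["roles/securitycenter.admin"]
print(missing_roles(granted))
```

Running a check like this before touching the console turns a vague "do I have access?" question into a concrete to-do list for the organization admin.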

The "Standard" (free) tier's built-in services include Security Health Analytics, which can identify misconfigured virtual machines, containers, networks, storage, and identity and access management policies. At the Standard tier, scanning covers "high level" misconfigurations; coverage can be increased by purchasing a higher-level service. For example, if the company requires API key scanning, key rotation checks, or coverage of other configuration issues, it would need to move up from the Standard to the Premium tier. Understanding and researching the differences between tiers falls on the team member(s) setting up the security. However, even starting at the free "Standard" tier provides more security than choosing not to do it at all.

Initial work starts with Identity and Access Management (IAM): the operator setting up the SCC will have to communicate across multiple teams and stakeholders, developing roles, permissions, and standards. This is not unique to the SCC; it would be required of every IAM tool or platform. There are times when cybersecurity and resiliency have dependencies, where one process cannot be implemented until another is accomplished[ii]. Understanding permissions, roles, groups, and access is a requirement that must be met to achieve any level of cybersecurity coverage.

Secondary setup is to define areas of interest. Correctly establishing the services, providers, databases, and exposure points is necessary for the tool to monitor and report on attack surface areas and traffic flow. Again, this is not a cost unique to SCC, but it does represent required resources and should be considered.

Once fully set up, the SCC continuously monitors the attack surface area, provides reports, and suggests paths of control, response, and remediation where needed. The initial scan will likely take longer than usual (hours), but after that, the Standard plan runs a scan twice a day.

Google Cloud, Security Command Center — SWOT

Some areas of opportunity may also be considered weaknesses. For example: not having a particular report is a weakness; having third-party integrations that build reports is an opportunity; and the security of that third party, and who is responsible for it, is a threat. With that in mind, let's go a little deeper.

One of the greatest assets of a system like this is that, as part of a behemoth tech company, these tool sets have access to some of the largest resources for monitoring, tool development, remediation of their own defects, and the discovery and management of the latest threats. This matters for small to medium companies because there is no way a single individual, or even a single team, can keep up with the constantly changing threat landscape. Leaving that task to the tool set is a huge asset to a small company.

There are challenges; no product is perfect out of the box. Each of the listed tool sets can integrate with many third parties for more targeted coverage and reporting. Google Security Command Center has the Google Cloud Marketplace, which offers thousands of compatible add-ons, services, and tools. If operators don't find an exact match, they are likely to find something that comes close. Some of these integrations will take more work if they are native to a different platform, and that should be considered when deciding on a cloud protection system.

Of course there are differences between the AWS, Entra, and Google options. A simple example is their firewalls: at the time of writing, it appears that AWS offers AWS VPN (site-to-site and point-to-site) where Google offers Cloud VPN (site-to-site). Google's cloud security model is not as mature as AWS's, but AWS has been called overwhelmingly complex for small businesses or teams without extensive cloud experience. Google may not have the same level of threat detection as AWS, but it can be easier to launch and is considered less complex.

Growth could require re-tooling (congratulations)

If a company grows from mid-sized to large, the team managing the SCC would have to expand with it. The ability to tailor reports could become insufficient as reporting and compliance demands grow. Growth may force a revisiting of whether the tools are sufficient, or whether in-house teams and developers using different tools is the path forward. The ability and flexibility offered to larger companies' cybersecurity will differ between the three platforms listed here. At that point, I would suggest a celebratory dinner before researching what tools to acquire and manage next.

[i] Getting Started with SCC Playlist

[ii] NIST Developing Cyber-Resilient Systems



Other References & Related Articles

Free Cybersecurity Services and Tools – CISA

Free Non-CISA Cybersecurity Services – CISA

CISA’s Public Safety Communications and Cyber Resiliency Toolkit – CISA

Developing Cyber-Resilient Systems: A Systems Security Engineering Approach – NIST – December 2021

AWS vs Azure vs Google Cloud Security Comparison – BisBot Business Admin Tools – April 2024

Google Identity Services vs. Active Directory – Jumpcloud (addon service to GIS) – June 2023

Microsoft Entra ID

Overview of Attack Surface Management – Microsoft Security Exposure Management – March 2024

What is Security Command Center – Google – March 2024

Google IAM

GCP Security Command Center – Pros & Cons – JIT – Feb 2024

Google Cloud Security Command Center – Google

Getting Started with Security Command Center – Google – March 2023

Google Marketplace: Command Center Services – Google 

Getting Started with Security Command Center Playlist – Google – YouTube

AWS vs. Azure vs. Google Cloud: Security Comparison – Sysdig – Feb 2023

NIST Developing Cyber-Resilient Systems – December 2021

Blog Sample – Serverless

A sample of technical writing via Blog.

What is Serverless – in Layman's Terms

"Serverless" is an interesting name that really has less to do with the application, and more to do with the technology hosting and storing it. Serverless applications do make use of servers; they just use them differently than in the past.

If you consider an application to be a product, activity, or service, you can in turn also think of the server as the house in which that product, activity, or service is homed. In traditional server systems, that house is static, probably like your house, or mine.

In the current “Serverless” system, you can have that same product, activity, and service, but the house can change as the needs grow or shrink- like adding a room when you need more space, or renting that room out when space is not being used.

Serverless technology has benefits for both the server host and the producer of the application. Applications using serverless architecture only pay for services while actively using them, as in executing a process.

Let’s Take a More Technical Look

The best-known advantage and selling point of serverless computing is that it economizes the use of cloud resources. Serverless providers only charge for the time that code is executing, maximizing function and profitability for both the provider and the developer. Interestingly, serverless has also increased stability, thanks to spinning up services/instances as needed and having redundancy built into the system.
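A back-of-the-envelope comparison shows why pay-per-execution matters. All figures below are invented for illustration and are not any provider's actual prices: an always-on server is billed per hour whether or not it is doing work, while a serverless function is billed only for execution time.

```python
# Illustrative cost model (invented rates, not real provider pricing):
# always-on server billed per hour vs. serverless billed per second of
# actual execution.
def server_cost(hours, rate_per_hour):
    """Monthly cost of a server that bills for every hour it exists."""
    return hours * rate_per_hour

def serverless_cost(invocations, seconds_each, rate_per_second):
    """Monthly cost of a function billed only while it executes."""
    return invocations * seconds_each * rate_per_second

month_hours = 730  # roughly one month of wall-clock hours
always_on = server_cost(month_hours, 0.05)          # -> 36.5
on_demand = serverless_cost(100_000, 0.2, 0.00002)  # -> 0.4
print(always_on, on_demand)
```

For a lightly used workload the gap is dramatic; for a workload that executes nearly continuously, the always-on server can win, which is why the economics depend on traffic shape.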

The number of applications and services that have moved to serverless is a testament to its economical use and function.

Additional strengths include even greater cost reduction when multiple applications share common components, and advantages in defining workflows.

Current thinking on defining and describing serverless includes calling it event-driven, or a Function as a Service (FaaS) protocol. Serverless architecture is best used to process events, or discrete chunks of data generated as a time series.

How it Works

Data arrives at the application (via a human or an endpoint), and the architecture incorporates an API gateway that accepts the data and determines which serverless component receives it.

Regardless of which host runs the application's serverless architecture, the runtime environment passes the data to the component, where it is processed and returned to the gateway, either for further processing by other runtime functions or to be returned to the user complete.
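The gateway-dispatch pattern just described can be sketched as a routing table that maps an event type to the component that handles it. The handlers and event shapes below are invented for illustration, not any provider's real API.

```python
# Minimal sketch of the gateway-dispatch pattern: a routing table maps
# an event type to the serverless component that handles it. Handlers
# and event fields are invented for illustration.
def resize_image(event):
    return {"status": "resized", "id": event["id"]}

def index_document(event):
    return {"status": "indexed", "id": event["id"]}

ROUTES = {"image.upload": resize_image, "doc.upload": index_document}

def gateway(event):
    """Accept an event and dispatch it to the matching component."""
    handler = ROUTES.get(event["type"])
    if handler is None:
        return {"status": "no-handler", "id": event.get("id")}
    return handler(event)

print(gateway({"type": "image.upload", "id": 7}))
```

In a real deployment the routing table lives in the provider's gateway configuration rather than in code, but the shape of the decision is the same.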

  1. Application Development
    • Developers write code, and deploy to the cloud provider.
  2. Cloud Host
    • Application Code is hosted by the cloud provider, and homed in a fleet of servers.
  3. Application Use
    • Requests are made to execute the Application code.
    • The cloud provider creates a new container to run the code in.
    • The container is deleted when the execution has been completed
      • Usually after a time period of inactivity
How Serverless Works
Simple flow diagram

Considerations

It's important to keep in mind that serverless systems are not intended to become the complete application. Successful use of serverless requires separating data input from computing actions, and this separation will affect all stages of development and testing.

Timed out

One challenge is that serverless isn't as successful with longer computation times. If processing takes too long, serverless can stop and require a cold start; it simply may not work for that longer time period. There are some workarounds for this, but they can be problematic. One fix is to make lots of little computations that, when broken apart, are fast enough to work well in a serverless environment; but the amount of coding time and rebuilding by developers can be prohibitive.
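The chunking workaround can be sketched simply: one long computation is split into many short ones, each safely under a per-invocation time budget, with the results combined at the end. The workload and chunk size below are invented for illustration.

```python
# Sketch of the chunking workaround: instead of one long-running
# invocation, run many short ones and combine their results. The
# workload (sum of squares) and chunk size are illustrative.
def process_chunk(chunk):
    """One short 'invocation': small enough to finish before a timeout."""
    return sum(x * x for x in chunk)

def run_in_chunks(data, chunk_size):
    """Simulate many short invocations replacing one long pass."""
    total = 0
    for i in range(0, len(data), chunk_size):
        total += process_chunk(data[i:i + chunk_size])
    return total

print(run_in_chunks(list(range(10)), 3))  # -> 285, same as one big pass
```

The answer is identical either way; what changes is that no single invocation runs long enough to hit the platform's timeout, at the cost of restructuring the code.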

Serverless is Stateless (lack of persistence & its impact)

Another consideration is that serverless functions are stateless: individual functions accept input, process that input, and output a result. By design, there is no local or persistent storage.

The lack of persistence has impacts in both development and testing. For example, developers of data processing applications often want to temporarily persist data that may be needed a few steps later, and testing can depend on maintaining state from one step of a workflow to the next, where the results of previous operations serve as input to subsequent steps.
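The usual workaround is to push state out of the function entirely. In this sketch a plain dict stands in for an external store such as a cloud database; each function stays stateless and re-hydrates what it needs by key. The job and field names are invented for illustration.

```python
# Sketch of working around statelessness: state lives in an external
# store (here, a dict standing in for a cloud database), and each
# stateless function re-hydrates what it needs by key.
store = {}  # stands in for an external key/value store

def step_one(event):
    """First invocation: persist intermediate state externally."""
    store[event["job"]] = {"raw": event["value"]}
    return {"job": event["job"]}

def step_two(event):
    """Later invocation: re-hydrate persisted state and extend it."""
    state = store[event["job"]]
    state["doubled"] = state["raw"] * 2
    return state

step_one({"job": "j1", "value": 21})
print(step_two({"job": "j1"}))
```

Between the two calls the function instance may have been destroyed and recreated; only the external store carries the job forward, which is exactly the maintenance and security trade-off the text describes.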

It becomes challenging to test more than one function at a time, and replicating a serverless system to test a process that uses multiple functions is not always possible.

The most common approach is to break development and tests into even smaller processes. This requires a heavy lift at the beginning to transform the workflow, as well as breaking down development and testing coverage into micro units rather than full processes.

Some testers and developers have resorted to ad hoc methods of persisting data, such as creating and writing files to a cloud database. This can make an application more difficult to maintain, and it could have security impacts depending on the platform, product, or material being stored.

Major providers now document best-practice methods and workarounds for providing persistence. AWS has introduced Step Functions, Microsoft Azure has Durable Functions and Logic Apps, and there are open-source add-on solutions as well.
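The idea these orchestration services share can be shown with a toy sketch: the orchestrator, not the functions, carries state from one stateless step to the next. This is not the real Step Functions or Durable Functions API; the steps and state fields are invented for illustration.

```python
# Toy orchestrator sketch illustrating the idea behind services like AWS
# Step Functions: state is threaded through a chain of stateless steps
# by the orchestrator itself. Steps and fields are illustrative.
def validate(state):
    state["valid"] = state["amount"] > 0
    return state

def charge(state):
    state["charged"] = state["valid"]
    return state

def orchestrate(steps, state):
    """Run each step in order, feeding each one the previous output."""
    for step in steps:
        state = step(state)
    return state

print(orchestrate([validate, charge], {"amount": 5}))
```

Each step remains individually stateless and testable; the persistence problem is handed to the orchestration layer, which is precisely what these managed services productize.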

Wrap Up

Serverless, or Function as a Service, is one of the greatest transitions in recent computational history and demand. As the cost of moving data becomes more affordable, the relative cost shifts onto storage and computation. Serverless architecture is a leap forward here, moving our storage and computation from a static system to a kinetic one, allowing peaks and valleys to be reflected in both costs and savings for providers and consumers. Finding a way to distribute the costs of both storage and functions based on live, active use is a huge leap forward, and we are still at the beginning stages of it.

What's coming to serverless? Things to keep an eye on include security, persistent storage, and data integrity. The global serverless computing market is expected to grow at a compound annual growth rate of more than 22% between 2024 and 2031.1


Other related and interesting content can be found at the following:


Footnote

  1. https://www.skyquestt.com/report/serverless-architecture-market ↩︎