Non-Human Identity (NHI) and PLCs

Introduction: PLCs in Everyday Automation

When people consider robots, many picture the humanoid replacements seen on TV, with digitized faces and funny walking gaits. Most don’t consider how robotics is already part of nearly every aspect of everyday life. The backbone of the controls for much of our infrastructure, including critical systems such as water systems, transportation, and turbines, as well as non-critical systems such as automated warehouses and similar robotics applications, is the humble Programmable Logic Controller (PLC).

To illustrate this, consider something as simple as buying a bottle of water at your local market. That bottle traveled through multiple automated networks before reaching the shelf. It was likely stored in an automated warehouse, where PLCs controlled its journey from storage to a truck, merging it with other products into a perfectly stacked, mixed pallet. This entire process is orchestrated by a network of PLCs ensuring seamless movement.

PLCs are the backbone of millions of automation services. To illustrate their ubiquity and importance, we will follow the PLC through an automated warehouse and see how something so common can be so critical.

What does a PLC do?

As mentioned above, PLC stands for Programmable Logic Controller, but that isn’t enough for the layperson to understand what a PLC does. A PLC is, in fact, a computer used for industrial automation.

It is designed to replicate a set operation or process over and over again, while collecting vital information from connected systems such as sensors, SCADA (Supervisory Control and Data Acquisition) systems, and HMIs (Human Machine Interfaces).

Based on this input, the PLC determines the appropriate response, such as activating motors for conveyors, lifts, and other components within an automated warehouse. The PLC operates by detecting the state of everything it is connected to, following its repeatable logic, and communicating output back to those connected devices; in our example, it turns the motors for conveyors, lifts, and other parts of an automated warehouse system on or off, while also broadcasting status to the larger system.1
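
To make that scan cycle concrete, here is a minimal, hypothetical sketch in Python. Real PLCs run ladder logic or other IEC 61131-3 languages rather than Python, and the tag names are invented; the point is the read-solve-write loop itself.

```python
import time

def read_inputs():
    """Poll connected sensors; stubbed here with a single photo-eye tag."""
    return {"photo_eye_RC001": True}  # hypothetical sensor tag

def write_outputs(outputs):
    """Energize or de-energize actuators; stubbed here as prints."""
    for tag, state in outputs.items():
        print(f"{tag} -> {'ON' if state else 'OFF'}")

# The scan cycle: read inputs, solve the logic, write outputs, repeat.
while True:
    inputs = read_inputs()
    # Repeatable logic: run the release conveyor only while a box is present.
    outputs = {"motor_RC001": inputs["photo_eye_RC001"]}
    write_outputs(outputs)
    time.sleep(0.01)  # real controllers complete a scan every few milliseconds
```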

PLC in its “Natural Environment”

Before diving into how PLCs identify and communicate, let’s explore an environment in which they operate. Consider an automated warehouse as an amusement park ride for your bottle of water. Products start at different bays and ride conveyors that climb, twist, merge, and more, all to end at the loading dock; PLCs manage every transition.

Automated warehouses function as massive logistical hubs, moving thousands of products daily through an intricate system of conveyors, sorters, and palletizers. These warehouses are structured into zones, each managed by dedicated PLCs that control specific, identified (NHI) conveyor types, motors, actuators, and so on. Here are some human-readable examples of identifiers:

  1. PLC: (PLC001, PLC002, etc.) – Each PLC must be identified and identifiable for communication and control to happen.
  2. Bay: (B001, B002, etc.) – Where the product waits to be “picked” and released by the PLC onto a conveyor.
  3. Release Conveyor: (RC001, RC002, etc.) – Moves products out of storage.
  4. Merge Conveyor: (MC001, MC002, etc.) – Multiple conveyors merge into a single conveyor monitored by sensor controllers (e.g., Raspberry Pi devices).
  5. Divert Conveyor: (D001, D002, etc.) – Splits boxes from one conveyor onto two or more conveyors.
  6. Sequence Check: (SC001, SC002, etc.) – Found at intersections to verify proper order.
  7. Palletizer Merge: (PM) – We will call the conveyor that delivers product to the palletizer zone “PM”; within the palletizer there are multiple zones.
  8. Palletization & Patterns – Different automated systems have different palletization or loading patterns. (We will not go into the specifics of any individual warehouse’s pattern system.)

At each stage, PLCs ensure that products follow the correct path, communicating with sensors and using their programmed logic to maintain order. At many PLC locations there may be another controller working in tandem: a sensor controller, often a Raspberry Pi. This card-sized controller is used specifically to capture data from sensors, lasers, and the like, identifying a container as it rolls through a sequence check. This device must have its own identity and be able to communicate with, and be recognized by, the PLC. Now that we have a high-level overview of the warehouse flow, we can explore how PLCs identify and communicate with each other within this environment.
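
As a toy illustration of the naming scheme and the sequence-check role, the hypothetical Python sketch below has sensor controller SC001 compare scanned container IDs against the expected order and raise a fault for its zone PLC. All identifiers are invented for this example.

```python
# Hypothetical sequence check at SC001: compare the scanned order of
# containers against the expected order and flag the zone PLC on a mismatch.
EXPECTED_ORDER = ["BOX-0101", "BOX-0102", "BOX-0103"]  # invented container IDs

def sequence_check(scanned_ids, expected=EXPECTED_ORDER):
    for position, (scanned, wanted) in enumerate(zip(scanned_ids, expected)):
        if scanned != wanted:
            # In a real system this becomes a fault signal to the zone PLC.
            return f"FAULT at SC001, position {position}: saw {scanned}, expected {wanted}"
    return "OK: sequence verified"

print(sequence_check(["BOX-0101", "BOX-0103", "BOX-0102"]))
```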

PLC Identity and Non-Human Identity (NHI)

For PLCs to function in an automated system, they must be able to recognize and communicate with all the devices around them – motors, actuators, and even secondary controllers like the Raspberry Pi. To do this, they rely on a set of unique identifiers known as Non-Human Identity (NHI). These identifiers allow PLCs to track and communicate with every connected device in real time, enabling the automation of operations.

Some of the key NHI mechanisms used in Industrial Automation include:

  • IP or MAC Addresses – Common in modern Ethernet-based networks.
  • Industrial Protocols – Such as Ethernet/IP, Modbus TCP/IP, and Profinet.
  • Legacy Network Identifiers – Older systems use Profibus, CANopen, and DeviceNet, which assign Node IDs instead of IP addresses, enabling PLCs to communicate with different machines.
  • Memory Addresses & Tags – PLCs store references to connected devices, ensuring recognition even after hardware replacements.
  • Routing Tables & Network Maps – Define communication pathways in complex systems.
  • Raspberry Pi Running Node-RED – Fetches data from an Allen-Bradley PLC using Modbus TCP/IP as a quick SCADA alternative, and can in some instances form a subnetwork within a PLC network (a minimal polling sketch follows this list).
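
To ground that last bullet, here is a minimal sketch of the Raspberry Pi side of that arrangement: polling a PLC’s holding registers over Modbus TCP. It assumes the pymodbus library (3.x); the IP address, unit ID, and register meanings are placeholders, not values from any real installation.

```python
# Poll a PLC's holding registers over Modbus TCP (assumes pymodbus 3.x).
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.0.10", port=502)  # hypothetical PLC001
client.connect()

# Read two registers; what they mean depends entirely on the PLC program.
result = client.read_holding_registers(address=0, count=2, slave=1)
if not result.isError():
    conveyor_speed, box_count = result.registers
    print(f"conveyor speed: {conveyor_speed}, boxes seen: {box_count}")

client.close()
```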

In the warehouse, these identifiers allow different zones to work in tandem. When a product is released from storage, the release conveyor’s PLC communicates with merge and divert PLCs, ensuring proper sequencing for palletization. If anything goes wrong – like a PLC not recognizing a product’s assigned path – it triggers a fault, forcing human workers to intervene and correct it. Even a single miscommunication can create delays that ripple through the entire warehouse.

Mixed-Age Systems and Heartbeat Identification

Industrial automation systems change, adapt, and evolve over time. As facilities upgrade, they often end up with mixed-age systems, where legacy PLCs must coexist with modern networked controllers and machines2. In such environments, older PLCs often rely on heartbeat signals—simple, periodic pings that confirm a device is online. If a heartbeat is lost, the system assumes failure and may trigger emergency shutdowns.

While this mechanism ensures safety, it also presents a risk: heartbeat ID spoofing could allow an unauthorized device to mimic a PLC’s presence, potentially disrupting warehouse operations. (More on this below.)
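
To see how thin this identity check is, consider the hypothetical heartbeat listener below: any host that sends the right device ID string counts as “alive,” which is exactly what makes spoofing feasible. The port and ID format are invented for illustration.

```python
# Hypothetical heartbeat listener: a follower PLC is considered alive as
# long as a UDP packet naming its ID arrives within the timeout window.
# Note what is missing: there is no authentication, so ANY host sending
# b"PLC002" passes the check.
import socket
import time

TIMEOUT = 5.0                        # seconds without a heartbeat = failure
last_seen = {"PLC002": time.time()}  # invented follower ID

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))         # placeholder heartbeat port
sock.settimeout(1.0)

while True:
    try:
        data, addr = sock.recvfrom(64)
        device_id = data.decode(errors="ignore").strip()
        if device_id in last_seen:
            last_seen[device_id] = time.time()
    except socket.timeout:
        pass
    for device_id, seen in last_seen.items():
        if time.time() - seen > TIMEOUT:
            print(f"{device_id} heartbeat lost -- triggering shutdown logic")
            last_seen[device_id] = time.time()  # reset so the alarm fires once
```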

Multiple PLC Networks and Leader-Follower Configurations

In automated warehouses, PLCs do not operate in isolation; they are part of segmented networks. To manage complexity, PLCs are often grouped into leader-follower configurations, where a leader PLC oversees several subordinate controllers. This structure:

  • Reduces network congestion by centralizing decision-making.
  • Ensures coordinated actions across multiple warehouse zones.
  • Helps isolate faults—if a follower PLC fails, the leader can reroute operations or trigger alerts.

When a PLC registers an error, its effects ripple through the system. The leader PLC broadcasts the error, and upstream PLCs must determine whether it will impact their operations. If an issue is detected, they halt to prevent further errors. Meanwhile, downstream PLCs continue running until a sequence check further along the process detects a missing product. At that point, the downstream PLC registers the failure and alerts its own upstream systems, triggering a secondary shutdown.

This cascading effect can halt sections of the warehouse or, in extreme cases, bring the entire facility to a standstill if warehouse staff and Controls Engineers do not quickly identify and resolve the originating PLC failure.

For example, consider a merge-zone PLC detecting a sequencing error. The PLC immediately notifies its leader PLC, which then signals upstream systems to pause product flow. By stopping movement before the issue spreads further, the system minimizes disruption and reduces downtime.
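
The toy Python model below traces that flow with plain objects. The class and zone names are invented; the point is the direction of notification, not any vendor’s API.

```python
# Toy model of leader-follower fault propagation: a follower reports a
# fault to its leader, and the leader halts the zones feeding that follower.
class PLC:
    def __init__(self, name, leader=None):
        self.name, self.leader, self.running = name, leader, True

    def report_fault(self, reason):
        print(f"{self.name}: FAULT ({reason})")
        if self.leader:
            self.leader.handle_fault(self)

class LeaderPLC(PLC):
    def __init__(self, name, upstream):
        super().__init__(name)
        self.upstream = upstream  # zones feeding the faulted zone

    def handle_fault(self, follower):
        print(f"{self.name}: pausing zones upstream of {follower.name}")
        for plc in self.upstream:
            plc.running = False
            print(f"  {plc.name} paused")

release = PLC("RC001")                            # release conveyor zone
leader = LeaderPLC("PLC001", upstream=[release])  # zone leader
merge = PLC("MC001", leader=leader)               # merge conveyor zone
merge.report_fault("sequencing error")
```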

The Interconnected Nature of PLCs

The ability of PLCs to recognize and communicate with each other and with partner systems is what keeps an automated system running smoothly. But as warehouses grow more complex, integrating mixed-age networks, external controllers, and industrial IoT devices, the question of identity becomes just as important as function. Without strong Non-Human Identity (NHI) mechanisms, PLCs cannot securely authenticate the machines they interact with, leaving gaps for errors and exploitation.

In the next section, we will explore some mechanisms PLCs use to establish identity. From IP/MAC addressing to legacy network identifiers, each method plays a role in ensuring that every PLC, sensor, and actuator knows its place in the system. These identities and identity methods allow PLCs to interact reliably, but they come with limitations and challenges.

Key Non-Human Identity Methods in Automated Warehouses

We continue by exploring some of the top uses and vulnerabilities of Non-Human Identity in automated warehouses, specifically as they relate to the PLC.

IP or MAC Address-Based Identification

When properly set up, PLCs rely on IP or MAC addresses for network communication and identification. In most warehouse environments, leader PLCs may use multiple identifiers for redundancy and protection, while subordinate PLCs may be identified by their MAC address for simplicity.

While MAC spoofing doesn’t get much news coverage, it does happen; a 2016 MAC spoofing attack cost millions of dollars3. In an industrial setting, even if a malicious actor is successfully blocked from traveling laterally or upstream by network segmentation, we have seen how a single PLC error can cascade and affect the whole system. Strong segmentation may not be enough to prevent disruptions.

And recall, the PLC is not only communicating with other PLCs, but with actuators and other devices such as sensors and interfaces. Without a good, regularly updated inventory of all connected devices, the impact of an identity failure can cascade across the entire system.
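
A first step is simply knowing what should be on the wire. The sketch below diffs observed MAC addresses against a maintained inventory; the addresses are invented, and how the observed set is collected (ARP tables, switch port reports) will vary by site.

```python
# Compare observed MAC addresses against a maintained device inventory.
INVENTORY = {
    "00:1d:9c:aa:01:01": "PLC001 (leader)",
    "00:1d:9c:aa:01:02": "PLC002 (merge zone)",
    "b8:27:eb:55:10:01": "SC001 (Raspberry Pi sensor controller)",
}

# In practice this set would come from an ARP table or switch report.
observed = {"00:1d:9c:aa:01:01", "b8:27:eb:55:10:01", "de:ad:be:ef:00:01"}

for mac in observed - INVENTORY.keys():
    print(f"ALERT: unrecognized device on the OT network: {mac}")
for mac in INVENTORY.keys() - observed:
    print(f"WARN: inventoried device not seen: {INVENTORY[mac]} ({mac})")
```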

Industry Protocols (Ethernet/IP, Modbus TCP/IP, Profinet, etc.)

Industrial control systems were originally designed with isolation in mind, and isolation was considered secure when Operational Technology (OT) networks were separate from IT infrastructure. However, as automation environments have become more interconnected, these once-closed networks are now discoverable and face security risks.

Many industry-standard protocols, including Ethernet/IP, Modbus TCP/IP, and Profinet, were developed assuming the network was closed and secure. These protocols were designed without encryption or authentication mechanisms4,5, making them inherently insecure for communication over modern networks.

This introduces a path to access and capture MAC addresses, verification protocols, and other operational information, widening the door for attack. There are security add-ons, but the core issue remains: these protocols were not designed with cybersecurity in mind, leaving critical systems vulnerable.
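
The gap is visible in the wire format itself. The sketch below builds a Modbus TCP “write single coil” request byte by byte, per the public frame layout: transaction ID, protocol ID, length, unit ID, function code, address, value. There is no field anywhere for a credential. (The coil address is a placeholder; nothing like this should be pointed at live equipment.)

```python
import struct

# Modbus TCP "Write Single Coil" (function 0x05) request.
# MBAP header: transaction ID, protocol ID (always 0), remaining length, unit ID.
# PDU: function code, coil address, value (0xFF00 = ON).
frame = struct.pack(
    ">HHHBBHH",
    0x0001,  # transaction ID
    0x0000,  # protocol ID
    6,       # byte count that follows this field (unit ID + PDU)
    1,       # unit ID of the target device
    0x05,    # function code: write single coil
    0x0010,  # coil address (placeholder)
    0xFF00,  # ON
)
# 12 bytes total, and no authentication field to fill in: any host that
# can reach TCP port 502 can send a frame like this.
print(frame.hex(" "))
```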

Legacy Network Identifiers

Recall the “mixed-age systems” discussed above? Older PLCs may not be fully compatible with newer PLCs, even within the same brand. When a facility upgrades, it is very unlikely to replace all existing hardware and components; instead, legacy products that still work remain, sometimes segmented by network, or even using “heartbeat” protocols, where an older PLC broadcasts a heartbeat (ping) as “proof of life.”

The problem: this heartbeat/follower PLC protocol lacks any NHI identifiers at all, opening another avenue for entry and disruption. Combined with an unencrypted network protocol, a threat actor may be able to map older network segments, identify vulnerable devices, and plan accordingly.

Claroty’s Team82 has demonstrated the risk in multiple ways; one of the most interesting involves leveraging a legacy PLC to access SCADA systems. The fastest way to achieve this? Trigger a fault in a legacy PLC, prompting an engineer to connect via SCADA or HMI to review it, at which point the attacker gains access to the engineer’s SCADA and more6. If older devices use a heartbeat as their identifier, the bar for that access is pretty low.

Protect the Non-Human Identity – Protect the System

By now, it should be clear just how deeply PLCs are embedded in modern life. They don’t just move your bottle of water from storage to shipment; they quietly control much of the world’s infrastructure, from manufacturing and logistics to water plants and critical utilities.

A warehouse shutdown is inconvenient, but what happens when a PLC error does more than stop operations? What if instead of halting a system, it mistakenly activates equipment? What if a disrupted PLC logic sequence sends the wrong command at the wrong time?

Can you imagine an entire pallet of water falling seven stories from a warehouse bay? Who was working there at the time, and how were they affected? Now take that same failure and apply it to a water treatment plant. What happens when a gate controlling chemical flow opens too early or too late?

Non-Human Identity in industrial automation is established through control systems, MAC and IP addresses, industrial protocols, and authentication mechanisms that help machines communicate with their intended counterparts. As automation networks grow more complex and interconnected, protecting these identity structures becomes critical. If a PLC’s identity is spoofed, or compromised, the consequences could ripple far beyond a single warehouse, impacting safety, security, and infrastructure at a much larger scale.


  1. DO Supply, Explaining HMI, SCADA, and PLCs, What They Do, and How They Work Together ↩︎
  2. The Robot Report, Automated Warehouse, Overcoming Common Software Implementation Challenges (p. 8) ↩︎
  3. Secure W2, MAC Spoofing Attacks Explained: A Technical Overview ↩︎
  4. Veridify Security, OT Security: Cybersecurity for Modbus ↩︎
  5. ODVA, Overview of CIP Security ↩︎
  6. Claroty Team82, Evil PLC Attack: Using a Controller as Predator Rather than Prey ↩︎

References

Allen-Bradley. (2005). Rockwell Automation | Literature | Documents | ag-um008. Retrieved from Rockwell Automation: https://literature.rockwellautomation.com/idc/groups/literature/documents/um/ag-um008_-en-p.pdf

DO Supply. (2019, February 4). Explaining HMI, SCADA, and PLCs, What They Do, and How They Work Together. Retrieved from DO Supply: https://www.dosupply.com/tech/2019/02/04/explaining-hmi-scada-and-plcs-what-they-do-and-how-they-work-together/

Hughes, C. (2025, February 20). Understanding OWASP’s Top 10 List of non-human identity critical risks. Retrieved from CSO: https://www.csoonline.com/article/3828216/understanding-owasps-top-10-list-of-non-human-identity-critical-risks.html

ODVA. (n.d.). ODVA | Technology Standards | Distinct CIP Services. Retrieved from ODVA: https://www.odva.org/wp-content/uploads/2023/07/PUB00319R2_CIP-Security-At-a-Glance.pdf

Panduit. (2022, October). Panduit | Markets | Documents | Infrastructure Warehouse Automation. Retrieved from Panduit: https://www.panduit.com/content/dam/panduit/en/website/solutions/markets/documents/infrastructure-warehouse-automation-cpcb261.pdf

OWASP. (2025). OWASP Non-Human Identities Top 10. Retrieved from OWASP: https://owasp.org/www-project-non-human-identities-top-10/2025/

Rockwell Automation. (2024, June). Rockwell Automation | Literature | PlantPAx Distributed Control System Configuration and Implementation. Retrieved from Rockwell Automation: https://literature.rockwellautomation.com/idc/groups/literature/documents/um/proces-um100_-en-p.pdf

Secure W2. (2025). MAC Spoofing Attacks Explained: A Technical Overview. Retrieved from Secure W2: https://www.securew2.com/blog/how-do-mac-spoofing-attacks-work

Sharon Brizinov, M. S. (2022, August 13). Claroty | Team82 | Evil PLC Attack: Using a Controller as Predator Rather than Prey. Retrieved from Claroty: https://claroty.com/team82/research/evil-plc-attack-using-a-controller-as-predator-rather-than-prey

Tecsys. (2024). infohub Tecsys | Resources | e-book | Improving Warehouse Operations with Low Code Application Platforms. Retrieved from Tecsys: https://infohub.tecsys.com/resources/e-book/improving-warehouse-operations-with-low-code-application-platforms

The Robot Report. (2024). Automated Warehouse | Overcoming Common Software Implementation Challenges. WTWH Media LLC.

Veridify Security. (n.d.). OT Security: Cybersecurity for Modbus. Retrieved from Veridify Security: https://www.veridify.com/ot-security-cybersecurity-for-modbus/

Dual Hat – NSA and CYBERCOM

Reasons for the Dual Hat, reasons against, and why the solution is complicated.

The National Security Agency (NSA) and U.S. Cyber Command (CYBERCOM) are both part of the U.S. Department of Defense, with a single leader overseeing both agencies. CYBERCOM operates under Title 10, governing military operations, while the NSA operates under Title 50, governing intelligence activities. While their missions are distinct, they frequently intersect in cyber operations.

Intelligence Gathering: Strategic vs. Operational

Intelligence gathering often overlaps with operational activities when identifying threat actors. The methods and tactics used may be inherently operational or offensive, blurring the distinction between intelligence and military operations.

Intelligence has a history of intersecting with military action, as seen within the DoD Law of War Manual. Item 16.1.2.1 lists cyber operations actions such as advance force operations, reconnaissance, and the gathering of intelligence1, identifying intelligence as a distinct act.

Splitting the NSA and U.S. Cyber Command would not change how cyber intelligence is gathered, but could increase costs, create duplicative efforts, and reduce efficiency.

To Split or Not

Post-Gathering: What to Do with the Intelligence?

The NSA’s directive to share intelligence with relevant agencies contrasts with CYBERCOM’s mission to disrupt and impose costs on adversaries. This divergence creates a conflict – who decides how the intelligence is used? For instance, if CYBERCOM wants to gather long-term intelligence or develop countermeasures without disclosure, it could clash with the NSA’s responsibility to share the data.

Splitting the NSA and U.S. Cyber Command would not change how cyber intelligence is gathered, but would likely increase costs and reduce operational efficiency. Maintaining the current dual-hat structure, however, may continue to create conflicts between the agencies’ differing missions, potentially complicating intelligence priorities.

Ultimately, the decision to split or consolidate involves weighing the trade-off between efficiency and resolving mission conflicts.


  1. DOD Law of War Manual, Updated July 2023, Office of General Counsel, Department of Defense ↩︎

References

Department of Defense. (2023, July). Office of General Counsel | Department of Defense | Treaty Documents > DoD Law of War Manual. Retrieved from Office of General Counsel | Department of Defense: https://ogc.osd.mil/Portals/99/Law%20of%20War%202023/DOD-LAW-OF-WAR-MANUAL-JUNE-2015-UPDATED-JULY 202023.pdf

Garamone, J. (2023, March 8). Cyber Command, NSA Successes Point Way to Future. Retrieved from U.S. Department of Defense: https://www.defense.gov/News/News-Stories/Article/Article/3322765/cyber-command-nsa-successes-point-way-to-future/

House.Gov. (2025). TITLE 10 / Subtitle A / PART I / CHAPTER 6 / §167b. Retrieved from uscode.house.gov: https://uscode.house.gov/view.xhtml?req=granuleid:USC-prelim-title10-section167b&num=0&edition=prelim

Maryuama, J. A. (2020, December 24). Split Up NSA and Cybercom. Retrieved from Defense One: https://www.defenseone.com/ideas/2020/12/split-nsa-and-cybercom/171033/

National Security Agency. (n.d.). About NSA/CSS Mission. Retrieved from NSA.gov: https://www.nsa.gov/about/mission-values/

Office of the Director of National Intelligence. (n.d.). Rev Book – 1947 National Security Act. Retrieved from Office of the Director of National Intelligence: https://www.dni.gov/index.php/ic-legal-reference-book/national-security-act-of-1947

Schoka, A. (2019, April 3). Cyber Command, The NSA, and Operating in Cyberspace: Time To End The Dual Hat. Retrieved from War On The Rocks: https://warontherocks.com/2019/04/cyber-command-the-nsa-and-operating-in-cyberspace-time-to-end-the-dual-hat/

Swaney, R. (2023, September 11). Why Keep the Cybercom and NSA’s Dual Hat Arrangement. Retrieved from Security Intelligence: https://securityintelligence.com/articles/why-keep-cybercom-and-nsas-dual-hat-arrangement/

Caution – In Cyber Regulation

It is interesting to discuss caution in cyber regulation. While caution is an integral part of the regulatory process, we currently see an incautious trend of dismantling regulations that were established with expert knowledge, deliberation, and care.

Cautious step 1: Initiation and Decision for an Agency

Building a regulatory agency requires that multiple branches of government recognize the need for expertise in creating rules ensuring public safety and security.

Article II, §2, Clause 2[1] states that the president “by and with the advice and consent of the Senate, shall appoint … all other Officers of the United States, whose Appointments are not herein otherwise provided for….” Agency formation is a careful, deliberate, and cautious process.

Cautious step 2: Designing & Approving Laws to Develop an Agency

Once the need for an agency is recognized, Congress must pass laws directing the agency’s actions and scope on the subject[2]. Making a law is inherently cautious, involving committee reviews, debates, and votes. Only after approval by both chambers can the law(s) be submitted to the President for approval.

Cautious step 3: Procedural Guidance Upon Agencies

An Agency’s scope is defined by the law(s) Congress passed to establish it. The Administrative Procedure Act (APA) structures how agencies operate, including rules and guidelines for process and procedure. Agencies must publicly share their actions, methods, and processes in the Federal Register.[3] The allowances for secrecy are defined[4], and the participation of the public is built into the procedure in General Notice §4(a)(b)(c)(d).

Caution is expressed in deliberation and methodology, to develop the greatest understanding of the rule to be made. These processes apply to any regulatory rule, allowing for cool minds and diverse input, and they aren’t different for cyber.

Once a rule is proposed, it is often challenged in court by industries and others seeking to block or modify it. Clearly, the craft of drafting and enacting any regulation is designed with care and caution.

Lack of Caution?

There is an area where caution is lacking. The judiciary risks dismantling regulations beyond its scope of understanding, neglecting its duty of review in favor of deregulation. The increasing reliance on the Major Questions Doctrine suggests that Congress should draft more specific laws. This ignores the initial cautious step, where Congress recognized that expertise on these matters lay outside its purview. This troubling lack of caution raises concerns about our agencies’ ability to be effective and the potential risks posed by insufficient protections against cyber threats.


[1] Constitution Annotated, on the congress.gov site, has not only the full text of the constitution but, as seen in the link, a breakdown of sections and their relevance to the current exploration.

[2] A Guide to the Rulemaking Process, Prepared by the Office of the Federal Register. What gives agencies the authority to issue regulations.

[3] 5 U.S.C. §§ 551–559, Administrative Procedure. An easier-to-read description specific to rulemaking can be found on the Cornell Law School LII site.

[4] Administrative Procedure Act PDF – Public Information §3 (1)(2), Rule Making §4 (1)(2)


Reference material list can be found here.

Universal Opt Out & Global Privacy Controls

What is the significance of UOO and GPC in the context of digital privacy and consumer rights?

A Universal Opt Out Mechanism (UOOM) is not configured per website; it is a standardized signal sent from the browser to every website visited. Universal Opt Out Mechanisms include GPC and will likely include similar technologies in the future.

Global Privacy Control (GPC)1 is a browser setting indicating a user’s preferences regarding the collection, distribution, and sale of the user’s data. It is an HTTP(S) signal, also exposed through the DOM (Document Object Model) (GitHub, 2024). It is specific to web browsers and HTTP protocols, meaning it applies to internet browsers and not to IoT or other methods of data collection. GPC must be flagged on each browser used; if a user surfs with GPC on in Firefox, but later that day visits the same site in another browser, the new browser will also need to be set to the user’s preferences.
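
On the receiving side, honoring GPC can start as a one-line header check. Below is a minimal sketch assuming Flask; the route and response text are illustrative, while the Sec-GPC request header (and the DOM property navigator.globalPrivacyControl) come from the GPC specification.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # A participating browser sends the header "Sec-GPC: 1".
    opted_out = request.headers.get("Sec-GPC") == "1"
    if opted_out:
        # Illustrative only: a real site would suppress its sale/share
        # pipelines for this user, not just change the page text.
        return "GPC signal received: data will not be sold or shared."
    return "No GPC signal on this request."

if __name__ == "__main__":
    app.run()
```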

The future of UOOM will likely include other mechanisms and services, expanding past just HTTP. UOOM has room to grow to encompass multiple signals: GPC for HTTP(S), and other mechanisms for mobile devices, IoT, perhaps even ISPs. As IoT and information flow continue to grow, so too will the need for the toolsets and regulations.

Legal & Regulatory Framework

One of the key components in many of the US laws is the narrowing of the term processing. For example, Colorado’s new law allows users to opt out of processing “to advertising and sale…”2 (Rule 1.01, CCR 904-3) (Colorado Attorney General, 2021). California also focuses on the “Consumers’ Right to Opt Out of Sale or Sharing…”3 (California Privacy Protection Agency, 2020). The proposed New York law in the assembly focuses on targeted advertising, sale, and profiling4 (New York Assembly, 2024).
Interestingly, California, Colorado, and the GDPR (EU) all recognize and use the GPC HTTP signal in their laws, and New York’s proposal requires the acceptance of any type of opt-out signal from multiple types of devices (leaving the door open for new UOOMs).

Support

Focusing on the California Privacy Rights Act is a good place to start, because California is the most populous state in the union and represents the largest tech industry.

The California AG’s lawsuit against Sephora proved that the state is willing to enforce those rules.

The mandate for opting out seems clear on the surface, yet different entities define “sale” differently, and the suit against Sephora helped clarify that a sale doesn’t have to include a financial transaction. In California law, a sale of data means making it available “to a third party for monetary or other valuable consideration”5 (like rewards programs, or supplying it to a service provider). A browser with that signal turned on has opted out of the collection, distribution, and sale of the user’s data; and the responsibility of the data collector (in this case Sephora) does not stop at the point of receiving the signal. The collector must not share or distribute the data, and must make clear to service providers that the user has opted out: the data is not available, should not be collected, and cannot be part of the transaction.6 (Office of Attorney General, San Francisco Superior Court, 2022)

Do Consumers Have Control of Their Data?

Sadly, no. UOOM and GPC are not the endgame; they are the very beginning, and necessary to start the conversation about opting out of data collection and sale.

Currently, UOOM and GPC are specific to HTTP, and browser driven. A regular person may surf using Chrome (where GPC isn’t default and requires an add-on) or Firefox (where GPC is on by default in private browsing), but if they switch to Edge, or to their phone, the GPC flag may not be there.

From watching videos of the Colorado AG and other law officials discussing GPC7, there are also misunderstandings and misconceptions about how a user is identified on the web, with some arguing that a user’s data isn’t collected until they pass a sign-in wall. Faulty understanding of the technology can lead to faulty assumptions and make enforcement impossible; for example, if the people drafting or enforcing the law don’t understand or agree on an identifier, how can protection be enacted and enforced?

For consumers, it offers an incomplete form of privacy. A site-level opt-out selection is removed when you dump your cache, and you have to set it again. GPC doesn’t carry across browsers or devices. Even if the company knows it’s you, you have signed in, and you opted out of tracking in Firefox; if you log in using another device, you are not sending the opt-out signal. How companies choose to collect when a user has opted out but navigated in using a different tool has not been settled, and is not part of the laws.

Privacy settings on HTTP(S) are a great starting point, and it is exciting to be moving in the right direction. However, GPC reflects only a small fraction of the consumer data that is tracked and monetized. Consider the FTC’s October 2021 report on the privacy practices of six major Internet Service Providers (Federal Trade Commission, 2021).

What Are Some Conflicts Between UOOM and Convenience?

Access to Information Friction Points

Currently, because UOOM is not adopted across all states, nor across platforms, there are still sites that will prevent viewing if you don’t allow their cookies. In those instances, individuals can be blocked from information.

Companies that don’t need to sell data to make money with your data won’t have any issue with it. But smaller companies may find acquiring data for their projects more difficult. Will the price for the sale of data go up (from ISPs or other data sources) when there is less competition? Will this make the market less competitive and harder for younger startups and innovation?

Privacy V. Convenience

As for privacy vs. convenience, there isn’t much to say. This is an initial step to grant some controls and reduce the transmission of some data. Data continues to be collected from non-flagged browsers and non-HTTP sources.

The convenience of the selection is a great first step, and a distinct improvement over opting out at each site. However, the GPC and its limitations need to be explained more clearly in the support documentation of the different browsers.

Example WaPo

The Washington Post appears to have used and accepted Universal Opt Out as a marketing tool. It is listed on the GPC site, yet the Post’s privacy documents make clear that it will segregate, and disregard, the GPC if your IP or any other information indicates you are in a location where GPC is not required by law.

The WP looks good on the GPC Founding Organizations page, while actively striving to do the bare minimum. The WP also strongly encourages the use of its apps by limiting browser functionality on mobile devices, while its Privacy Policy8 makes clear it gathers data on “…sites, mobile and tablet apps and other online products and services…”9 (Washington Post, 2024).

Using Firefox private browsing (where GPC is automatic), I navigated from the Privacy Statement to the Your Privacy Choices page, and it is evident that the GPC opt-out flag is received. That same page indicates that if you don’t reside in one of the states where GPC is enforced, your privacy choices may be reset. Whether they do or not is unclear, but given the verbiage and the amount of time spent writing these documents, it is likely that the user’s location triggers an automation to allow tracking and selling outside the areas where honoring GPC is required by law.

Monetizing data appears to be important enough to make these marketing decisions.

Increase Awareness

Currently, it is only people who already care who search for and find out about privacy.

Awareness is increased when there are pushes on legislation through links and mentions in the news media. I don’t know how to make it “sexy,” but perhaps early education and exercises could increase awareness among the young and their parents/caregivers.

Support Materials & Website Improvements

There are basic absences on all of the sites regarding privacy and GPC, such as:

  • Simplified explanations,
  • Quick start guides,
  • Why some cookies are necessary, and
  • What a third party is, and
    • why it matters.

Essentially, to get the interest and information out, advocates must fight through the noise of endless information pollution. If the Colorado or California AG had influencer contacts, that could be a point of leverage.

However, there is nothing to leverage if simplified support materials are not available. If they engaged an influencer now and directed traffic to their websites, any campaign would fail, because the information provided is poorly developed for laypersons and isn’t available in multiple languages.

The closest I can get to marketing, is to suggest: Simplify, sexify, amplify.

Future of UOO & Privacy Enhancing Tech

The GPC as a UOOM tool is a fantastic start. I would hope it is only a start, and that privacy advocates and technologists will work together to explore the other areas that need addressing. In fact, starting small, like the GPC, may be exactly the right start if advocates can amplify the discussion of its value and create stories of success. Those same stories can then be leveraged to ease the progression and deployment of the next tool. I suspect it is easiest to develop the laws and tools in this process from smallest to largest: from HTTP(S) to mobile to IoT, to tracking across devices, and eventually to IP. This enables the defining of terms that can then be used in the next stage, and allows the time and space to measure success. Once we have some established rules and mechanisms for privacy rights, we can explore what they mean with regards to AI. We cannot establish rules around AI specific to privacy rights before having some rules about privacy rights.

However, I do hope that the process has already begun; inertia is a battle that is regularly lost.

Policy Recommendations

I think one of the key enhancements that must be made to UOOM is to incorporate the right to be forgotten into the rulemaking. While it is within the GDPR, it is completely absent from the US laws being developed and enacted.

The US laws are defining the gathering and use of data deemed “publicly available information” as legal.

Consider the draft of the American Privacy Rights Act of 202410, which states that “publicly available information” is excluded from covered data, §2(9)(B)(iii) (Senate & House of Representatives, 2024).

It defines publicly available information to mean any information that “… has been lawfully made available to the general public…” §2(32)(A).

Yet in the Supreme Court decision DOJ v. Reporters Committee for Freedom of the Press, 489 U.S. 749 (1989) (U.S. Supreme Court, 1989), page 763 states:

“…To begin with, both the common law and the literal understandings of privacy encompass the individual’s control of information concerning his or her person. In an organized society, there are few facts that are not at one time or another divulged to another. [SCOTUS Footnote 14] Thus, the extent of the protection accorded a privacy right at common law rested in part on the degree of dissemination of the allegedly private fact and the extent to which the passage of time rendered it private. [ SCOTUS Footnote 15] According to Webster’s initial definition, information may be classified as “private” if it is “intended for or restricted to the use of a particular person or group or class of persons: not freely available to the public.”11

This would mean that just because information was public once upon a time does not mean it is public now. The footnotes are very interesting and tie nicely with the Contextual Integrity heuristic: selective disclosure and fixing limits upon publicity. Just because there is information about an individual attending university, it does not follow that it should be shared with that individual’s shopping service 30 years later.


Footnotes

  1. GPC Signal Definition defining a signal transmitted over HTTP and through the DOM, GitHub, March 22, 2024 ↩︎
  2. Rule 1.01 CCR 904-3   ↩︎
  3. California Consumer Privacy Act of 2018, Amended in 2020, § 1798.120 ↩︎
  4. New York State Assembly. (2024) Bill S00365: An Act to Enact the New York Privacy Act § 1102.2 ↩︎
  5. California Consumer Privacy Act of 2018, Amended in 2020, § 1798.140(ad)(1) ↩︎
  6. Filed Judgment – Office of the Attorney General, San Francisco County Superior Court, Aug 24, 2022 – the judgment & Sephora Settlement. Section 6 offers some clarity on the definition of Sale. Layman’s terms of the same can be found at the same site, with the Press Release, Settlement Announcement, August 24, 2022. ↩︎
  7. Video list provided at the end of this document. Includes presentations by law offices discussing the Colorado and the California Privacy laws. ↩︎
  8. Washington Post Privacy Policy ↩︎
  9. Italics added for emphasis ↩︎
  10. 2024 American Privacy Rights Act (APRA),   ↩︎
  11. DOJ v. Reporters Comm. For Free Press, 489 U.S. 749 (1989), pp. 763–764 ↩︎

Videos

AG Colorado – Data Privacy and GPC Webinar, Colorado Office of the Attorney General, Phil Weiser, AG

CPRA Session 5: Universal Opt Outs and Global Privacy Control – Sheri Porath Rockwell, California Lawyers Association, and Stacy Grey, Director of Legal Research and Analysis at the Future of Privacy Forum. Guest speakers Dr. Rob van Eijk, EU Managing Director, Future of Privacy Forum, and Tanvi Vyas, Principal Engineer at Mozilla.

TEDx – Data Privacy and Consent | Fred Cate Fred Cate, VP for research at Indiana University, Distinguished Professor of Law at Indiana University Maurer School of Law, and Senior Fellow of the Center for Applied Cybersecurity Research.

Lessons Learned from California on Global Privacy Control Donna Frazier, SR VP of Privacy Initiatives at BBB National Programs and Jason Cronk, Chair and founder of the Institute of Operational Privacy Design.

Tools Approachable to Small & Mid-Sized Businesses

MS CRS: Information Systems Security Engineering

Review CISA List of Tools and Services

I looked for cybersecurity tools that would be most useful and approachable to a small/mid-sized company, specifically regarding protection of the internal network, intellectual property, workflows, etc. Areas to keep in mind include technical requirements, coding skill levels, surface area monitoring, information sharing, and initiation costs. Examples used in this document are from CISA’s list of Cybersecurity Best Practices services.

Some of the areas of importance to a small business include:

  • Is it a service or a tool?
  • Surface area monitoring including passwords
  • Scan for weaknesses regularly
  • Does it require coding or not (and what languages it is compatible with)
  • Updated information sharing
  • Latest vulnerability tables; how many and which ones
  • Knowledge Bases, Help files, Initiation videos, etc.

Services

There are many services out there that enable a company to outsource its security. This paper discusses tools, removing services from review.

Tools

There appeared to be three main categories of tools:

  1. Code as Security (within a development pipeline),
  2. Customizable suites that require coding literacy, and
  3. Customizable Identity and Access Management (IAM) tools that require a high level of technical literacy but do not require full coding literacy (at least at the start).

Code as Security

The first category, Code as Security, covers the tools that require coding skill, knowledge, and understanding. This subset of tools helps within the development pipeline but does not provide coverage for the business as a whole. For example, tools like Google OSS-Fuzz are useful to a company that has a development team, perhaps sells SaaS, and has coders within the IT or security team. OSS-Fuzz and similar Code as Security tools are handy within the development pipeline, but don’t represent a full coverage or protection suite.

Customizable Suite of Security Tools Requiring Coding

The second category, customizable suites of security tools, requires development-level personnel; the amount of command-line work and coding knowledge required is high. Using Grype as an example: it would require an internal dev team to establish it, create the dashboards, and manage it. This sort of tool requires keeping a portion of developers available for monitoring and updating, not just of the dashboards and metrics tracking, but of the software itself. Many of these tools are available on GitHub, Bitbucket, or other repository systems. Constant review and tracking of source files and updates would be necessary, as well as monitoring different boards for the latest risks, to confirm the chosen tool is keeping up to date. If a company is going to establish a security team for this, it then has to watch the tool’s development itself, to ensure the tool remains safe and that its use remains current with the source code. Selecting this type of tool likely requires a full-time CySec officer and team.

Cloud Protection Suites & Identity Access Management

Cloud protection suites that include Identity and Access Management (IAM) tools are our third tool category. These are larger protection suites, often provided by the cloud provider. Microsoft Entra ID (formerly Azure Active Directory), Google Security Command Center, and AWS IAM fall within this category.

These toolsets require a good understanding of technology, but do not require a team of coders and developers to manage them (at least to start). These tools have the ability to build the reports and graphics required to convey complex data upstream, and have enough technical power to input workflows, track exposure and surface area, run behavior analytics, and constantly monitor the known surface area within that environment.

These larger toolsets, which include Identity and Access Management (IAM), are an accessible starting point for many small to mid-sized companies. The dashboards that come with these tools can be used to help identify areas of exposure that may require looking for add-ons. Each of the above-mentioned toolsets has a marketplace for additional functionality, including third-party vendors.

Of the three toolsets mentioned, we will more fully explore Google Security Command Center (SCC), because it has the easiest/simplest point of entry for a small to mid-sized company that may not have developed access management or cybersecurity previously. Discussion of third-party compatibility as a deciding factor will not be explored here.

Entra, AWS, and SCC toolsets have similar abilities and setup requirements at the small to intermediate business level.

Google Security Command Center (SCC)

Google Security Command Center is a cloud-based security platform that monitors the attack surface and alerts the operator to threats, weaknesses, misconfigurations, and more. It can prioritize or “threat level identify” the threats, and it allows the operator to select and view what the threat is, why it is a threat, and recommended mitigations and/or solutions.

Setup

Google Security Command Center is the most approachable of the three services mentioned above, and has some of the best introductory materials to help small to medium companies accomplish the initial lift required to take that first step into cybersecurity.

GCP -> IAM Permissions

The initial setup of Google Security Command Center requires setting up the Google IAM, from within the Google Cloud Platform -> IAM page.

Setup even for the IAM requires 5 roles within the Google Cloud Platform -> IAM permissions page[i]. The operator setting up the SCC will need to establish the organization and select the services.

The “Standard” (free) tier’s built-in services include Security Health Analytics, which can identify misconfigured virtual machines, containers, networks, storage, and identity and access management policies. At the Standard tier, the depth of scanning covers “high level” misconfigurations, and coverage can be increased by purchasing a higher-level service. For example, if the company requires API key scanning, rotation, or other configuration checks, it would need to move up from the Standard to the Premium tier. Understanding and researching the differences between the tiers falls upon the team member(s) setting up the security. However, even starting at the free “Standard” tier is better, and more secure, than choosing not to do it at all.

Initial work starts with Identity and Access Management (IAM): the operator setting up the SCC will have to communicate across multiple teams and stakeholders, developing roles, permissions, and standards. This is not unique to the SCC; it would be required of every IAM tool or platform. There are times when cybersecurity and resiliency have dependencies, where one process cannot be implemented until another is accomplished[ii]. Understanding permissions, roles, groups, and access is a requirement that must be met to achieve any level of cybersecurity coverage.

Secondary setup is to define areas of interest. Correctly establishing the services, providers, databases, and exposure points is necessary for the tool to be able to monitor and report on attack surface areas and traffic flow. Again, this is not a unique cost, but it does represent required resources and should be considered.

Once fully set up, the SCC continuously monitors the attack surface, provides reports, and suggests paths of control, response, and remediation where needed. The initial scan will likely take longer than usual (hours), but after that, the Standard plan runs a scan twice a day.
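
For a feel of day-two operation, the sketch below lists active findings using the google-cloud-securitycenter Python client. Treat it as an assumption-laden illustration: the organization ID is a placeholder, credentials are assumed to be configured, and the client library’s surface may differ between versions.

```python
# List active SCC findings across all sources in an organization.
# Assumes: pip install google-cloud-securitycenter, plus application
# default credentials with the SCC viewer roles discussed above.
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()
org_id = "123456789012"  # placeholder organization ID
all_sources = f"organizations/{org_id}/sources/-"

for result in client.list_findings(
    request={"parent": all_sources, "filter": 'state="ACTIVE"'}
):
    finding = result.finding
    print(finding.category, finding.severity, finding.resource_name)
```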

Google Cloud, Security Command Center — SWOT

Some areas of opportunity may also be considered weaknesses – for example, not having a report (weakness), but having third-party integrations that build reports (opportunity); then, what is the security of that third party and who is responsible (threat)? With that in mind, let’s dig a little deeper.

One of the greatest assets of a system such as this is that, as part of a behemoth tech company, these toolsets have access to some of the largest resources for monitoring, tool development, remediation of their own defects, and the discovery and management of the latest threats. This is an asset for small to medium companies because there is no way a single individual, or even a single team, can keep up with the constantly changing threat landscape. Keeping that task on the toolset is a huge asset to a small company.

There are challenges; no product is perfect out of the box. Each of the listed toolsets can integrate with many third parties for more targeted coverage and reporting. Google Security Command Center has the Google Cloud Marketplace, where there are thousands of compatible add-ons, services, and tools. If the operator doesn’t find an exact match, they are likely to find something that comes close. Some of these integrations will take more work if they are native to a different platform, and this should be considered when deciding on a cloud protection system.

Of course there are differences between the AWS, Entra, and Google options. A simple example is their firewalls: at the time of writing, it appears that AWS offers AWS VPN (site-to-site and point-to-site) where Google offers Cloud VPN (site-to-site). Google’s cloud security model is not as mature as AWS’s, but AWS has been called overwhelmingly complex for small businesses or teams without extensive cloud experience. Google may not have the same level of threat detection as AWS, but it can be easier to launch and is considered less complex.

Growth could require re-tooling (congratulations)

If a company grows from mid-sized to large, the team managing the SCC would have to expand in scale. The ability to tailor the reports could become insufficient as reporting and compliance demands grow. Growth may force a revisiting of whether the tools are sufficient, or whether in-house teams and developers using different tools are the path forward. The ability and flexibility of larger companies’ cybersecurity will differ among the three platforms listed here. At this point, I would suggest a celebratory dinner before revisiting what tools to research, acquire, and manage.

[i] Getting Started with SCC Playlist

[ii] NIST Developing Cyber-Resilient Systems



Other References & Related Articles

Free Cybersecurity Services and Tools – CISA

Free Non-CISA Cybersecurity Services – CISA

CISA’s Public Safety Communications and Cyber Resiliency Toolkit – CISA

Developing Cyber-Resilient Systems: A systems Security Engineering Approach – NIST December 2021

AWS vs Azure vs Google Cloud Security Comparison – BisBot Business Admin Tools – April 2024

Google Identity Services vs. Active Directory – Jumpcloud (addon service to GIS) – June 2023

Microsoft Entra ID

Overview of Attack Surface Management – Microsoft Security Exposure Management – March 2024

What is Security Command Center – Google – March 2024

Google IAM

GCP Security Command Center – Pros & Cons – JIT – Feb 2024

Google Cloud Security Command Center – Google

Getting Started with Security Command Center – Google – March 2023

Google Marketplace: Command Center Services – Google 

Getting Started with Security Command Center Playlist – Google – youtube 

AWS vs. Azure vs. Google Cloud: Security comparison – Sysdig – Feb 2023

NIST Developing Cyber-Resilient Systems – December 2021