Caution – In Cyber Regulation

It is interesting to discuss caution in cyber regulation. While caution is an integral part of the regulatory process, we are currently seeing an incautious trend of dismantling regulations that were established with expert knowledge, deliberation, and care.

Cautious step 1: Initiation and Decision for an Agency

Building a regulatory agency requires that multiple branches of government recognize the need for expertise in creating rules ensuring public safety and security.

Article II, §2, Clause 2[1] states that the President, “by and with the advice and consent of the Senate, shall appoint … all other Officers of the United States, whose Appointments are not herein otherwise provided for….” Agency formation is a careful, deliberate, and cautious process.

Cautious step 2: Designing & Approving Laws to Develop an Agency

Once the need for an agency is recognized, Congress must pass laws directing the agency’s actions and scope on the subject[2]. Making a law is inherently cautious, involving committee reviews, debates, and votes. Only after approval by both chambers can the law(s) be submitted to the President for signature.

Cautious step 3: Procedural Guidance Upon Agencies

An agency’s scope is defined by the law(s) Congress passed to establish it. The Administrative Procedure Act (APA) structures how agencies operate, including rules and guidelines for process and procedure. Agencies must publicly share their actions, methods, and processes in the Federal Register.[3] The allowances for secrecy are defined[4], and public participation is built into the procedure in General Notice §4(a)–(d).

Caution is expressed in deliberation and methodology, to develop the greatest possible understanding of the rule being made. These processes apply to any regulatory rule, allowing for cool minds and diverse input, and they are no different for cyber.

Once a rule is proposed, it is often challenged in court by industries and others seeking to block or modify it. Clearly, the craft of drafting and enacting any regulation is designed with care and caution.

Lack of Caution?

There is an area where caution is lacking. The judiciary risks dismantling regulations beyond its scope of understanding and neglecting its duty of review in favor of deregulation. The increasing reliance on the Major Questions Doctrine suggests that Congress should draft more specific laws. This ignores the initial cautious step, in which Congress recognized that expertise on these matters lay outside its purview. This troubling lack of caution in regulation raises concerns about our agencies’ ability to be effective and about the risks posed by insufficient protections against cyber threats.


[1] Constitution Annotated, on the congress.gov site, has not only the full text of the Constitution but, as seen in the link, a breakdown of sections and their relevance to the current exploration.

[2] A Guide to the Rulemaking Process, prepared by the Office of the Federal Register; explains what gives agencies the authority to issue regulations.

[3] 5 U.S.C. §§ 551–559, Administrative Procedure. An easier-to-read description specific to rulemaking can be found on the Cornell Law School LII site.

[4] Administrative Procedure Act PDF – Public Information §3 (1)(2), Rule Making §4 (1)(2)


Reference material list can be found here.

Universal Opt Out & Global Privacy Controls

What is the significance of UOO and GPC in the context of digital privacy and consumer rights?

Universal Opt Out (Mechanism) (UOO(M)) is not configured per website; it is a standardized signal sent to every visited website from a browser. Universal Opt Out Mechanisms include GPC and will likely include similar technologies in the future.

Global Privacy Control (GPC)1 is a browser setting indicating a user’s preferences regarding the collection, distribution, and sale of the user’s data. It is a signal transmitted over HTTP(S) and exposed through the DOM (Document Object Model) (GitHub, 2024). It is specific to web browsers and the HTTP protocol, meaning it applies to internet browsers and not to IoT or other methods of data collection. GPC must be flagged on each browser used: if a user surfs with GPC on in Firefox but later that day goes to the same site in another browser, the new browser will also need to be set to the user’s preferences.
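
For readers who want to see what the signal looks like in practice, here is a minimal TypeScript sketch of the DOM side. It assumes a browser that implements the proposed GPC specification, which exposes the preference as navigator.globalPrivacyControl; treat it as an illustration rather than a complete implementation.

```typescript
// Sketch only: assumes a browser implementing the proposed GPC spec,
// which exposes the user's preference as navigator.globalPrivacyControl.
// TypeScript's built-in DOM typings do not include it yet, so declare it here.
declare global {
  interface Navigator {
    globalPrivacyControl?: boolean;
  }
}

export function visitorHasOptedOut(): boolean {
  // true            -> the browser is sending the opt-out preference
  // false/undefined -> no GPC signal; fall back to site-specific choices
  return navigator.globalPrivacyControl === true;
}

if (visitorHasOptedOut()) {
  console.log("GPC detected: do not sell or share this visitor's data.");
}
```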

The future of UOOM will likely include other mechanisms and services and expand past just HTTP. UOOM has room to grow to encompass multiple signals: GPC for HTTP(S), and other mechanisms for mobile devices, IoT, perhaps even ISPs. As the IoT and information flow continue to grow, so too will the need for the toolsets and regulations.

Legal & Regulatory Framework

One of the key components in many of the US laws is the narrowing of the term processing. For example, Colorado’s new law allows users to opt out of processing related “to advertising and sale…”2 (Rule 1.01, CCR 904-3) (Colorado Attorney General, 2021). California also focuses on the “Consumers’ Right to Opt Out of Sale or Sharing…”3 (California Privacy Protection Agency, 2020). The proposed New York law in the Assembly focuses on targeted advertising, sale, and profiling4 (New York Assembly, 2024).

Interestingly, California, Colorado, and the GDPR (EU) all recognize and use the GPC HTTP signal in their laws, and New York’s proposal requires the acceptance of any type of opt-out signal from multiple types of devices (leaving the door open for new UOOMs).

Support

Focusing on the California Privacy Rights Act is a good place to start because California is the most populous state in the union and represents the largest tech industry.

The California AG lawsuit against Sephora proved that the state is willing to enforce those rules.

The mandate for opting out seems clear on the surface, yet different entities define “sale” differently, and the suit against Sephora helped clarify that a sale does not have to include a financial transaction. In California law, a sale of data means making it available “to a third party for monetary or other valuable consideration.”5 (Think rewards programs, or supplying it to a service provider.) A browser with that signal turned on has opted out of collection, distribution, and sale of its user’s data, and the responsibility of the data collector (in this case Sephora) does not stop at the point of receiving the signal. The collector must not share or distribute the data, and must make clear to service providers that the user has opted out: the data is not available, should not be collected, and cannot be part of the transaction.6 (Office of Attorney General, San Francisco Superior Court, 2022)
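
On the collector’s side, the same preference arrives with every request as an HTTP header. The sketch below, written against Node’s built-in http module, shows one hedged way a server might detect the signal and tag the request so downstream code never forwards the visitor’s data to service providers; the Sec-GPC header name comes from the GPC proposal, while the tagging logic is purely illustrative.

```typescript
import { createServer, IncomingMessage, ServerResponse } from "node:http";

// Per the GPC proposal, an opted-out browser sends "Sec-GPC: 1" with each request.
function requestCarriesGpc(req: IncomingMessage): boolean {
  return req.headers["sec-gpc"] === "1"; // Node lower-cases incoming header names
}

const server = createServer((req: IncomingMessage, res: ServerResponse) => {
  if (requestCarriesGpc(req)) {
    // Illustrative business rule: flag the response/session so that analytics
    // and advertising integrations are skipped for this visitor.
    res.setHeader("X-Visitor-Opted-Out", "gpc"); // hypothetical header name
  }
  res.end("ok");
});

server.listen(8080);
```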

Do Consumers Have Control of Their Data?

Sadly, no, UOOM and GPC are not the end game. UOOM and GPC are the very beginning, and necessary to start the conversation of opting out of data collection and sale.

Currently, UOOM and GPC are specific to HTTP, and they are browser driven. A regular person may surf using Chrome (where GPC isn’t default and requires an add-on) or Firefox (where GPC is on by default in private browsing), but if they switch to Edge, or to their phone, the GPC flag may not be there.

From watching videos of the Colorado AG and other law officials discussing GPC7, there are also misunderstandings and misconceptions about how a user is identified on the web. Some argue that the user’s data isn’t collected until passing a sign-in wall. Faulty understanding of the technology can lead to faulty assumptions and make enforcement impossible; for example, if the people drafting or enforcing the law don’t understand or agree on an identifier, how can protection be enacted and enforced?

For consumers it offers an incomplete picture of privacy. The opt-out selection can be lost when you clear your browser data, and you have to set it again. GPC doesn’t carry across browsers or devices. Even if the company knows it’s you, and you have signed in, and you opted out of tracking in Firefox, if you log in using another device you are not sending the opt-out signal. How companies may collect data when a user has opted out but navigated using a different tool has not been settled and is not part of the laws.

Privacy settings over HTTP(S) are a great starting point, and it is exciting to be moving in the right direction. However, GPC reflects only a small fraction of the consumer data that is tracked and monetized. Consider the report by the FTC in October of 2021 regarding the privacy practices of six of our major Internet Service Providers (Federal Trade Commission, 2021).

What Are Some Conflicts Between UOOM and Convenience?

Access to Information Friction Points

Currently, because UOOM is not adopted across all states, nor across platforms, there are still sites that will prevent viewing if you don’t allow their cookies. In those instances, individuals could be blocked from information.

Companies that don’t need to sell data to make money with your data won’t feel any issue with it. But smaller companies may find acquiring data for their projects more difficult. Will the price for the sale of data go up (from ISPs, or other data sources) when there is less competition? Will this make the market less competitive and harder for younger startups and innovation?

Privacy V. Convenience

As for privacy vs. convenience, there isn’t much to say yet. This is an initial step to grant some controls and reduce the transmission of some data. Data continues to be collected from non-flagged browsers and non-HTTP sources.

The convenience of the selection is a great first step, and a distinct improvement over opting out at each site. The GPC and its limitations need to be explained more clearly in the support documentation of the different browsers.

Example WaPo

The Washington Post appears to have used and accepted Universal Opt Out as a marketing tool. They are listed on the GPC site, yet the WP privacy documents make clear that they will segregate and disregard the GPC signal if your IP or any other information indicates you are in a location where GPC is not required by law.

The WP looks good on the GPC Founding Organizations page while actively striving to do the bare minimum. WP also strongly encourages the use of its apps by limiting browser functionality on mobile devices, while its Privacy Policy8 makes clear it gathers data on “…sites, mobile and tablet apps and other online products and services…”9 (Washington Post, 2024).

Using Firefox private browsing (where GPC is automatic), I navigated from the Privacy Statement to the Your Privacy Choices page, and it is evident that the GPC opt-out flag is received. That same page indicates that if you don’t reside in the states where GPC is enforced, your privacy choice may be reset. Whether they actually do this is unclear, but given their verbiage and the amount of time spent writing these documents, it is likely that the user’s location triggers an automation that allows tracking and selling outside of the areas where GPC is required by law.

Monetizing data appears to be important enough to make these marketing decisions.

Increase Awareness

Currently, it is only people who already care who search out information about privacy.

Awareness increases when legislation is pushed through links and mentions in the news media. I don’t know how to make it “sexy,” but perhaps early education and exercises could increase awareness among the young and their parents/caregivers.

Support Materials & Website Improvements

There are basic absences on all of the sites regarding privacy and GPC, such as:

  • Simplified explanations,
  • Quick-start guides,
  • Why some cookies are necessary, and
  • What a third party is, and why it matters.

Essentially, to get the interest and information out, advocates must fight the noise of endless information pollution. If the Colorado or California AG had influencer contacts, that could be a point of leverage.

However, there is nothing to leverage if simplified support materials are not available. If they leveraged an influencer now and directed people to their websites, any campaign would fail because the information provided is poorly developed for lay persons and isn’t available in multiple languages.

The closest I can get to marketing, is to suggest: Simplify, sexify, amplify.

Future of UOO & Privacy Enhancing Tech

The GPC as a UOOM tool is a fantastic start. I would hope it is only a start, and that privacy advocates and technologists will work together to explore the other areas that need addressing. In fact, starting small, like the GPC, may be exactly the right start, if advocates can amplify the discussion of its value and create stories of success. Those same stories can then be leveraged to ease progression and deployment of the next tool. I suspect it is easiest to develop the laws and tools in this process from smallest to largest: from HTTP(S) to mobile to IoT, tracking across devices, and eventually to IP. This enables the defining of terms that can then be used in the next stage, and allows the time and space to measure success. Once we have some established rules and mechanisms for privacy rights, we can explore what that means with regard to AI. We cannot establish rules around AI specific to privacy rights before having some rules about privacy rights.

However, I do hope that the process has already begun; inertia is a battle that is regularly lost.

Policy Recommendations

I think one of the key things that must be done to enhance UOOM is to incorporate the right to be forgotten into the rulemaking. While it is within the GDPR, it is completely absent from the US laws being developed and enacted.

The US laws define the legal gathering and use of data in terms of “publicly available information.”

Consider the draft of the American Privacy Rights Act of 2024,10 which states that “publicly available information” is excluded from covered data, §2(9)(B)(iii) (Senate & House of Representatives, 2024).

It defines publicly available information to mean any information that “… has been lawfully made available to the general public…” §2(32)(A).

Yet consider the Supreme Court decision in DOJ v. Reporters Comm. for Freedom of the Press, 489 U.S. 749 (1989) (U.S. Supreme Court, 1989).

Page 763 states

“…To begin with, both the common law and the literal understandings of privacy encompass the individual’s control of information concerning his or her person. In an organized society, there are few facts that are not at one time or another divulged to another. [SCOTUS Footnote 14] Thus, the extent of the protection accorded a privacy right at common law rested in part on the degree of dissemination of the allegedly private fact and the extent to which the passage of time rendered it private. [ SCOTUS Footnote 15] According to Webster’s initial definition, information may be classified as “private” if it is “intended for or restricted to the use of a particular person or group or class of persons: not freely available to the public.”11

This would mean that just because information has been public (once upon a time) does not mean it is public now. The footnotes are very interesting and tie nicely with the Contextual Integrity heuristic: selective disclosure and fixing limits upon publicity. Just because there is information about an individual attending university, it does not follow that it should be shared with a shopping service targeting that individual 30 years later.


Footnotes

  1. GPC Signal Definition, defining a signal transmitted over HTTP and through the DOM, GitHub, March 22, 2024
  2. Rule 1.01, CCR 904-3
  3. California Consumer Privacy Act of 2018, amended in 2020, § 1798.120
  4. New York State Assembly. (2024) Bill S00365: An Act to Enact the New York Privacy Act, § 1102.2
  5. California Consumer Privacy Act of 2018, amended in 2020, § 1798.140(ad)(1)
  6. Filed Judgment – Office of the Attorney General, San Francisco County Superior Court, Aug 24, 2022 – the judgment & Sephora settlement. Section 6 offers some clarity on the definition of sale. A layman’s-terms description can be found at the same site, with the Press Release, Settlement Announcement, August 24, 2022.
  7. Video list provided at the end of this document. Includes presentations by law offices discussing the Colorado and California privacy laws.
  8. Washington Post Privacy Policy
  9. Italics added for emphasis.
  10. 2024 American Privacy Rights Act (APRA)
  11. DOJ v. Reporters Comm. for Freedom of the Press, 489 U.S. 749 (1989), pp. 763–764

Videos

AG Colorado – Data Privacy and GPC Webinar, Colorado Office of the Attorney General, Phil Weiser, AG

CPRA Session 5 Universal Opt Outs and Global Privacy Control Sheri Porath Rockwell, California’s Lawyers Association, and Stacy Grey, Director of Legal Research and Analysis at Privacy Forum. Guest Speakers Dr. Rob van Eijk, EU managing Director, Future of Privacy Forum, and Tanvi Vyas, Principal Engineer at Mozilla

TEDx – Data Privacy and Consent | Fred Cate Fred Cate, VP for research at Indiana University, Distinguished Professor of Law at Indiana University Maurer School of Law, and Senior Fellow of the Center for Applied Cybersecurity Research.

Lessons Learned from California on Global Privacy Control Donna Frazier, SR VP of Privacy Initiatives at BBB National Programs and Jason Cronk, Chair and founder of the Institute of Operational Privacy Design.

Tools Approachable to Small & Mid-Sized Businesses

MS CRS: Information Systems Security Engineering

Review CISA List of Tools and Services

I looked for cybersecurity tools that would be most useful and approachable for a small or mid-sized company, specifically for protecting the internal network, intellectual property, workflows, etc. Areas to keep in mind include technical requirements, coding skill levels, surface area monitoring, information sharing, and initiation costs. Examples used in this document are from the CISA Cybersecurity Best Practices services list.

Some of the areas of importance to a small business include:

  • Is it a service or a tool?
  • Surface area monitoring, including passwords
  • Regular scanning for weaknesses
  • Whether coding is required (and which languages it is compatible with)
  • Up-to-date information sharing
  • Latest vulnerability tables: how many and which ones
  • Knowledge bases, help files, initiation videos, etc.

Services

There are many services out there that enable a company to outsource its security. This paper focuses on tools and excludes services from the review.

Tools

There appeared to be three main categories of tools:

  1. Code as Security (within a development pipeline),
  2. Customizable suites that require coding literacy, and
  3. Customizable Identity and Access Management (IAM) tools, which require a high level of technical literacy but do not require full coding literacy (at least at the start).

Code as Security

The first category, Code as Security, comprises tools that require coding skill, knowledge, and understanding. This subset of tools helps within the development pipeline but does not cover the business as a whole. For example, tools like Google OSS-Fuzz are useful to a company that has a development team, perhaps sells SaaS, and has coders within the IT or security team. OSS-Fuzz and similar Security as Code tools are handy within the development pipeline, but don’t represent a full coverage or protection suite.

Customizable Suite of Security Tools Requiring Coding

The second category, customizable suites of security tools, requires development-level personnel; the amount of command-line and other coding-language work required is high. Using Grype as an example: it would require an internal dev team to establish it, create the dashboards, and manage it. This sort of tool requires keeping a portion of developers available for monitoring, updating, and staying current not just on the dashboard and metrics tracking, but also on watching and maintaining the software itself. Many of these tools are available on GitHub, Bitbucket, or other repository systems. Constant review and tracking of source files and updates would be necessary, as well as monitoring various boards for the latest risks to verify that the chosen tool is keeping up to date. If a company is going to establish a security team for this, it then has to watch the tool’s development itself, to ensure the tool remains safe and that its use stays current with the source code. Selecting this type of tool likely requires a full-time cybersecurity officer and team.

Cloud Protection Suites & Identity Access Management

Cloud protection suites that include Identity and Access Management (IAM) tools are our third category. These are larger protection suites, often provided by the cloud provider. Microsoft Entra ID (formerly Azure Active Directory), Google Security Command Center, and AWS IAM fall within this category.

These tool sets require a good understanding of technology but do not require a team of coders and developers to manage them (at least to start). These tools can build the reports and graphics required to convey complex data upstream, and have enough technical power to support workflows, track exposure and attack surface, analyze odd behavior, and constantly monitor the known surface area within that environment.

These larger tool sets, which include Identity Access Management (IAM), are an accessible starting point for many small to mid-sized companies. The dashboards that come with these tools can help identify areas of exposure that may require add-ons. Each of the above-mentioned toolsets has a marketplace for additional functionality, including third-party vendors.

Of the three tool sets mentioned, we will more fully explore Google Security Command Center (SCC), because it has the simplest point of entry for a small to mid-sized company that may not have developed access management or cybersecurity previously. Third-party compatibility as a deciding factor will not be explored here.

Entra, AWS, and SCC tool sets have similar abilities and setup requirements at the small to intermediate business level.

Google Security Command Center (SCC)

Google Security Command Center is a cloud-based security platform that monitors the attack surface and alerts the operator to threats, weaknesses, misconfigurations, and more. It is set up with the ability to prioritize, or “threat level identify,” the threats. SCC allows the operator to select and view what the threat is, why it is a threat, and recommended mitigations and/or solutions.

Setup

Google Security Command Center is the most approachable of the three services mentioned above, and has some of the best introductory materials to help small to medium companies accomplish the initial lift required to take that first step into cybersecurity.

GCP -> IAM Permissions

The initial setup of Google Security Command Center requires setting up the Google IAM, from within the Google Cloud Platform -> IAM page.

Setup, even for the IAM alone, requires five roles within the Google Cloud Platform -> IAM permissions page[i]. The operator setting up the SCC will need to set up and establish the organization, and select the services.

The “Standard” (free) tier’s built-in services include Security Health Analytics, which can identify misconfigured virtual machines, containers, networks, storage, and identity and access management policies. For the Standard tier, the scanning depth covers “high level” misconfigurations, and coverage can be increased by purchasing a higher-level service. For example, if the company requires API key scanning or rotation, or coverage of other configuration issues, it would be looking at moving up from the Standard to a Premium tier. Understanding and researching the differences between the tiers falls on the team member(s) setting up the security. However, even starting at the free “Standard” tier provides more security than choosing not to do it at all.

Initial work starts with Identity Access Management (IAM). The operator setting up the SCC will have to communicate across multiple teams and stakeholders, developing roles, permissions, and standards. This is not unique to the SCC; it would be required of every IAM tool or platform. There are times when cybersecurity and resiliency have dependencies, where one process cannot be implemented until another is accomplished[ii]. Understanding permissions, roles, groups, and access is a requirement that must be met to achieve any level of cybersecurity coverage.

Secondary setup is to define areas of interest. Correctly establishing the services, providers, databases, and exposure points is necessary for the tool to be able to monitor and report on attack surface areas and traffic flow. Again, this is not a unique cost, but it does represent required resources and should be considered.

Once fully set up, the SCC continuously monitors the attack surface, provides reports, and suggests paths of control, response, and remediation if needed. The initial scan will likely take longer than usual (hours), but after that the Standard plan runs a scan twice a day.

Google Cloud Security Command Center – SWOT

Some areas of opportunity may also be considered weaknesses. For example: not having a report (weakness), but having third-party integrations that build reports (opportunity), and then what is the security of that third party and who is responsible (threat). With that in mind, let’s dig a little deeper.

One of the greatest assets of a system such as this is that, as part of a behemoth tech company, these tools have access to some of the largest resources for monitoring, tool development, remediation of their own defects, and the discovery and management of the latest threats. This is an asset for small to medium companies because there is no way a single individual or team can keep up with the constantly changing threat landscape. Keeping that task on the tool set is a huge asset to a small company.

There are challenges; no product is perfect out of the box. Each of the listed tool sets can integrate with many third parties for more targeted coverage and reporting. Google Security Command Center has the Google Cloud Marketplace, where there are thousands of compatible add-ons, services, and tools. If the operator doesn’t find an exact match, they are likely to find something that comes close. Some of these integrations will take more work if they are native to a different platform, and that should be considered when deciding on a cloud protection system.

Of course there are differences between the AWS, Entra, and Google options. A simple example is their firewalls: at the time of writing, AWS offers AWS VPN (site-to-site and point-to-site) while Google offers Cloud VPN (site-to-site). Google’s cloud security model is not as mature as AWS’s, but AWS has been called overwhelmingly complex for small businesses or teams without extensive cloud experience. Google may not have the same level of threat detection as AWS, but it can be easier to launch and is considered less complex.

Growth could require re-tooling (congratulations)

If a company grows from mid-sized to large, the team managing the SCC would have to expand. The ability to tailor the reports could become insufficient as reporting and compliance demands grow. Growth may force a revisiting of whether the tools are sufficient, or whether in-house teams and developers using different tools are the path forward. The ability and flexibility available for larger companies’ cybersecurity will differ between the three platforms listed here. At that point, I would suggest a celebratory dinner before revisiting which tools to research, acquire, and manage.

[i] Getting Started with SCC Playlist

[ii] NIST Developing Cyber-Resilient Systems



Other References & Related Articles

Free Cybersecurity Services and Tools – CISA

Free Non-CISA Cybersecurity Services – CISA

CISA’s Public Safety Communications and Cyber Resiliency Toolkit – CISA

Developing Cyber-Resilient Systems: A systems Security Engineering Approach – NIST December 2021

AWS vs Azure vs Google Cloud Security Comparison – BisBot Business Admin Tools – April 2024

Google Identity Services vs. Active Directory – Jumpcloud (addon service to GIS) – June 2023

Microsoft Entra ID

Overview of Attack Surface Management – Microsoft Security Exposure Management – March 2024

What is Security Command Center – Google – March 2024

Google IAM

GCP Security Command Center – Pros & Cons – JIT – Feb 2024

Google Cloud Security Command Center – Google

Getting Started with Security Command Center – Google – March 2023

Google Marketplace: Command Center Services – Google 

Getting Started with Security Command Center Playlist – Google – YouTube

AWS vs. Azure vs. Cloud: Security comparison – Sysdig- Feb 2023

NIST Developing Cyber-Resilient Systems – December 2021

Blog Sample – Serverless

A sample of technical writing via Blog.

What is Serverless – in Layman’s Terms

“Serverless application” is an interesting name that really has less to do with the application and more to do with the technology hosting and storing the application. Serverless applications do make use of servers; it’s just that they use them differently than in the past.

If you consider an application to be a product, activity, or service, you can in turn also think of the server as the house in which that product, activity, or service is homed. In traditional server systems, that house is static, probably like your house, or mine.

In the current “Serverless” system, you can have that same product, activity, and service, but the house can change as the needs grow or shrink- like adding a room when you need more space, or renting that room out when space is not being used.

Serverless technology has benefits for both the server hub and the producer of the application. Applications using serverless architecture pay only for services when actively using those services, as in executing a process.

Let’s Take a More Technical Look

The most well-known and understood advantage and selling point of serverless computing is that it economizes the use of cloud resources. Serverless providers only charge for the time that code is executing, maximizing the function and profitability for both the provider and developer. Interestingly, serverless has also increased stability, by spinning up services/instances as needed and having redundancy built into the system.
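
To make the billing model concrete, here is a small back-of-the-envelope sketch. The unit prices and workload numbers are placeholders made up for illustration, not any provider’s actual rate card, but the shape of the calculation (a charge per request plus a charge per unit of memory-time actually consumed) is the common FaaS pattern.

```typescript
// Hypothetical unit prices -- placeholders for illustration only.
const PRICE_PER_GB_SECOND = 0.0000167;   // $ per GB-second of execution time
const PRICE_PER_MILLION_REQUESTS = 0.2;  // $ per one million invocations

function estimatedMonthlyCost(invocations: number, avgDurationMs: number, memoryGb: number): number {
  const gbSeconds = invocations * (avgDurationMs / 1000) * memoryGb; // only time actually spent executing
  const computeCost = gbSeconds * PRICE_PER_GB_SECOND;
  const requestCost = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  return computeCost + requestCost;
}

// 2 million requests a month, 120 ms average duration, 512 MB of memory:
console.log(estimatedMonthlyCost(2_000_000, 120, 0.5).toFixed(2)); // idle time costs nothing
```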

The number of applications and services that have moved to serverless is a testament to its economical use and function.

Additional interesting strengths are even greater cost reductions when multiple applications share common components, and when defining workflows.

Current thoughts on defining and describing serverless include calling it Event Driven, or a Function as a Service (FaaS) protocol. Serverless architecture is best utilized to process events, or discrete chunks of data generated as a time series.

How it Works

Data arrives at the application (via a human or an endpoint), and the architecture incorporates an API gateway that accepts the data and determines which serverless component receives it.

Regardless of which host is used for the application’s serverless architecture, the runtime environment passes the data to the component, where it is processed and returned to the gateway for further processing by other runtime functions, or returned to the user completed.

  1. Application Development
    • Developers write code, and deploy to the cloud provider.
  2. Cloud Host
    • Application Code is hosted by the cloud provider, and homed in a fleet of servers.
  3. Application Use
    • Requests are made to execute the Application code.
    • The cloud provider creates a new container to run the code in.
    • The container is deleted when the execution has been completed
      • Usually after a time period of inactivity
[Figure: How Serverless Works – simple flow diagram]
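
The lifecycle above can be boiled down to a single small function. The sketch below is written in the AWS Lambda style behind an API gateway, but only as an illustration; the event shape and handler name are assumptions, and other providers use slightly different signatures.

```typescript
// Illustrative Function-as-a-Service handler (AWS Lambda style, simplified).
interface GatewayEvent {
  body: string | null; // the API gateway hands the raw request body to the function
}

interface GatewayResponse {
  statusCode: number;
  body: string;
}

export const handler = async (event: GatewayEvent): Promise<GatewayResponse> => {
  // 1. The gateway routed this discrete chunk of data to the function.
  const payload = event.body ? JSON.parse(event.body) : {};

  // 2. Stateless work happens here; nothing is kept on the instance between calls.
  const result = {
    receivedKeys: Object.keys(payload),
    processedAt: new Date().toISOString(),
  };

  // 3. Return to the gateway; the container may be frozen or deleted afterwards.
  return { statusCode: 200, body: JSON.stringify(result) };
};
```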

Considerations

It’s important to keep in mind that serverless systems are not intended to become a complete application. Successful use of serverless requires a separation of data input from computing actions. This separation will affect all stages of development and testing.

Timed out

One challenge is that serverless isn’t as successful with longer computation times. For example, if processing takes too long, the serverless function can be stopped and require a cold start; it simply may not work for that longer time period. There are some workarounds for this, but they can be problematic. One fix is to break the work into lots of little computations that, taken apart, are fast enough to work well in a serverless environment; but the amount of coding time and rebuilding by developers can be prohibitive.

Serverless is Stateless (lack of persistence & its impact)

Another consideration is Serverless functions are stateless; individual functions accept input, they process that input, and they output a result. By design, there is no local or persistent storage.

The lack of persistence has impacts in both development and testing. For example, developers of data-processing applications often want to temporarily persist data that may be needed a few steps along, and testing can depend on maintaining state from one step to the next in a workflow, where the results of previous operations serve as input to subsequent steps.
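
One common way to live with that constraint is to carry the workflow state in the payload itself, so each stateless step receives everything it needs and hands an enriched copy to the next. The sketch below is a generic illustration (the function names and the simple driver are made up); in practice the threading is often done by an orchestration service.

```typescript
// Illustration of explicit state passing between stateless functions.
interface WorkflowState {
  orderId: string;
  validated?: boolean;
  total?: number;
}

async function validateOrder(state: WorkflowState): Promise<WorkflowState> {
  // No local storage survives this call, so the result travels in the returned copy.
  return { ...state, validated: state.orderId.length > 0 };
}

async function priceOrder(state: WorkflowState): Promise<WorkflowState> {
  if (!state.validated) throw new Error("validation result was not carried forward");
  return { ...state, total: 42.0 }; // placeholder pricing
}

// A driver (or an orchestration service) threads one step's output into the next step's input.
async function runWorkflow(): Promise<WorkflowState> {
  let state: WorkflowState = { orderId: "A-100" };
  state = await validateOrder(state);
  state = await priceOrder(state);
  return state;
}

runWorkflow().then((finalState) => console.log(finalState));
```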

It becomes challenging to test more than one function at a time, and replicating a serverless system to test a process that may use multiple functions is not always possible.

The most common approach is to break the development and tests into even smaller processes. This requires a heavy lift at the beginning to transform the workflow, as well as breaking the understanding of development and testing coverage down into micro units rather than full processes.

Some testers and developers have resorted to ad hoc methods of persisting data, such as creating and writing files to a cloud database. This can make an application more difficult to maintain, and could have security impacts depending on the platform/product/material being stored.

Major providers now have documentation, best-practice methods, and workarounds for providing persistence. AWS has introduced Step Functions, Microsoft Azure has Durable Functions and Logic Apps, and there are open-source add-on solutions as well.

Wrap Up

Serverless, or Function as a Service, is one of the greatest transitions in recent computational history. As the cost of moving data becomes more affordable, the relative cost shifts toward storage and computation. Serverless architecture is a leap forward here, moving our storage and computation from a static system to a kinetic one, allowing peaks and valleys to be reflected in both costs and savings for providers and consumers. Finding a way to distribute the costs of both storage and functions based on live, active use is a huge leap forward, and we are still at the beginning stages of it.

What’s coming to serverless? Things to keep an eye on include security, persistent storage, and data integrity. The global serverless computing market is expected to grow at a compound annual growth rate of more than 22% between 2024 and 2031.1


Other related and interesting content can be found at the following:


Footnote

  1. https://www.skyquestt.com/report/serverless-architecture-market

Interesting Cyber Threat Analysis Exercise

Office

As you know, I’m enrolled for a MS in Cybersecurity Risk & Strategy; the people who teach, and the people who attend are all interesting, experts in their fields, and sources of knowledge to explore. It’s pretty amazing.

For homework in a Governance and Regulatory class, we had to read a bit where lots of words were used to discuss how to discuss risk, and how to quantify it. Pages to tell you that first you must name and define what you are looking for, then explain and explore how to quantify risk, then create a methodology of ranking, followed by exploring the issue over time (velocity measurement, distance measurement, persistence measurement).

Super Simple Example:

Externally Accessible

  • Computers that have out of date patches.
    • How out of date?
    • What about the ones that fall out of date today, that were not on the last report? (If you pull this report monthly, you want to add an aging column)
Aging Machines Missing Patches

  0-30 days   31-60 days   61-90 days   >90 days   Total
  65          42           35           47         189
  34%         22%          19%          25%        100%
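
If you pull the report monthly, the aging column is easy to script. The sketch below is a hedged TypeScript example with made-up sample data; the buckets mirror the table above.

```typescript
// Bucket each machine by how many days its oldest missing patch has been outstanding.
const BUCKETS = ["0-30 days", "31-60 days", "61-90 days", ">90 days"] as const;
type Bucket = (typeof BUCKETS)[number];

function bucketFor(daysOutOfDate: number): Bucket {
  if (daysOutOfDate <= 30) return "0-30 days";
  if (daysOutOfDate <= 60) return "31-60 days";
  if (daysOutOfDate <= 90) return "61-90 days";
  return ">90 days";
}

function agingReport(daysPerMachine: number[]): Record<Bucket, { count: number; pct: string }> {
  const report = Object.fromEntries(
    BUCKETS.map((b) => [b, { count: 0, pct: "0%" }])
  ) as Record<Bucket, { count: number; pct: string }>;
  for (const days of daysPerMachine) report[bucketFor(days)].count++;
  for (const b of BUCKETS) {
    report[b].pct = `${Math.round((100 * report[b].count) / daysPerMachine.length)}%`;
  }
  return report;
}

// Made-up sample: five machines with missing patches of various ages.
console.log(agingReport([5, 45, 95, 12, 70]));
```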

Next Cool Thing

We had a lecture later – and covered how to prioritize.
But I have to get to my Cyber Crime class now – so we’ll explore the matrix map next time.