Policy makers in both the US and the EU are currently debating several efforts to improve the security of digital infrastructure, including broad new regulations such as the Securing Open Source Software Act (SOSSA) in the US and the Cyber Resilience Act (CRA) in the EU. In light of high-profile recent events, we are all aware that open source software is a critical part of software infrastructure as a whole. Security issues related to popular open source projects are in the news and in policy conversations. As open source professionals, we hope that policy makers carefully consider the distinction between open source projects and products.
Policy makers who are most familiar with commercial software and procurement-based policies will find that open source software presents different motivators and mechanisms. Extending commercial software policies to open source will miss the very features that make open source unique. Likewise, regulations written for physical goods’ supply chains cannot be cleanly mapped onto a digital ecosystem whose innovation is founded, at least in the US, on free speech and individual creative expression.
Inverting Linus’ Law
Let’s start with the combination of principles and pragmatics that underlies the ethos of how open source projects approach security flaws. As penned by Eric Raymond, this is Linus’ Law:
“Given enough eyeballs, all bugs are shallow.”
In principle, transparency builds trust. Pragmatically, it is easier to fix things when you have more fixers available to help. Since there are more developers outside of any given company than inside it, open source software ought to be more secure than closed source software. While several famous projects (such as the Linux Kernel, Kubernetes, and WordPress) are the result of the collaboration of thousands of developers, it may surprise you to learn that the average number of maintainers per project is one. Unfortunately, most projects don’t benefit from the attention of hundreds of developers fixing bugs. In fact, over the past few years, we have seen significant attention paid to finding more flaws, even as the pool of fixers remains small. Linus’ Law could be restated for this decade in its inverse:
“Given enough eyeballs, all software contains bugs.”
Over time, all software will need to be updated due to flaws that have not yet been discovered. This raises two important questions for policy makers: (1) who should be on the hook to perform these future updates, and (2) what is the minimum set of safe software development practices needed to minimize the risk of catastrophic flaws in the future?
Projects and Products
In addressing the above questions, policy makers will encounter open source in two forms: projects and products. This distinction, less relevant to a consumer of digital goods, is germane to policy discussions. Projects represent the open and evolving development of capabilities driven by a community. Products, on the other hand, represent units of value sold by vendors. Over the last twenty-some years, many open source projects have become, or are integrated within, popular commercial products. In fact, a debate continues within the tech sector over the use of the terms open core and commercial open source to describe the business practice of withholding key features from a project’s community specifically to bolster the value of related products.
While communities have an interest in improving their projects, and vendors have an interest in improving their products, the mechanisms and motivations differ. Mickos’ Law highlights this difference:
“Project communities have time and no money, while customers have money and no time.”
Adding to the confusion, these two groups often include the same people. When open source projects become essential to vendor products, vendors typically hire influential maintainers from the project community. When the maintainer is allowed to continue their open source work, this supports the vendor’s product by ensuring the health of the project, and provides the vendor with influence over the project’s direction. (Notably, sometimes vendors hire open source maintainers and then direct them to cease, or reduce, their open source work in favor of improving the product!) These vendors, in some cases the sole maintainers of important open source projects, also provide critical services that support the open source community. This creates a delicate balance of dependencies.
Once open source projects become commercialized, any policy that addresses minimum requirements, liability, or verifiable assurances of the product must be sensitive to the project’s origin as well. Successful open source communities will seek to ensure their project is secure and operable, though often not due to direct financial incentives, whereas vendors will seek a similar outcome for their products. In the vendors’ case, however, will they allocate their developer resources to improve the open source project? Or will they hold those resources back to focus on their commercial product, leaving the open source project unaddressed?
How About An Example?
Imagine a pretend open source project we’ll call Graphtheus. It started as an open source project, became popular, and now has a vendor that provides various “enterprise” features for a cost. We’ll call the vendor NeoChrono. NeoChrono wants people to love and trust Graphtheus, because many Graphtheus users are potential enterprise customers of NeoChrono. They support the open source project since it is the heart of their product offering and makes up about 90% of the code in their commercial product. However, NeoChrono also needs to maintain a feature gap between the open source project and their product – this gap is protected as their unique business value. This creates some disincentives to contribute to the open source project.
- Given the need to fix a security flaw, the NeoChrono engineering team is incentivized to have an internal conversation: should we make the fix in the open source project, or should we hold back just a bit so that the enterprise version has better security?
- Given the need to sell more licenses for the enterprise edition of NeoChrono, the product design team is similarly incentivized to have an internal conversation: should we contribute this usability-enhancing feature to the open source project, or develop it internally and integrate it only into our product?
Each situation will differ. While some developers may altruistically fix security issues upstream, and some security issues may only be repairable in the upstream codebase, other companies will see an opportunity to sell “hardened” versions of an open source project and hold back the fix. Sure, the open source community can implement a security feature without the vendor’s support. And even if a single company controls the open source code base, individuals or other vendors could fork the project, creating a competing project with a separate code base. But doing either requires an investment of time and resources.
Forking also carries social risk – the risk that others do not follow, or, worse, publicly shame the attempt as needlessly detracting from a common cause. And so, some software vendors succeed precisely because they are seen as “good maintainers” of the open source project – that is, while supporting the development of the open source project, they are successful in maintaining a slight commercial advantage to drive software sales.
In summary, commercial software vendors, even those lauded for being “good at open source”, are incentivized to keep open source slightly less secure, less operable, and less feature-rich at all times in order to maintain a competitive advantage within their products.
Conflicting Interests
We’re not suggesting that vendors are bad actors, obstructing security fixes to open source projects or withholding usability and security enhancements – but certainly some do, some of the time, because there is a strong motivation at play that biases toward these behaviors. We are suggesting, however, that if policy makers only consider the commercial vendors of products based on open source, they will miss an important part of the ecosystem. They may inadvertently give vendors more encouragement to withhold fixes to open source projects. They will also fail to address the millions of open source projects that are not supported by any vendor.
Much effort in the open source security space is already spent finding flaws and creating mechanisms to safely communicate their solutions. As external expectations (perhaps in the form of new regulations) are placed on open source communities to account for and improve their security posture, we propose the following considerations:
- Open source software is rooted in First Amendment case law in the United States (Bernstein v. DOJ, 1999), which prioritized an individual’s right to free expression regardless of the functional nature (i.e., code) of the medium they choose. In Europe, open source is rooted in the freedom of expression and its corollary, the freedom to write software. Open source software licenses and contribution agreements universally disclaim liability to protect this right, and many communities include explicit protections for anonymous or pseudonymous contributions (essential to enabling free speech). Any policy that imposes liability on open source software contributors, communities, or non-profits will stifle open source contributions.
- Companies providing software for critical systems can, and should, demonstrate adherence to engineering best practices that minimize risks and flaws. Such measurements could, in principle, be applied to open source software communities as well; however, this would create financial burdens that most open source projects cannot bear.
- The government has mechanisms to invest in and fund common infrastructure, and could apply these so that critical open source projects may reach the same safety standards applied to commercial software.
Closing Thoughts: Regulate With Care
Today’s solutions become tomorrow’s problems. Both of us have long valued open source collaboration on common software platforms as an effective way for humanity to advance scientifically and technologically, and for the economic opportunities that open source software creates. However, the critical role it now plays in society cannot be stewarded solely by companies, for it was not solely for-profit interests that got us here. Security issues in open source, if left solely to companies to solve, will frequently be solved first in commercial products.
In short, it is our view that neither regulations that rely on controls over corporate procurement nor corporate funding of open source projects (whether direct or via trade consortia) will be sufficient to achieve the result that we believe both US and EU regulators desire, namely, that open source projects, across the board, become safer to consume directly from their upstream sources.
By-Lines
Aeva Black is a queer and non-binary open source hacker. They work in the Azure Office of the CTO, and currently serve open source communities through roles as Secretary of the Board of the Open Source Initiative and as Vice Chair of the Technical Advisory Committee for the OpenSSF. In this post, Aeva represents only their own views.
Gil Yehuda is an open source professional. He is currently the Head of Open Source at U.S. Bank. Previously, he led the open source program office at Verizon Media and Yahoo. He represents his own views in this post.
This post was also published on LinkedIn: https://www.linkedin.com/pulse/open-source-security-policy-conundrum-gil-yehuda/