This is the start of the OpenVet dev log. I plan to write about what I am working on, what I managed to break, and, from time to time, news related to supply-chain security.
What OpenVet is #
OpenVet is a project that gives you supply-chain security by requiring that your dependencies are audited and that they match requirements you set out. It has two parts:
- A public registry that hosts signed, machine-readable audits of software dependencies. It lets you publish audits to your own cryptographically-signed, append-only log, and it lets others see and consume the audits you produce. It is hosted at https://openvet.org.
- A command-line tool that ingests those audit logs and enforces that your dependencies are audited and meet the requirements you set. It doesn't trust the registry; it trusts the key of whoever published the log. The tooling is also not tied to the registry: you could just as well host your own logs on any static site host.
Both of these components are open-source. The command-line tool is MIT and Apache-2.0 licensed, giving you freedom to adapt and integrate it. The registry is AGPL-3.0 licensed, which is more restrictive: modifications must be released under the same license.
Why #
As software engineers, we don’t build everything from scratch. We build on top of what others before us have built, which means depending on software packages hosted on public registries — and that is a net positive.
When the internet was initially built, there was a culture of trust: no encryption, little authentication. That worked because the network had few users and few valuable targets. We have a similar culture of implicit trust in anything published on public registries. But attackers have figured out that developers' machines are high-value targets, containing API keys, cryptocurrency wallets, and access to company infrastructure. Times are changing.
Supply-chain attacks have moved from hypothetical to routine, accelerated in recent years by the availability of LLMs. Some attacks, like the Shai-Hulud worm or the xz-utils backdoor, have become well known, but a major supply-chain security company has logged over 1.2 million malicious packages in total, with 454,600 added in 2025 alone.
The current defenses either don't work or don't scale. Build and release provenance doesn't help when attackers hijack the CI workflows that release packages: it only proves where a release happened, not what's in it. CVE scanners are reactive: by the time the CVE lands, your API keys and wallets have been exfiltrated, and attackers have already had access to your infrastructure. Dependency freezes just shift the pain to whoever updates first, and if everyone implemented them, they would only delay when attacks are discovered.
Malicious packages are not the only thing OpenVet addresses: it is also about correctness. We are producing code at an ever-growing rate, with ever-growing complexity. In my opinion, the only way to do this sustainably is to compose complex software out of reviewed, well-tested primitives hosted on registries.
One personal data point: my average pet project pulls in 400 dependencies summing to 3.5 million lines of code, as measured with `cargo vendor` and `tokei`. You may want to check your own projects and ask: how many of those lines have I actually reviewed? How do I know my dependencies are correct and safe to use?
The way out is to drop implicit trust and require external vetting (auditing) of software dependencies. And when I say auditing, I don’t mean the multi-month project that is auditing a cryptographic library. For most dependencies, there are some simple questions you need to answer, like:
- Does this have any build-time or install-time actions, and do these actions look safe?
- Does this code make network requests, and if so, why and what is sent?
- Does it read or write to the filesystem, and if so, what is read or written?
- Does it read any environment variables, and if so, what does it do with them?
- Does it have extensive test coverage, including randomized tests (fuzz tests, proptests)?
Any software engineer with some experience can answer these with a checklist. Thorough audits may be required for some dependencies, but not for all.
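To make the checklist idea concrete, here is a small Python sketch that encodes the questions above as yes/no claims a reviewer fills in. The claim names and the summary function are invented for illustration; they are not OpenVet's actual schema.

```python
# Hypothetical checklist-style audit: each question from the post becomes
# a boolean claim. Names are illustrative, not OpenVet's real format.
CHECKLIST = [
    "build_actions_safe",       # build/install-time actions reviewed
    "network_use_reviewed",     # network requests explained
    "filesystem_use_reviewed",  # file reads/writes explained
    "env_var_use_reviewed",     # environment variable usage explained
    "has_randomized_tests",     # fuzz tests / proptests present
]

def summarize(answers: dict) -> str:
    """Flag any checklist question answered 'no' (False)."""
    missing = [q for q in CHECKLIST if q not in answers]
    if missing:
        raise ValueError(f"unanswered questions: {missing}")
    failed = [q for q in CHECKLIST if not answers[q]]
    return "clean" if not failed else "flagged: " + ", ".join(failed)

print(summarize({q: True for q in CHECKLIST}))  # clean
```

A reviewer answering "no" to any question gets a flagged summary instead, which is the signal to look closer before depending on the package.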
The actually hard problems are elsewhere:
- The time it takes to perform audits of all of your dependencies
- The user experience for creating, publishing and discovering audits
- The distribution of audits
- The tooling for validating that your dependencies are audited
OpenVet is an approach to solving these four problems in a way that might scale beyond a niche.
How it works, briefly #
OpenVet defines a data format for audits. Audits contain both machine-readable claims (similar to what cargo-vet does; I will explain their mental model in a future post) and human-readable findings, source annotations, and a report. Audits are signed by their author; the signature is required and verified.
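To illustrate the signed-audit idea, here is a minimal Python sketch. Every field name is invented, and stdlib HMAC stands in for the public-key signature a real audit would carry; this is not OpenVet's actual format or API.

```python
import hashlib
import hmac
import json

# Illustrative only: HMAC over a canonical JSON serialization stands in
# for a real public-key signature, and the field names are made up.
def sign_audit(audit: dict, key: bytes) -> dict:
    payload = json.dumps(audit, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"audit": audit, "signature": sig}

def verify_audit(signed: dict, key: bytes) -> bool:
    payload = json.dumps(signed["audit"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed["signature"], expected)

audit = {
    "package": "left-pad",
    "version": "1.3.0",
    "claims": {"no_network": True, "build_actions_safe": True},
    "report": "Straightforward string utility; no I/O observed.",
}
signed = sign_audit(audit, b"author-key")
assert verify_audit(signed, b"author-key")       # intact audit verifies
assert not verify_audit(signed, b"wrong-key")    # wrong key is rejected
```

The point of the sketch is the shape, not the crypto: machine-readable claims plus human-readable prose, wrapped in a signature that the consumer checks before trusting anything.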
OpenVet defines a way to distribute them. Audits are published in cryptographically-signed append-only logs, so you do not need to trust the platform, only the person holding the keys. OpenVet has a registry for audits; think of it as a GitHub that holds audits instead of code.
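The append-only property can be sketched as a hash chain: each entry commits to the hash of its predecessor, so tampering with any past entry invalidates everything after it. This is a toy illustration of the idea, not OpenVet's actual log format.

```python
import hashlib
import json

# Toy append-only log as a hash chain. Each record stores the previous
# record's hash, so history cannot be rewritten without detection.
GENESIS = "0" * 64

def append(log: list, entry: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    payload = json.dumps({"prev": prev, "entry": entry}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    log.append({"prev": prev, "entry": entry, "hash": digest})

def verify(log: list) -> bool:
    prev = GENESIS
    for rec in log:
        payload = json.dumps({"prev": rec["prev"], "entry": rec["entry"]},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

log = []
append(log, {"audit": "serde 1.0.200"})
append(log, {"audit": "tokio 1.38.0"})
assert verify(log)                        # untouched chain verifies
log[0]["entry"]["audit"] = "tampered"
assert not verify(log)                    # rewriting history is detected
```

In a real deployment each record would additionally be signed by the publisher's key, which is what lets consumers ignore where the log is hosted.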
It does not scale for us to audit every single dependency ourselves. The core idea of OpenVet is that auditing is a collaborative process, just like open-source development. You publish your audits, and others can choose to trust you (and with that, your audit log). You in turn can trust other entities: a large company that has audited many dependencies, or a friend. With that, you only need to audit the dependencies that nobody you trust has already audited, which is far more manageable. It makes adopting OpenVet much cheaper, and thereby scalable.
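The effect of trusting others boils down to a set difference: the audits you still need to write are your dependencies minus everything covered by the logs you trust. The names in this Python sketch are made up for illustration.

```python
# Hypothetical sketch: your remaining audit workload shrinks with every
# trusted log. Package names and log structure are illustrative.
def remaining_work(deps: set, trusted_logs: dict) -> set:
    """Return dependencies not covered by any trusted audit log."""
    covered = set().union(*trusted_logs.values())
    return deps - covered

deps = {"serde", "tokio", "leftpad", "obscure-crate"}
trusted = {
    "big-co":  {"serde", "tokio"},  # a company's audit log
    "friend":  {"leftpad"},         # a friend's audit log
}
print(remaining_work(deps, trusted))  # {'obscure-crate'}
```

With no trusted logs the workload is all of your dependencies; every log you add can only shrink it, which is what makes adoption cheap.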
When you do need to audit software, OpenVet tries to make it easy. It has a command-line-driven workflow that lets you create an audit workspace, and I try to build tooling that makes the process as simple as possible. The tooling also lets you automate audits of low-risk dependencies by having an LLM do them. I am not saying that LLM-generated audits are the perfect solution, but in my opinion, LLM-audited dependencies are better than unaudited dependencies, and they free up your time for higher-risk dependencies.
Finally, OpenVet attempts to ship tooling that makes enforcing dependency audits as painless as possible. The command-line tool can scan all of your lockfiles and search every audit log you trust for relevant audits, in parallel and with local caching. The audit-log data structure is designed for cacheability and efficient lookup.
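At its core, the enforcement pass reduces to a membership check: every (name, version) pair in your lockfiles must be matched by an audit from a trusted log, and any unmatched pair fails the build. This Python sketch uses hypothetical names; it is not the actual CLI.

```python
# Hypothetical enforcement pass over lockfile entries. In the real tool
# the audits would come from trusted, cached audit logs; here they are
# just a set of (name, version) pairs for illustration.
def enforce(lockfile: list, audits: set) -> list:
    """Return the (name, version) pairs with no matching audit."""
    return [(name, ver) for (name, ver) in lockfile if (name, ver) not in audits]

lockfile = [("serde", "1.0.200"), ("tokio", "1.38.0")]
audits = {("serde", "1.0.200")}

missing = enforce(lockfile, audits)
print(missing)  # [('tokio', '1.38.0')]
```

In CI, a non-empty result would fail the build, which is the whole enforcement story: no audit, no merge.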
What to expect here #
Don’t expect a refined product here, or a sales team. It’s just me, and this is not a commercial endeavour. I am building this because I want it to exist, and because it is fun to do so. I am implementing features, and breaking things along the way, as I figure out what works and what does not.
If you think this idea has merit, feel free to come along for the ride. I try to post updates on this blog as I work on OpenVet and implement new features. I would ask you to like, comment, and subscribe, but I don't have social media buttons here, a mailing list, or a comment feature.
If you want to get in touch or help with this, you can probably find a way to reach me via my GitHub profile.
In the meantime, if you want the source, it’s on GitLab.