ZODA: An Explainer

Alex Evans, Nicolas Mohnblatt, Guillermo Angeris, Sanaz Taheri, Nashqueue
12.03.24

Data availability sampling (DAS) is critical to scaling blockchains while maintaining decentralization [ASB18,HASW23]. In our previous post, we informally introduced DAS and showed how to leverage interaction to reduce overhead for a promising construction based on the FRI protocol. Our newest work, ZODA, short for ‘Zero-Overhead Data Availability,’ takes these ideas a step further. In particular, we show that, with a small change, an extended data square construction, such as that used in Celestia, can be turned into a (validity) proof of its own correctness. Importantly, the change incurs negligible additional work for encoders and no incremental communication costs for the network. This post is an informal introduction to the ZODA protocol.

Basics of encoding. To explain ZODA, we briefly sketch how DAS protocols work today. In production DAS protocols, including those contemplated by Ethereum and implemented by Celestia, block data is organized into a matrix $\tilde X$. This matrix is then encoded into a larger matrix $Z$, called the extended data square. For example, we may first encode $\tilde X$ by encoding its columns using a Reed–Solomon code matrix $G$. We can then encode the rows of the resulting matrix using another Reed–Solomon matrix $G'$ to obtain the matrix $Z = G \tilde X G'^T$. (This encoding is called the tensor code resulting from encoding columns using $G$ and rows using $G'$.) The procedure is depicted below. In this example, we use a rate of 1/2 for each encoding, meaning the data square $Z$ is 4 times larger than $\tilde X$.
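To make the shapes concrete, here is a minimal sketch of the extended data square in Python with numpy. All parameters are illustrative assumptions: a toy prime field and a Vandermonde-style Reed–Solomon generator evaluated at consecutive points, not the parameters of any production system.

```python
import numpy as np

P = 257  # toy prime field; production systems use much larger fields

def rs_generator(k, n, p=P):
    """Reed-Solomon generator matrix: row i evaluates a degree-<k
    polynomial (given by its k coefficients) at the point i, mod p."""
    return np.vander(np.arange(n), k, increasing=True) % p

k = 4                        # original data square X-tilde is k x k
G = rs_generator(k, 2 * k)   # column code, rate 1/2
Gp = rs_generator(k, 2 * k)  # row code G', rate 1/2

X = np.random.default_rng(1).integers(0, P, size=(k, k))  # X-tilde
Z = (G @ X @ Gp.T) % P       # extended data square
assert Z.shape == (2 * k, 2 * k)  # 4x the entries of X-tilde
```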

Sampling. Light nodes then sample entries from this extended data square, $Z$. The example below illustrates this procedure for two light nodes sampling independently. The first node's samples are shown in blue and the second's in red; purple entries were sampled by both nodes. In this example, each node samples 30 entries, and the two nodes land on the same entry 5 times. If light nodes collectively sample enough distinct entries from the rows or from the columns (for example, at least half of each row or column will suffice), they can get together and erasure-decode $\tilde X$. In the example below, the majority of each row and each column was sampled. If the two nodes pool their samples, they can reconstruct the original data $\tilde X$ either from the rows or from the columns, using a standard erasure-decoding algorithm for $G$ or $G'$.
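As a rough illustration of pooling (the sample counts mirror the example above, and uniform independent sampling is a simplification), the following sketch continues the code above and counts which rows and columns of $Z$ are erasure-decodable from the pooled samples:

```python
# Two light nodes each sample 30 entries of Z uniformly at random.
rng = np.random.default_rng(2)
n = 2 * k
node_a = {tuple(rng.integers(0, n, size=2)) for _ in range(30)}
node_b = {tuple(rng.integers(0, n, size=2)) for _ in range(30)}
pooled = node_a | node_b  # "purple" entries appear in both sets

# A rate-1/2 Reed-Solomon codeword of length n can be erasure-decoded
# from any k = n/2 of its symbols.
decodable_rows = [i for i in range(n)
                  if sum((i, j) in pooled for j in range(n)) >= k]
decodable_cols = [j for j in range(n)
                  if sum((i, j) in pooled for i in range(n)) >= k]
print(f"decodable rows: {decodable_rows}")
print(f"decodable columns: {decodable_cols}")
```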

Encoding correctness. However, if $Z$ is not correctly encoded, nodes might not be able to decode $\tilde X$ (or anything at all). For DAS to work, we need some way of ensuring that $Z$ was constructed correctly. Existing protocols deal with this problem in one of two ways.

Fraud-proof construction. The first idea, initially proposed in [ASB18] and now used in the Celestia protocol, is for full nodes to download all of $\tilde X$ and compute $Z$ themselves, as in the first figure. If the result doesn't match $Z$ at any point, they can alert light nodes with a (compact) fraud proof: either the row or the column containing the error can be sent to a light node, which can check the error directly.
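Under the toy encoding sketched earlier, the full node's check is literally recompute-and-compare. In the sketch below, the fraud-proof format (a row index and the claimed row) is a deliberate simplification; real fraud proofs carry Merkle openings against the block's commitments.

```python
def find_fraud(X, Z_claimed):
    """Recompute the extension of X and compare. Returns None if
    Z_claimed is correct, else a (simplified) fraud proof: the index
    and claimed contents of a row containing an error."""
    Z_honest = (G @ X @ Gp.T) % P
    mismatches = np.argwhere(Z_honest != Z_claimed)
    if mismatches.size == 0:
        return None
    i = int(mismatches[0, 0])
    return i, Z_claimed[i, :]

Z_bad = Z.copy()
Z_bad[2, 5] = (Z_bad[2, 5] + 1) % P  # corrupt a single symbol
assert find_fraud(X, Z) is None
assert find_fraud(X, Z_bad)[0] == 2
```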

This approach assumes that each light node is connected to at least one honest full node that can furnish a fraud proof in the event of a bad encoding. While there is zero prover overhead, each full node must download the entire block and recompute the extended data square $Z$ itself. Since we assume that at least one of the $N$ full nodes connected to each light node is honest, we ideally want $N$ to be reasonably large. This, in turn, means that a lot of redundant data is downloaded (and compute is spent) in aggregate across the network. Larger blocks also shrink the pool of full nodes that can participate in securing the network, since they require machines with more bandwidth and compute.

2D KZG construction. Another approach, initially proposed in [But20], is to commit to each row and column of $Z$ using the KZG polynomial commitment scheme [KZG10], and to compute a KZG opening proof for each entry of $Z$ that verifies against the commitment for its associated row or column. The drawback of this approach is that these commitments and per-entry opening proofs are concretely slow and expensive to compute, imposing cost and latency on the network. While it is possible to parallelize the process over many provers, significant network-wide proving overhead (and indeed, proof overhead, as every sampled element carries an associated proof) remains.

Drawbacks. While both approaches to proving the correctness of $Z$ are considered sufficiently practical, they have drawbacks in terms of both efficiency and trust. The fraud-proof-based construction requires redundant bandwidth and computation while assuming that each light node is connected to an honest full node. The KZG-based construction imposes significant proving overhead, assumes a trusted setup, and is not plausibly post-quantum secure.

Other systems. Recently, hashing-based proof systems have emerged as a potential alternative for DAS thanks to the works of [HASW23,HASW24]. These constructions feature low prover overhead, don’t require a trusted setup, and are plausibly post-quantum secure. However, they impose extremely high overheads on the network by requiring each light node to redundantly download a (large) non-interactive proof of proximity. In our previous post, we showed how to leverage interaction to reduce this overhead substantially. ZODA takes this idea to its logical extreme: what if we leveraged interaction to the maximum extent possible to reduce the proof overhead to zero?

ZODA. The main idea behind ZODA is simple. In fact, ZODA looks almost identical to the fraud-proof-based construction we presented in the first figure, with one small tweak. As before, we start by encoding the columns of $\tilde X$. In ZODA, we commit to this intermediate encoding (for example, using a Merkle tree). We then use entropy from this commitment to generate a random vector $\bar r$, sampled from some (large enough) field. We construct a diagonal matrix $D_{\bar r}$ with this vector along the diagonal and multiply $\tilde X$ by $D_{\bar r}$. (This is a linear-time operation which is concretely very fast relative to the encoding itself.) Finally, we encode the rows, as before. This amended construction is depicted below.
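Continuing the toy example, the tweak is a few lines. The hash-derived seed below stands in for entropy extracted from a real (Merkle) commitment, and, as before, all parameters are illustrative:

```python
import hashlib

# Step 1: encode the columns of X-tilde, as before.
Y = (G @ X) % P  # intermediate encoding, shape (2k, k)

# Step 2: commit to Y and derive the random vector r-bar from the
# commitment. A real protocol uses a Merkle root; a hash stands in.
seed = int.from_bytes(hashlib.sha256(Y.tobytes()).digest()[:8], "big")
r = np.random.default_rng(seed).integers(0, P, size=k)

# Step 3: multiply by D_r (scale column j by r[j]; this commutes with
# the column encoding), then encode the rows as before.
Yr = (Y * r) % P        # equals G @ X @ diag(r), mod P
Z_r = (Yr @ Gp.T) % P   # randomized extended square, shape (2k, 2k)
```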

Surprisingly, this tweak makes the encoding provably correct! Specifically, nodes sample random rows and columns and check that these are valid codewords and that they are consistent at their intersections. Repeating this procedure for enough rows and columns establishes two properties with high probability: first, the matrix $Z$ is close to an encoding of a (unique) $\tilde X$; second, the sampled rows and columns that pass verification are symbols of this unique codeword. In the ZODA paper, we prove these properties by adapting the proximity test of the Ligero [AHIV17] protocol. Intuitively, injecting randomness from the intermediate commitment prevents a malicious encoder from behaving adaptively. Importantly, these rows and columns are also valid samples for DAS and can be used to reconstruct the original square. In other words, if enough nodes verify that the encoding was correctly constructed, they will, with high probability, also have gathered enough data to reconstruct $\tilde X$. The proof carries zero overhead: nearly every bit of data that a node downloads as part of the correct-encoding proof is also useful for reconstruction. In the example below, nodes can decode if they collectively sample and validate either half the rows or half the columns.

Figure 1: ZODA verification. A light node verifies that a row and column are valid codewords and are consistent at their intersection (marked with a green checkmark).
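A sketch of these checks on the toy square $Z_r$ from above. Because the sketch's Reed–Solomon code evaluates polynomials at consecutive points, a word is a codeword exactly when its $k$-th finite differences vanish mod $p$; this compact test stands in for the proximity test used in the actual protocol:

```python
def is_codeword(word, k, p=P):
    """Evaluations of a degree-<k polynomial at 0, 1, ..., n-1 have
    vanishing k-th finite differences mod p."""
    d = np.asarray(word, dtype=np.int64)
    for _ in range(k):
        d = (d[1:] - d[:-1]) % p
    return bool(np.all(d == 0))

# A node samples a random row i and column j of Z_r and checks that
# both are codewords and that they agree at their intersection.
rng = np.random.default_rng(3)
i, j = rng.integers(0, 2 * k, size=2)
row, col = Z_r[i, :], Z_r[:, j]

assert is_codeword(row, k)  # row is a codeword of the row code G'
assert is_codeword(col, k)  # column is a codeword of the column code G
assert row[j] == col[i]     # consistent at the intersection Z_r[i, j]
```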

Computational overhead. The construction also has essentially zero prover overhead. This may seem counterintuitive: usually, when considering general computation, going from a fraud-proof to a validity-proof system (for example, switching from an optimistic to a ZK rollup) imposes a few orders of magnitude of prover overhead. Indeed, if we were to prove the construction of $Z$ in a general-purpose succinct proof system, we would incur similar overhead. However, in the case of ZODA, the validity-provable data square and the fraud-provable data square take about the same time to compute. Why is this? In DAS, encoding $\tilde X$ and committing to the result is already part of the protocol. Moreover, nodes already need to collectively download a large portion of the data square $Z$ in order to ensure that data is available. These features shift the trade-off space to heavily favor proof systems that natively rely on error-correcting codes and hashing. In particular, notice that the fraud-provable data square construction of [ASB18] and Celestia already closely resembles a Ligero prover. Injecting some randomness into the encoding procedure lets us leverage the results of Ligero directly to ensure that the square is correctly encoded.

Conclusion. What are the implications in practice? The most obvious points relate to increasing scalability and lowering trust assumptions. It is possible to run ZODA with cryptographic security (soundness $2^{-80}$ or $2^{-100}$). Doing so requires downloading only a fraction of the block while getting guarantees similar to those of Celestia full nodes today. It may be possible to entirely replace full nodes in a protocol like Celestia with ZODA nodes, saving on network-wide bandwidth and compute, especially for large blocks exceeding 1 GB. In the setting of consensus, ZODA nodes can download even less data, enabling significantly higher data throughput, as suggested in [Val24]. In the paper, we also discuss a few techniques for enabling resource-constrained light nodes to use ZODA directly to gain higher assurances about their samples. We hope to expand on these ideas in future posts, so stay tuned.

Acknowledgments

We’d like to deeply thank John Adler, Dev Ojha, and Mustafa Al-Bassam for their time and all of the helpful conversations regarding Celestia, data availability sampling, and applications of ZODA to consensus and scaling, which were invaluable in writing this paper. We would also like to thank Kobi Gurkan for helpful edits and suggestions to this post.

Citations

[AHIV17] Scott Ames, Carmit Hazay, Yuval Ishai, and Muthuramakrishnan Venkitasubramaniam. Ligero: Lightweight sublinear arguments without a trusted setup. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 2087–2104, Dallas, Texas, USA, October 2017. ACM.

[ASB18] Mustafa Al-Bassam, Alberto Sonnino, and Vitalik Buterin. Fraud proofs: Maximising light client security and scaling blockchains with dishonest majorities. CoRR, abs/1809.09044, 2018.

[But20] Vitalik Buterin. 2D data availability with Kate commitments, 2020. Accessed 14 September 2024.

[HASW23] Mathias Hall-Andersen, Mark Simkin, and Benedikt Wagner. Foundations of data availability sampling. Cryptology ePrint Archive, Paper 2023/1079, 2023.

[HASW24] Mathias Hall-Andersen, Mark Simkin, and Benedikt Wagner. FRIDA: Data availability sampling from FRI. Cryptology ePrint Archive, Paper 2024/248, 2024.

[KZG10] Aniket Kate, Gregory M. Zaverucha, and Ian Goldberg. Constant-size commitments to polynomials and their applications. In Masayuki Abe, editor, Advances in Cryptology – ASIACRYPT 2010, pages 177–194, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg.

[Val24] ValarDragon. Use DAS to speedup consensus throughput, 2024. Accessed 9 November 2024.
