Solstat: A statistical approximation library
Waylon Jepsen
Colin Roberts
Smart Contracts, Mathematics, Technical
<h2 id="numerical-approximations">Numerical Approximations</h2>
<p>There are many useful mathematical functions that engineers use in designing applications. This body of knowledge is more widely described as <em>approximation theory</em>, for which there are many <a href="https://xn--2-umb.com/22/approximation/">great resources</a>. Examples of functions that need approximation, and that are particularly useful to us at Primitive, are those relating to the <em>Gaussian</em> (or <em>normal</em>) distribution. Gaussians are fundamental to statistics, probability theory, and engineering (e.g., <a href="https://en.wikipedia.org/wiki/Central_limit_theorem">the Central Limit Theorem</a>).</p>
<p>At Primitive, our RMM-01 trading curve relies on the (standard) Gaussian probability density function (PDF) $\phi(x)=\frac{1}{\sqrt{2\pi}} e^{-x^2/2}$, the cumulative distribution function (CDF) $\Phi$, and its inverse $\Phi^{-1}$. These specific functions appear due to how <a href="https://en.wikipedia.org/wiki/Brownian_motion">Brownian motion</a> appears in the pricing of <a href="https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model">Black-Scholes European options</a>. The Black-Scholes model assumes <a href="https://en.wikipedia.org/wiki/Geometric_Brownian_motion">geometric Brownian motion</a> to get a concrete valuation of an option over its maturity.</p>
<p><code>solstat</code> is a Solidity library that approximates these Gaussian functions. It was built to achieve a high degree of accuracy when computing Gaussian approximations within the compute-constrained environment of the blockchain. Solstat is open source and available for anyone to use. One interesting use case, showcased by the team at <a href="https://asphodel.io/">asphodel</a>, is designing drop rates, spawn rates, and statistical distributions with structured randomness in onchain games.</p>
<p>In the rest of this article, we'll dive deep into function approximations, their applications, and our methodology.</p>
<h2 id="approximating-functions-on-computers">Approximating Functions on Computers</h2>
<p>The first step in evaluating complicated functions with a computer involves determining whether or not the function can be evaluated "directly", i.e. with instructions native to the processing unit. All modern processing units provide basic binary operations of addition (possibly subtraction) and multiplication. In the case of simple functions like $f(x)=mx+b$ where $m$, $x$, and $b$ are integers, computing an output can be done <em>efficiently</em> and <em>accurately</em>.</p>
<p>Complex functions like the Gaussian PDF $\phi(x)$ come with their own unique set of challenges. These functions cannot be evaluated directly because computers only have native opcodes or logical circuits/gates that handle simple binary operations such as addition and subtraction. Furthermore, integer types are native to computers since their mapping from bits is canonical, but decimal types are not ubiquitous. There can be no exponential or logarithmic opcodes for classical CPUs, as they would require infinitely many gates. There is no way to represent arbitrary real numbers without information loss in computer memory.</p>
<p>This begs the question: How can we compute $\phi(x)$ with this restrictive set of tools? Fortunately, this problem is extremely old, dating back to the human desire to compute complicated expressions by hand. After all, the first "computers" were people! Of course, our methodologies have improved drastically over time.</p>
<p>What is the optimal way of evaluating arbitrary functions in this specific environment? Generally, engineers try to balance the "best possible scores" given the computation economy and desired accuracy. If constrained to a fixed amount of numerical precision (e.g., a max error of $10^{-18}$), what is the least amount of:</p>
<ul>
<li><strong>(Storage)</strong> space needed (e.g., to store coefficients)?</li>
<li><strong>(Computation)</strong> clock cycles the processor must perform?</li>
</ul>
<p>What is the best reasonable approximation for a fixed amount of storage/computational use (e.g., CPU cycles or bits)?</p>
<ul>
<li><strong>(Absolute accuracy)</strong> Over a given input domain, what is the worst-case error of the approximation compared to the actual function?</li>
<li><strong>(Relative accuracy)</strong> Does the approximation perform well over a given input domain relative to the magnitude of the range of our function?</li>
</ul>
<p>The above questions are essential to consider when working with the Ethereum blockchain. Every computational step that is involved in mutating the machine's state will have an associated gas cost. Furthermore, DeFi protocols expect to be accurate down to the <code>wei</code>, which means practical absolute accuracy down to $10^{-18}$ ETH is of utmost importance. Precision to $10^{-18}$ is near the accuracy of an "atom's atom", so reaching these goals is a unique challenge.</p>
<h2 id="our-computers-toolbox">Our Computer's Toolbox</h2>
<p>Classical processing units deal with binary information at their core and have basic circuits implemented as logical opcodes. For instance, an <code>add_bits</code> opcode is just a realization of the following digital circuit:</p>
<p><img src="/assets/blog/solstat/full_adder.png" alt=""></p>
<p>These gates form an adder because they define an addition operation over binary numbers. Full adders can be strung together via the carry-out pin to build wider adders. For example, a <a href="https://en.wikipedia.org/wiki/Adder_(electronics)#Ripple-carry_adder">ripple carry adder</a> can be implemented this way and extended to an arbitrary size, such as the 256-bit words used in Ethereum.</p>
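<p>As a quick sketch (illustrative only, not part of <code>solstat</code>), the one-bit full adder in the figure can be written directly with bitwise operations:</p>
<pre><code class="language-solidity">
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

/// Sketch of the one-bit full adder circuit above, expressed with
/// bitwise operations. Illustrative only; not part of solstat.
library FullAdder {
    /// @param a       first input bit (0 or 1)
    /// @param b       second input bit (0 or 1)
    /// @param carryIn carry-in bit (0 or 1)
    function add(uint256 a, uint256 b, uint256 carryIn)
        internal
        pure
        returns (uint256 sum, uint256 carryOut)
    {
        sum = a ^ b ^ carryIn;                    // XOR of all three inputs
        carryOut = (a & b) | (carryIn & (a ^ b)); // carry out when two inputs are set
    }
}
</code></pre>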
<p>Note that adders introduce an error called <em>overflow</em>. Using <code>add_4bits</code> to add <code>0001</code> to <code>1111</code>, the four bits of storage available for the result are exhausted. This case must be handled within the program. Fortunately, for Ethereum's 256-bit numbers, overflow is far less of an issue due to the magnitude of the numbers expressible ($2^{256}\approx 10^{77}$). For perspective, to overflow 256-bit addition, one would need to add numbers on the order of the estimated number of atoms in the universe ($\approx 10^{79}$). Furthermore, community best practices for handling overflows in the EVM are well understood.</p>
<p>At any rate, repeated addition can be used to build multiplication and repeated multiplication to get integer powers. In math/programming terms:</p>
<p>$$
3\cdot x =\operatorname{multiply}(x,3)=\underbrace{\operatorname{add}(x,\operatorname{add}(x,x))}_{2\textrm{ additions}}
$$</p>
<p>and for powers:</p>
<p>$$
x^3=\operatorname{pow}(x,3)=\underbrace{\operatorname{multiply}(x,\operatorname{multiply}(x,x))}_{2\textrm{ multiplications}}.
$$</p>
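<p>As a sketch (hypothetical helpers, not how the EVM actually implements its native <code>MUL</code> and <code>EXP</code> opcodes), this bootstrapping looks like:</p>
<pre><code class="language-solidity">
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

/// Hypothetical helpers showing multiplication built from repeated
/// addition and powers built from repeated multiplication.
library Bootstrapped {
    /// Computes n * x with n - 1 additions (assumes n >= 1).
    function multiply(uint256 x, uint256 n) internal pure returns (uint256 result) {
        result = x;
        for (uint256 i = 1; i < n; i++) {
            result = result + x;
        }
    }

    /// Computes x ** n with n - 1 multiplications (assumes n >= 1).
    function pow(uint256 x, uint256 n) internal pure returns (uint256 result) {
        result = x;
        for (uint256 i = 1; i < n; i++) {
            result = result * x;
        }
    }
}
</code></pre>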
<p>Subtraction and division can also be defined for gate/opcode-level integers. Subtraction has overflow issues similar to addition's. Integer division returns a quotient and a remainder. This can be extended to integer/decimal representations of rational numbers (e.g., fractions) using floating-point or fixed-point libraries like <a href="https://github.com/abdk-consulting/abdk-libraries-solidity">ABDK</a> or the one in <a href="https://github.com/transmissions11/solmate/blob/ed67feda67b24fdeff8ad1032360f0ee6047ba0a/src/utils/FixedPointMathLib.sol">Solmate</a>. Depending on the implementation, division can be more computationally intensive than multiplication.</p>
<h3 id="more-functionality">More Functionality</h3>
<p>With extensions of division and multiplication, negative powers can be constructed such that:</p>
<p>$$
x^{-1}=\frac{1}{x}=\operatorname{divide}(1,x).
$$</p>
<p>None of these abstractions allow computers to represent numbers with infinite precision. There can never be an exact binary representation of irrational numbers like $\pi$, $e$, or $\sqrt{2}$. Numbers like $\sqrt{2}$ <em>can</em> be represented exactly in <a href="https://en.wikipedia.org/wiki/Computer_algebra_system">computer algebra systems (CAS)</a>, but this is unattainable in the EVM at the moment.</p>
<p>Without computer algebra systems, quick and accurate algorithms for computing approximations of functions like $\sqrt{x}$ must be developed. Interestingly, $\sqrt{x}$ arises in the infamous fast inverse square root approximation from <a href="https://www.youtube.com/watch?v=p8u_k2LIZyo">Quake III</a>, which is an excellent example of an approximation optimization yielding a significant performance improvement.</p>
<h3 id="rational-approximations">Rational Approximations</h3>
<p>The EVM provides access to addition, multiplication, subtraction, and division operations. With no other special-case tricks like the Quake square root algorithm, the best programs on the EVM can do is work directly with <em>rational functions</em> of the form:</p>
<p>$$
P_{m,n}(x)=\frac{\alpha_0 +\alpha_1 x + \alpha_2 x^2 + \cdots + \alpha_m x^m}{\beta_0 + \beta_1 x + \beta_2 x^2 + \cdots + \beta_n x^n}.
$$</p>
<p>The problem is that most functions are not rational functions! EVM programs need a way to determine the coefficients $\alpha$ and $\beta$ for a rational approximation. A good analogy can be made to polynomial approximations and power series.</p>
<h2 id="using-our-small-toolbox">Using our Small Toolbox</h2>
<p>When dealing with approximations, an excellent place to start is to ask the following questions: Why is an approximation needed? What solutions already exist, and what methodology do they employ? How many digits of accuracy are needed? The answers to these questions provide a solid baseline for formulating approximation specifications.</p>
<p>Transcendental or special functions are analytic functions that cannot be expressed as rational functions with finite powers. Some examples of transcendental functions are the exponential function $\exp(x)$, its inverse the natural logarithm $\ln(x)$, and general exponentiation $x^y$. However, if the target function being approximated has some nice properties (e.g., it is differentiable), it can be locally approximated with a polynomial. This is seen in the context of <a href="https://en.wikipedia.org/wiki/Taylor%27s_theorem">Taylor's theorem</a> and more broadly in the <a href="https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem">Stone-Weierstrass theorem</a>.</p>
<h3 id="power-series">Power Series</h3>
<p>Polynomials (like $P_N(x)$ below) are a useful theoretical tool that also allow for function approximations.</p>
<p>$$
P_N(x)=\sum_{n=0}^N a_nx^n=a_0+a_1x+a_2x^2+a_3x^3+\cdots + a_N x^N
$$</p>
<p>Only addition, subtraction, and multiplication are needed. There is no need for division implementations on the processor. More generally, an infinite polynomial called a <em>power series</em> can be written by specifying an infinite set of coefficients $\{a_0,a_1,a_2,\dots\}$ and combining them as:</p>
<p>$$
\sum_{n=0}^\infty a_n x^n.
$$</p>
<p>A specific way to get a power series approximation for a function $f$ around some point $x_0$ is by using Taylor's theorem to define the series by:</p>
<p>$$
\sum_{n=0}^\infty \frac{f^{(n)}(x_0)}{n!}(x-x_0)^n = f(x_0) + f'(x_0)(x-x_0) + \frac{f''(x_0)}{2!}(x-x_0)^2 + \frac{f'''(x_0)}{3!}(x-x_0)^3 +\cdots
$$</p>
<p>Intuitively, Taylor series approximations are built by constructing the best "tangent polynomial." For example, the 1st-order Taylor approximation of $f$ is the tangent line to $f$ at $x_0$:</p>
<p>$$
f(x)\approx f(x_0)+f'(x_0)(x-x_0)=\underbrace{f'(x_0)}_{\textrm{slope}}x+\underbrace{f(x_0)-f'(x_0)x_0}_{y\textrm{-intercept}}.
$$</p>
<p>For $\exp(x)$, there is the resulting series</p>
<p>$$
\exp(x)=\sum_{n=0}^\infty \frac{x^n}{n!}
$$</p>
<p>when approximating around $x_0=0$.</p>
<p>Since polynomials can only locally approximate transcendental functions, the choice of where to center the approximation matters.</p>
<p>The infinite series is precisely equal to the function $\exp(x)$, and by truncating the series at some finite value, say $N$, there is a resulting <em>polynomial approximation</em>:</p>
<p>$$
\exp(x)\approx\sum_{n=0}^N \frac{x^n}{n!} = 1+x+\frac{x^2}{2}+\cdots + \frac{x^N}{N!}.
$$</p>
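<p>To make this concrete, here is a minimal sketch (not <code>solstat</code>'s implementation) of the truncated series in 18-decimal fixed point. It uses Horner's rule, rewriting the sum as $1+x\left(1+\frac{x}{2}\left(1+\frac{x}{3}\left(\cdots\right)\right)\right)$ so that $N!$ is never formed explicitly:</p>
<pre><code class="language-solidity">
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

/// Sketch (not solstat's implementation) of a truncated Taylor series for
/// exp(x) in 18-decimal ("wad") fixed point, evaluated with Horner's rule:
///   exp(x) ~ 1 + x(1 + x/2(1 + x/3(... (1 + x/N))))
library TaylorExp {
    int256 internal constant WAD = 1e18;

    /// @param x wad-scaled input; the series is centered at 0, so this is
    ///          only accurate for small |x| and callers should range-reduce.
    /// @param N truncation order; at most 19 given 18 decimals of precision.
    function expTaylor(int256 x, uint256 N) internal pure returns (int256 result) {
        result = WAD; // innermost term: 1
        for (uint256 i = N; i >= 1; i--) {
            // result = 1 + (x / i) * result, all in wad arithmetic
            result = WAD + (x * result) / (int256(i) * WAD);
        }
    }
}
</code></pre>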
<p>The function $\phi(x)$ can be recovered by scaling the input and output of $\exp$:</p>
<p>$$
\sqrt{2\pi}\,\phi(\sqrt{2}x)=\exp(-x^2)=\sum_{n=0}^\infty \frac{(-x^2)^n}{n!}.
$$</p>
<p>The figure below shows what various orders of polynomial approximation look like compared to the function itself.</p>
<p><img src="/assets/blog/solstat/polynomial_approx.png" alt=""></p>
<p>This solves the infinity problem, and these polynomials can now be obtained procedurally, at least for functions that are $N$ times differentiable. In theory, the Taylor polynomial can be made as accurate as needed by increasing $N$. However, there are some restrictions to keep in mind. Since factorials grow exceptionally fast, there may not be enough precision to represent numbers like $\frac{1}{N!}$. Because $20!>10^{18}$, for tokens with 18 decimals the highest-order polynomial approximation of $\exp(x)$ on Ethereum can only have degree 19. Furthermore, polynomials have some limiting properties:</p>
<ol>
<li>(No poles) Polynomials never have vertical asymptotes.</li>
<li>(Unboundedness) Non-constant polynomials always approach either infinity or negative infinity as $x\to \pm\infty$.</li>
</ol>
<p>An excellent example of this failure is the function $\phi(x)$, which asymptotically approaches 0 as $x\to \pm \infty$. Polynomials don't do well approximating this! Another, even simpler function, $f(x)=\frac{1}{x}$, can be approximated by polynomials away from $x=0$, but doing so is a bit tedious. Why do this when division can be used to compute $f(x)$ directly? The polynomial route is more expensive, and decentralized application developers must be frugal when using the EVM.</p>
<h3 id="laurent-series">Laurent Series</h3>
<p>Polynomial approximations are a good start, but they have some problems. Succinctly, there are ways to more accurately approximate functions with <em>poles</em> or those that are <em>bounded</em>.</p>
<p>This form of approximation is rooted in complex analysis. In most cases, a real-valued function $f(x)=y$ can be extended to allow complex inputs and outputs, $f(z)=w$. This small change enables the <a href="https://en.wikipedia.org/wiki/Laurent_series"><em>Laurent series</em></a> expression for functions $f(z)$. A Laurent series includes negative powers and, in general, looks like:</p>
<p>$$
f(z) = \sum_{n=-\infty}^{\infty}a_nz^n = \cdots+ a_{-2}\frac{1}{z^2} + a_{-1} \frac{1}{z} + a_{0} + a_1z + a_2z^2+ \cdots
$$</p>
<p>For a function like $f(x)=\frac{1}{x}$, the Laurent series is specified by the coefficients $a_{-1}=1$ and $a_n=0$ for $n\neq -1$. If $f$ is implemented as a Laurent series, the precision is exactly the precision of the division algorithm!</p>
<p>The idea of the Laurent series is immensely powerful, but it can be economized further by writing down an approximate form of the function slightly differently.</p>
<h3 id="rational-approximations-1">Rational Approximations</h3>
<p>"If you sat down long enough and thought about ways to rearrange addition, subtraction, multiplication, and division in the context of approximations, you would probably write down an expression close to this":</p>
<p>$$
P_{m,n}(x)=\frac{\alpha_0 + \alpha_1 x + \alpha_2 x^2 + \cdots + \alpha_m x^m}{\beta_0+\beta_1 x + \beta_2 x^2 + \cdots +\beta_n x^n}.
$$</p>
<p>Specific ways to arrange the fundamental operations can benefit particular applications. For example, there are ways to determine coefficients $\alpha$ and $\beta$ that do not run into the issue of being smaller than the machine's level of precision. Fewer total operations are needed, and, simultaneously, less total storage is used for the coefficients.</p>
<p>Aside from computational efficiency, another benefit of using rational functions is the ability to express degenerate function behavior such as singularities (poles/infinities), boundedness, or asymptotic behavior. Qualitatively, the functions $\exp(-x^2)=\sqrt{2\pi}\phi(\sqrt{2}x)$ and $\frac{1}{1+x^2}$ look very similar on all of $\R$ and the approximation fares far better than $1-x^2$ outside of a narrow range. See the labeled curves in the following figure.</p>
<p><img src="/assets/blog/solstat/rational_vs_polynomial.png" alt=""></p>
<h3 id="continued-fraction-approximations">Continued fraction approximations</h3>
<p>The degree of accuracy for a given approximation should be selected based on need and with respect to environmental constraints. The approximations in <code>solstat</code> use an economized continued fraction expansion:</p>
<p>$$
a_0+\frac{x}{a_1+\frac{x}{a_2+\frac{x}{a_3+\frac{x}{~~~\ddots}}}}
$$</p>
<p>This is typically a faster way to compute the value of a function. It's also a way of defining the Golden Ratio (the <strong>most</strong> irrational number):</p>
<p>$$
\varphi = 1+\frac{1}{1+\frac{1}{1+\frac{1}{1+\frac{1}{~~~\ddots}}}}
$$</p>
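<p>Truncating the fraction after finitely many levels and evaluating from the innermost term outward gives an approximation of $\varphi\approx 1.618$; a sketch in wad fixed point (illustrative only):</p>
<pre><code class="language-solidity">
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

/// Sketch: evaluate the golden-ratio continued fraction bottom-up in wad
/// fixed point. Illustrative only; not part of solstat.
library GoldenRatio {
    uint256 internal constant WAD = 1e18;

    /// phi = 1 + 1/(1 + 1/(1 + ...)), truncated at `levels` and evaluated
    /// from the innermost term outward.
    function phi(uint256 levels) internal pure returns (uint256 result) {
        result = WAD; // innermost term: 1
        for (uint256 i = 0; i < levels; i++) {
            result = WAD + (WAD * WAD) / result; // 1 + 1/result
        }
    }
}
</code></pre>
<p>Successive truncations give $2, 1.5, 1.667, 1.6, \dots$, oscillating toward $\varphi$.</p>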
<p>Finding continued fraction approximations can be done analytically or from Taylor coefficients. There are some specific use cases for functions that have nice recurrence relations (e.g., factorials) since they work well algebraically with continued fractions. The implementation for <code>solstat</code> is based on these types of approximations due to some special relationships defined later.</p>
<h3 id="finding-and-transforming-between-approximations">Finding and Transforming Between Approximations</h3>
<p>Thus far, this article has not discussed how to obtain these approximations, aside from the case of the Taylor series. In each case, an approximation consists of a list of coefficients (e.g., Taylor coefficients $\{a_0,a_1,\dots\}$) and a map to some expression with finitely many primitive function calls (e.g., a polynomial $a_0+a_1x+\cdots$).</p>
<p>The cleanest analytical example is the Taylor series since the coefficients for well-behaved functions can be found by computing derivatives by hand. When this isn't possible, results can be computed numerically using finite difference methods, e.g., the first-order central difference, in order to extract these coefficients:</p>
<p>$$
f'(x)\approx \frac{f(x+h/2)-f(x-h/2)}{h}
$$</p>
<p>However, this can be impractical when the coefficients approach the machine precision level. Laurent series coefficients can be determined in a similar fashion.</p>
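<p>In practice this coefficient extraction happens offchain, but the arithmetic is simple; a wad fixed-point sketch (illustrative only):</p>
<pre><code class="language-solidity">
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

/// Sketch of a first-order central difference in wad fixed point.
/// Coefficient extraction like this is normally done offchain; this is
/// purely illustrative.
library CentralDiff {
    int256 internal constant WAD = 1e18;

    /// f'(x) ~ (f(x + h/2) - f(x - h/2)) / h, with x and h wad-scaled
    /// and f returning wad-scaled values.
    function derivative(
        function (int256) pure returns (int256) f,
        int256 x,
        int256 h
    ) internal pure returns (int256) {
        return ((f(x + h / 2) - f(x - h / 2)) * WAD) / h;
    }
}
</code></pre>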
<p>Similarly, the coefficients of a rational function (or <a href="https://en.wikipedia.org/wiki/Pad%C3%A9_approximant">Padé</a>) approximation can be determined using an iterative algorithm (akin to <a href="https://en.wikipedia.org/wiki/Newton%27s_method">Newton's method</a>). Choose the orders $m$ and $n$ of the numerator and denominator polynomials, then solve for the coefficients. Many software packages have built-in implementations to find these coefficients efficiently, or a solver can be implemented using something like <a href="https://mathworld.wolfram.com/WynnsEpsilonMethod.html">Wynn's epsilon algorithm</a> or the <a href="https://en.wikipedia.org/wiki/Minimax_approximation_algorithm">minimax approximation algorithm</a>.</p>
<p>All of the aforementioned approximations can be transformed into one another depending on the use case. Most of these transformations (e.g., turning a polynomial approximation into a continued fraction approximation) amount to solving a linear problem or determining coefficients through numerical differentiation. Try different solutions and see which is best for a given application. This can take some trial and error. Theoretically, these algorithms seek to determine the approximation with a minimized maximal error (i.e., minimax problems).</p>
<h3 id="breaking-up-the-approximations">Breaking up the approximations</h3>
<p>Functions $f\colon X \to Y$ also come with domains of definition $X$. Intuitively, for functions with bounded derivatives, the absolute error of an approximation is proportional to the size of the domain. When trying to approximate $f$ over all of $X$, the smaller the set $X$, the better. It only takes $n+1$ points to define a polynomial of degree $n$, so a domain $X$ with only $n+1$ points can be matched exactly by a polynomial.</p>
<p>For domains with infinitely many points, reducing the measure of the region approximated over is still beneficial, especially when trying to minimize absolute error. For more complicated functions (especially those with large derivatives), breaking up the domain $X$ into $r$ different subdomains can be helpful.</p>
<p>For example, suppose that over $X=[0,1]$ a 5th-degree polynomial approximation of $f\colon [0,1]\to \R$ has a max absolute error of $10^{-4}$. Splitting the domain into $r=2$ even-sized pieces gives $f_1\colon [0,1/2]\to \R$ and $f_2\colon [1/2,1] \to \R$, and the original algorithm is run on each piece to determine two distinct sets of coefficients. On their respective domains, $f_1$ and $f_2$ might only have $10^{-6}$ in error. Yet, if $f_1$ is extended outside of $[0,1/2]$, the error can grow to $10^{-2}$. Each piece of the function is optimized purely for its reduced domain.</p>
<p>Breaking domains into smaller pieces allows for piecewise approximations that can be better than any non-piecewise implementation. At some point, a piecewise approximation requires so many conditional checks that it becomes a headache, but within reason it can be incredibly efficient. Classic examples are piecewise linear approximations and <a href="https://en.wikipedia.org/wiki/Spline_(mathematics)">(cubic) splines</a>.</p>
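<p>A sketch of the branching a piecewise approximation implies, with placeholder (not fitted) linear coefficients for each piece of $[0,1]$ split at $1/2$:</p>
<pre><code class="language-solidity">
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

/// Sketch of a piecewise approximation over [0, 1] split at 1/2: each
/// piece gets its own coefficients, selected by a single conditional.
/// Coefficients below are placeholders, not fitted values.
library Piecewise {
    int256 internal constant WAD = 1e18;
    int256 internal constant HALF = 5e17;

    function f(int256 x) internal pure returns (int256) {
        int256 a;
        int256 b;
        if (x < HALF) {
            (a, b) = (int256(1e17), 2e17);  // hypothetical coefficients for [0, 1/2]
        } else {
            (a, b) = (int256(2e17), -1e17); // hypothetical coefficients for [1/2, 1]
        }
        return a + (b * x) / WAD; // evaluate a + b*x in wad math
    }
}
</code></pre>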
<h2 id="ethereum-environment">Ethereum Environment</h2>
<p>In the Ethereum blockchain, every transaction that updates the world state costs gas based on how many computational steps are required to compute the state transition. This constraint puts pressure on smart contract developers to write efficient code. Onchain storage itself also has an associated cost!</p>
<p>Furthermore, most tokens on the blockchain occupy 256 bits of total storage for each account balance, so balances can be thought of as <code>uint256</code> values. Fixed-point math is required for accurate pricing on smart-contract-based exchanges. Fixed-point libraries take the <code>uint256</code> and treat it as a <code>wad</code> value, which assumes there are 18 decimal places in the integer expansion of the <code>uint256</code>. As a result, the most accurate (or even "perfect") approximations onchain are precise to 18 decimal places.</p>
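<p>For example, with Solmate's <code>FixedPointMathLib</code>, multiplying two wad values rescales the raw product by $10^{18}$:</p>
<pre><code class="language-solidity">
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Import path assumes a standard solmate remapping.
import {FixedPointMathLib} from "solmate/utils/FixedPointMathLib.sol";

/// Sketch of wad arithmetic: integers carry 18 implied decimal places,
/// so products must be rescaled by 1e18 after multiplying.
contract WadExample {
    using FixedPointMathLib for uint256;

    function demo() external pure returns (uint256) {
        uint256 oneAndAHalf = 1.5e18; // 1.5 in wad
        uint256 two = 2e18;           // 2.0 in wad
        // mulWadDown computes (x * y) / 1e18, i.e., 3e18 here.
        return oneAndAHalf.mulWadDown(two);
    }
}
</code></pre>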
<p>Consequently, it is of great importance to be considerate of the EVM when making approximations onchain. All of the techniques above can be used to make approximations that are simultaneously economical and accurate to near $10^{-18}$. To get full $10^{-18}$ precision, a rational approximation would need coefficients with higher than 256-bit precision and the associated operations.</p>
<h2 id="solstat-implementation">Solstat Implementation</h2>
<p>A continued fraction approximation of the Gaussian distribution is performed in <a href="https://github.com/primitivefinance/solstat/blob/main/src/Gaussian.sol">Gaussian.sol</a>. <a href="https://github.com/transmissions11/solmate/blob/ed67feda67b24fdeff8ad1032360f0ee6047ba0a/src/utils/FixedPointMathLib.sol">Solmate</a> is used for fixed-point operations alongside a custom library for units called <a href="https://github.com/primitivefinance/solstat/blob/main/src/Units.sol">Units.sol</a>. The majority of the logic is located in <a href="https://github.com/primitivefinance/solstat/blob/main/src/Gaussian.sol">Gaussian.sol</a>.</p>
<p>First, a collection of constants used for the approximation is defined alongside custom errors. These constants come from a special technique that yields a continued fraction approximation of a related function, the <a href="https://en.wikipedia.org/wiki/Gamma_function">gamma function</a> (more specifically, the <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function">incomplete gamma function</a>). By changing specific inputs/parameters to the incomplete gamma function, the <a href="https://en.wikipedia.org/wiki/Error_function">error function</a> can be obtained. The error function is a shift and a scaling away from the Gaussian CDF $\Phi(x)$.</p>
<h3 id="gaussian">Gaussian</h3>
<p>The Gaussian contract implements a number of functions important to the Gaussian distribution. Importantly, all of these implementations are for mean $\mu = 0$ and variance $\sigma^2 = 1$.</p>
<p>These implementations are based on the <a href="https://e-maxx.ru/bookz/files/numerical_recipes.pdf">Numerical Recipes</a> textbook and its C implementation. <a href="https://e-maxx.ru/bookz/files/numerical_recipes.pdf">Numerical Recipes</a> cites the original text by Abramowitz and Stegun, the <a href="https://personal.math.ubc.ca/~cbm/aands/abramowitz_and_stegun.pdf">Handbook of Mathematical Functions</a>, which can be read to understand these functions and the implications of their numerical approximations more thoroughly. The implementation is also differentially tested against the <a href="https://github.com/errcw/gaussian">JavaScript Gaussian library</a>, which implements the same algorithm.</p>
<h3 id="cumulative-distribution-function">Cumulative Distribution Function</h3>
<p>The implementation of the CDF approximation algorithm takes in a random variable $x$ as its single parameter. It depends on a helper function, the complementary error function <code>erfc</code>, which has a special symmetry allowing the function to be approximated on half of the domain $\R$:</p>
<p>$$
\operatorname{erfc}(-x) = 2 - \operatorname{erfc}(x)
$$</p>
<p>It is important to use symmetry when possible!</p>
<p>Furthermore, it has the following properties:</p>
<p>$$
\operatorname{erfc}(-\infty) = 2
$$</p>
<p>$$
\operatorname{erfc}(0) = 1
$$</p>
<p>$$
\operatorname{erfc}(\infty) = 0
$$</p>
<p>The reference implementation for the error function can be found on page 221 of <em>Numerical Recipes in C</em> (2nd edition). <a href="https://mathworld.wolfram.com/Erfc.html">This page</a> is also a helpful resource.</p>
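<p>The standard identity $\Phi(x) = \frac{1}{2}\operatorname{erfc}\left(-\frac{x}{\sqrt{2}}\right)$ ties the CDF to <code>erfc</code>. A sketch of that plumbing (the <code>erfc</code> body is stubbed out here; <code>solstat</code>'s actual routine lives in Gaussian.sol):</p>
<pre><code class="language-solidity">
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

/// Sketch of building the standard normal CDF from erfc via
/// Phi(x) = erfc(-x / sqrt(2)) / 2. The erfc helper below is a stub;
/// solstat's actual approximation lives in Gaussian.sol.
library CdfSketch {
    int256 internal constant WAD = 1e18;
    int256 internal constant SQRT2 = 1_414213562373095048; // sqrt(2) in wad

    function erfc(int256 x) internal pure returns (int256) {
        // Placeholder for a rational/continued-fraction approximation.
        x; // silence the unused-variable warning
        revert("stub");
    }

    function cdf(int256 x) internal pure returns (int256) {
        // Phi(x) = erfc(-x / sqrt(2)) / 2, all values wad-scaled.
        int256 z = (-x * WAD) / SQRT2;
        return erfc(z) / 2;
    }
}
</code></pre>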
<h3 id="probability-density-function">Probability Density Function</h3>
<p>The library also supports an approximation of the probability density function (PDF), mathematically interpreted as $Z(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{\frac{-(x - \mu)^2}{2\sigma^2}}$. This implementation has a maximum error bound of $1.2\cdot 10^{-7}$ and can be referenced <a href="https://mathworld.wolfram.com/ProbabilityDensityFunction.html">here</a>. The Gaussian PDF is even, i.e., symmetric about the $y$-axis.</p>
<h3 id="percent-point-function--quantile-function">Percent Point Function / Quantile Function</h3>
<p>Approximation algorithms for the percent point function (PPF), sometimes known as the inverse CDF or the quantile function, are also implemented. The function is mathematically defined as $D(x) = \mu - \sigma\sqrt{2}\operatorname{ierfc}(2x)$, has a maximum error of $1.2\cdot 10^{-7}$, and depends on the inverse complementary error function <code>ierfc</code>, which is defined by</p>
<p>$$
\operatorname{ierfc}(\operatorname{erfc}(x)) = \operatorname{erfc}(\operatorname{ierfc}(x))=x
$$</p>
<p>and has a domain in the interval $0 < x < 2$ along with some unique properties:</p>
<p>$$
\operatorname{ierfc}(0) = \infty
$$</p>
<p>$$
\operatorname{ierfc}(1) = 0
$$</p>
<p>$$
\operatorname{ierfc}(2) = - \infty
$$</p>
<h3 id="invariant">Invariant</h3>
<p><code>Invariant.sol</code> is a contract used to compute the invariant of the RMM-01 trading function, where $y$ is given by:</p>
<p>$$
y = K\Phi(\Phi^{-1}(1-x) - \sigma\sqrt{\tau}) + k
$$</p>
<p>This can be interpreted graphically with the following image:</p>
<p><img src="/assets/blog/solstat/rmm.png" alt=""></p>
<p>Notice the need to compute the normal CDF of a quantity. For a more detailed perspective on the trading function, take a look at the <a href="/papers/Whitepaper.pdf">RMM-01 whitepaper</a>.</p>
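<p>As a sketch, computing $y$ with <code>solstat</code> might look like the following (assuming the <code>Gaussian.cdf</code> and <code>Gaussian.ppf</code> entry points as named in the repository, with $\sigma\sqrt{\tau}$ precomputed to keep the sketch simple):</p>
<pre><code class="language-solidity">
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Import path assumes a solstat remapping.
import {Gaussian} from "solstat/Gaussian.sol";

/// Sketch of the RMM-01 trading function using solstat's Gaussian library:
///   y = K * Phi(Phi^{-1}(1 - x) - sigma * sqrt(tau)) + k
library InvariantSketch {
    int256 internal constant WAD = 1e18;

    /// All parameters wad-scaled; sigmaSqrtTau is sigma * sqrt(tau),
    /// passed in precomputed.
    function getY(int256 x, int256 K, int256 sigmaSqrtTau, int256 k)
        internal
        pure
        returns (int256)
    {
        int256 quantile = Gaussian.ppf(WAD - x);            // Phi^{-1}(1 - x)
        int256 phi = Gaussian.cdf(quantile - sigmaSqrtTau); // Phi(...)
        return (K * phi) / WAD + k;
    }
}
</code></pre>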
<h2 id="solstat-versions">Solstat Versions</h2>
<p>Solstat is one of Primitive's first contributions to improving the libraries available in the Ethereum ecosystem. Future improvements and continued maintenance are planned as new techniques emerge.</p>
<h2 id="differential-testing">Differential Testing</h2>
<p>Differential testing with Foundry was critical to the development of Solstat. Differential testing is a popular technique that seeds inputs to different implementations of the same application and detects differences in their execution. It is an excellent complement to traditional software testing, as it is well suited to detecting semantic errors. This library was differentially tested against the JavaScript <a href="https://github.com/errcw/gaussian">Gaussian library</a> to detect anomalies and bugs. Because of differential testing, we can be confident in the performance and implementation of the library.</p>
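<p>A sketch of what such a test looks like with Foundry's fuzzer and <code>vm.ffi</code> (the script path and its output encoding are hypothetical):</p>
<pre><code class="language-solidity">
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import {Test} from "forge-std/Test.sol";
import {Gaussian} from "solstat/Gaussian.sol";

/// Sketch of a Foundry differential fuzz test: the fuzzer picks x, an
/// external reference implementation computes cdf(x), and the two results
/// must agree within the approximation's error bound. The script path is
/// hypothetical and is expected to print a hex-encoded abi-encoded int256.
contract GaussianDifferentialTest is Test {
    function testDiffCdf(int64 x) public {
        string[] memory cmd = new string[](3);
        cmd[0] = "node";
        cmd[1] = "script/reference-cdf.js"; // hypothetical wrapper around errcw/gaussian
        cmd[2] = vm.toString(int256(x));

        int256 expected = abi.decode(vm.ffi(cmd), (int256));
        int256 actual = Gaussian.cdf(int256(x));

        // Tolerate error at the approximation bound (~1.2e-7 in wad terms).
        assertApproxEqAbs(actual, expected, 1.2e11);
    }
}
</code></pre>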