
The unreasonable effectiveness of quasirandom sequences


I present a new low discrepancy quasirandom sequence that offers many substantial improvements over other popular sequences such as the Sobol and Halton sequences.

Figure 1a. Comparison of the various low discrepancy quasirandom sequences. Note that the newly proposed $R$-sequence produces more evenly spaced points than any of the other methods. Furthermore, all the other methods require careful selection of basis parameters, which, if not chosen carefully, can lead to degeneracy (e.g. top right).

First published: 25th April, 2018

Last updated: 23 April, 2020

This blog post was featured on the front page of Hacker News a while back.

See here for the extensive discussion.

Topics Covered

In figure 1b, it can be seen that simple uniform random sampling of points inside a unit square exhibits clustering, and that there are also regions that contain no points at all ('white noise'). A low discrepancy quasirandom sequence is a method of constructing an (infinite) sequence of points in a deterministic manner that reduces the likelihood of clustering (discrepancy) whilst still ensuring that the entire space is uniformly covered ('blue noise samples').

That is, quasirandom sequences are useful for generating point distributions that appear less regular than lattices, but more regular than random sampling (see figure 1b). Well-known quasirandom sequences include the Halton and Sobol sequences. They play a large role in many numerical computing methods, including physics, finance and, in more recent decades, computer graphics.

Figure 1b. Comparison of a regular lattice (left) with three different quasirandom methods (middle), and a simple random distribution (right). Notice that the quasirandom distributions appear less regular than a lattice but do not have as many 'clumps' or 'gaps' as the random distribution.

The methods of creating fully deterministic low discrepancy quasirandom sequences in one dimension are extremely well studied, and essentially solved. In this post, I focus almost exclusively on open (infinite) sequences, first in one dimension and then extending to higher dimensions. The fundamental advantage of open sequences (that is, extensible in $n$) is that if the resultant error based on a finite number of terms is too large, the sequence can be extended without discarding all the previously calculated points. There are many methods of constructing open sequences. One way to categorise the different types is by the method of constructing their basis (hyper-)parameters:

  • irrational fractions: Kronecker, Richtmyer, Ramshaw
  • (co)prime numbers: Van der Corput, Halton, Faure
  • irreducible polynomials: Niederreiter
  • primitive polynomials: Sobol'

For brevity, this post mainly focuses on comparing the new additive recurrence $R$-sequence, which falls into the first category since it is a recurrence method based on irrational numbers (often called Kronecker, Weyl or Richtmyer sequences) that are rank-1 lattices, with the Halton sequence, which is based on the canonical one-dimensional van der Corput sequence. The canonical Kronecker recurrence sequence is defined as: $$R_1(\alpha): \;\; t_n = \{s_0 + n \alpha\}, \quad n=1,2,3,…$$ where $\alpha$ is any irrational number. Note that the notation $\{x\}$ indicates the fractional part of $x$. In computing, this function is more commonly expressed in the following way: $$R_1(\alpha): \;\; t_n = s_0 + n \alpha \; (\textrm{mod} \; 1), \quad n=1,2,3,…$$ For $s_0 = 0$, the first few terms of the sequence $R(\phi)$ are: $$t_n = 0.618, \; 0.236, \; 0.854, \; 0.472, \; 0.090, \; 0.708, \; 0.327, \; 0.944, \; 0.562, \; 0.180, \; 0.798, \; 0.416, \; 0.034, \; 0.652, \; 0.271, \; 0.888,…$$
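As a quick sanity check, the first few terms above can be reproduced in a couple of lines (a sketch of my own, not part of the original post):

```python
# First terms of R1(phi) with s0 = 0: t_n = {n / phi}.
phi = (1 + 5**0.5) / 2
R = [(n / phi) % 1 for n in range(1, 11)]
print([round(t, 3) for t in R])  # 0.618, 0.236, 0.854, 0.472, 0.09, ...
```

Note that using $\alpha = \phi$ or $\alpha = 1/\phi$ gives the same sequence, since $\phi - 1 = 1/\phi$.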

It is important to note that the value of $s_0$ does not affect the overall characteristics of the sequence, and in nearly all cases it is set to zero. However, especially in computing, the option of $s_0 \neq 0$ offers an additional degree of freedom that is often useful. If $s_0 \neq 0$, the sequence is often called a 'shifted lattice sequence'.

The value of \( \alpha \) that gives the lowest possible discrepancy is achieved if \( \alpha = 1/\phi \), where \( \phi \) is the golden ratio. That is, $$ \phi \equiv \frac{\sqrt{5}+1}{2} \simeq 1.61803398875… $$ It is interesting to note that there are an infinite number of other values of $\alpha$ that also achieve optimal discrepancy, and they are all related by the Möbius transformation $$ \alpha' = \frac{p\alpha+q}{r\alpha+s} \quad \textrm{for all integers} \; p,q,r,s \quad \textrm{such that} \; |ps-qr|=1. $$ We now compare this recurrence method to the well-known van der Corput reverse-radix sequences [van der Corput, 1935]. The van der Corput sequences are actually a family of sequences, each defined by a distinct hyper-parameter $b$. The first few terms of the sequence for $b=2$ are: $$t_n^{[2]} = \frac{1}{2}, \frac{1}{4}, \frac{3}{4}, \frac{1}{8}, \frac{5}{8}, \frac{3}{8}, \frac{7}{8}, \frac{1}{16}, \frac{9}{16}, \frac{5}{16}, \frac{13}{16}, \frac{3}{16}, \frac{11}{16}, \frac{7}{16}, \frac{15}{16},…$$ The following section compares the basic characteristics and effectiveness of each of these sequences. Consider the task of evaluating the definite integral $$ A = \int_0^1 f(x) \, \textrm{d}x. $$ We may approximate this by: $$ A \simeq A_n = \frac{1}{n} \sum_{i=1}^{n} f(x_i), \quad x_i \in [0,1] $$

  • If the \( \{x_i\} \) are equal to \( i/n \), this is the rectangle rule;
  • If the \( \{x_i\} \) are chosen randomly, this is the Monte Carlo method; and
  • If the \( \{x_i\} \) are elements of a low discrepancy sequence, this is the quasi-Monte Carlo method.
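These three estimators can be compared directly. The sketch below (my own, not from the post) uses the integrand $f(x) = e^{-x^2/2}$ discussed next:

```python
import math
import numpy as np

def f(x):
    return np.exp(-x**2 / 2)

n = 10_000
i = np.arange(1, n + 1)

# Rectangle rule: x_i = i/n.
rect = f(i / n).mean()

# Monte Carlo: x_i chosen uniformly at random.
mc = f(np.random.default_rng(0).random(n)).mean()

# Quasi-Monte Carlo with the R1 sequence: x_i = {i/phi}.
phi = (1 + math.sqrt(5)) / 2
qmc = f((i / phi) % 1).mean()

# Exact value for comparison: sqrt(pi/2) * erf(1/sqrt(2)).
exact = math.sqrt(math.pi / 2) * math.erf(1 / math.sqrt(2))
print(abs(rect - exact), abs(mc - exact), abs(qmc - exact))
```

At this $n$ the quasi-Monte Carlo error is typically orders of magnitude smaller than the plain Monte Carlo error, in line with the error curves below.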

The following graph shows the typical error curves \( s_n = |A-A_n| \) for approximating a definite integral associated with the function \( f(x) = \textrm{exp}(\frac{-x^2}{2}), \; x \in [0,1] \), with: (i) quasirandom points based on the additive recurrence, where \( \alpha = 1/\phi \) (blue); (ii) quasirandom points based on the van der Corput sequence (orange); (iii) randomly selected points (green); (iv) the Sobol sequence (red). It shows that for \( n=10^6 \) points, the random sampling approach results in an error of \( \simeq 10^{-4} \), the van der Corput sequence results in an error of \( \simeq 10^{-6} \), whilst the \( R(\phi) \)-sequence results in an error of \( \simeq 10^{-7} \), which is \( \sim \)10x better than the van der Corput error and \( \sim \)1000x better than (uniform) random sampling.

Figure 2. Comparison of one-dimensional numerical integration using various quasirandom Monte Carlo methods. Note smaller is better. The new $R_1$ sequence (blue) and the Sobol sequence (red) are clearly the best.

Several things from this figure are worth noting:

  • It is consistent with the knowledge that errors based on uniform random sampling asymptotically decrease by $1/\sqrt{n}$, whereas error curves based on both quasirandom sequences tend to $1/n$.
  • The results for the $R_1(\phi)$-sequence (blue) and Sobol (red) are the best.
  • It shows that the van der Corput sequence offers good, and extremely consistent, results for integration problems.
  • It shows that, for all values of $n$, the $R_1(\phi)$-sequence produces better results than the van der Corput sequence.

The new $R_1$ sequence, which is the Kronecker sequence using the golden ratio, is one of the best choices for one-dimensional quasirandom Monte Carlo (QMC) integration methods.

It should also be noted that although $\alpha = \phi$ theoretically offers the provably optimal choice, $\sqrt{2}$ is also very close to optimal, and almost any other irrational value of $\alpha$ provides excellent error curves for one-dimensional integration. For this reason, $\alpha = \sqrt{p}$ for any prime $p$ is very commonly used. Furthermore, from a computing perspective, selecting a random value in the interval $\alpha \in [0,1]$ is almost certainly going to be (within machine precision) an irrational number, and therefore a solid choice for a low discrepancy sequence. For visual clarity, the above figure does not show the results of the Niederreiter sequence, as they are virtually indistinguishable from those of the Sobol and $R$ sequences. The Niederreiter and Sobol sequences (along with their optimized parameter selection) used in this post were calculated via Mathematica using what is documented as "closed, proprietary and fully optimized generators provided in Intel's MKL library".

Most current methods of constructing higher dimension low discrepancy sequences simply combine (in a component-wise manner) $d$ one-dimensional sequences together. For brevity, this post mainly focuses on describing the Halton sequence [Halton, 1960], the Sobol sequence and the $d$-dimensional Kronecker sequence.

The Halton sequence is constructed simply by using $d$ different one-dimensional van der Corput sequences, each with a base that is relatively prime to all the others; that is, pairwise coprime. By far the most frequent selection, due to its obvious simplicity and sensibility, is the first $d$ primes. The distribution of the first 625 points defined by the (2,3)-Halton sequence is shown in figure 1. Although many two-dimensional Halton sequences are excellent low discrepancy sequences, it is also well known that many are highly problematic and do not exhibit low discrepancies. For example, figure 3 shows that the (11,13)-Halton sequence produces highly visible lines. Considerable effort has gone into methods of selecting which pairs of \( (p_1, p_2) \) are exemplary and which are problematic. This issue is even more problematic in higher dimensions.

Kronecker recurrence methods generally suffer even greater challenges when generalizing to higher dimensions. That is, although using \( \alpha = \sqrt{p} \) produces excellent one-dimensional sequences, it is very challenging to even find pairs of prime numbers to be used as the basis for the two-dimensional case that are not problematic! As a way around this, some have suggested using other well-known irrational numbers, such as \( \phi, \pi, e, … \). These produce moderately acceptable solutions but are generally not used, as they are usually not as good as a well-chosen Halton sequence. A great deal of effort has gone into addressing these issues with degeneracy.

Proposed solutions include skipping/burning and leaping/thinning. And for finite sequences, scrambling is another technique that is frequently used to overcome this issue. However, scrambling cannot be used to create an open (infinite) low discrepancy sequence.

Figure 3. The (11,13)-Halton sequence is clearly not a low discrepancy sequence (left). Nor is the (11,13)-prime-based additive recurrence sequence (middle). Some two-dimensional additive recurrence sequences that incorporate well-known irrational numbers are reasonably good (right).

Similarly, despite the generally better performance of the Sobol sequence, its complexity and, more importantly, the requirement for very careful choices of its hyperparameters make it less appealing.

Thus, reiterating, in $d$ dimensions:

  • the typical Kronecker sequences require the selection of $d$ linearly independent irrational numbers;
  • the Halton sequence requires $d$ pairwise coprime integers; and
  • the Sobol sequence requires selecting $d$ direction numbers.

The new $R_d$ sequence is the only $d$-dimensional low discrepancy quasirandom sequence that does not require any selection of basis parameters.


Generalizing the Golden Ratio

tl;dr In this section, I show how to construct a new class of $d$-dimensional open (infinite) low discrepancy sequences that do not require choosing any basis parameters, and which have outstanding low discrepancy properties.

There are many possible ways to generalize the Fibonacci sequence and/or the golden ratio. The following proposed method of generalizing the golden ratio is not new [Krcadinac, 2005]. Also, the characteristic polynomial is related to many fields of algebra, including Perron numbers and Pisot-Vijayaraghavan numbers. However, what is new is the explicit connection between this generalized form and the construction of higher-dimensional low discrepancy sequences. We define the generalized version of the golden ratio, \( \phi_d \), as the unique positive root of $x^{d+1}=x+1$. That is,

For \( d=1 \), \( \phi_1 = 1.6180339887498948482… \), which is the canonical golden ratio.

For \( d=2 \), \( \phi_2 = 1.324717957244746… \), which is often called the plastic constant, and has some beautiful properties (see also here). This value was conjectured to most likely be the optimal value for a related two-dimensional problem [Hensley, 2002].

For \( d=3 \), \( \phi_3 = 1.22074408460575947536… \).

For $d>3$, although the roots of this equation do not have a closed algebraic form, we can easily obtain a numerical approximation, either by standard methods such as Newton-Raphson, or by noting that for the following sequence, \( R_d(\phi_d) \): $$ t_0=t_1 = … = t_{d} = 1; $$ $$ t_{n+d+1} \;=\; t_{n+1}+t_n, \quad \textrm{for} \; n=1,2,3,…$$
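Both numerical routes can be sketched in a few lines (my own illustration; the function names are arbitrary):

```python
def phi_newton(d, iters=50):
    # Newton-Raphson on p(x) = x^(d+1) - x - 1, starting from x = 2.
    x = 2.0
    for _ in range(iters):
        p = x**(d + 1) - x - 1
        dp = (d + 1) * x**d - 1
        x -= p / dp
    return x

def phi_ratio(d, terms=200):
    # The ratio t_{n+1}/t_n of the delayed Fibonacci recurrence
    # t_{n+d+1} = t_{n+1} + t_n converges to phi_d (the dominant root).
    t = [1] * (d + 1)
    for _ in range(terms):
        t.append(t[-d] + t[-d - 1])
    return t[-1] / t[-2]

print(phi_newton(2))  # ~1.32471795724...
```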

This special sequence of constants, $\phi_d$, was called the 'harmonious numbers' by the architect and monk Hans van der Laan in 1928. These special values can be expressed very elegantly as follows:

$$ \phi_1 = \sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+…}}}}} $$

$$ \phi_2 = \sqrt[3]{1+\sqrt[3]{1+\sqrt[3]{1+\sqrt[3]{1+\sqrt[3]{1+…}}}}} $$

$$ \phi_3 = \sqrt[4]{1+\sqrt[4]{1+\sqrt[4]{1+\sqrt[4]{1+\sqrt[4]{1+…}}}}} $$

We also have the following very elegant property: $$ \phi_d = \lim_{n\to \infty} \;\; \frac{t_{n+1}}{t_n}. $$ This sequence, sometimes called the generalized or delayed Fibonacci sequence, has been studied quite widely [Kak 2004, Wilson 1993], and the sequence for \( d=2 \) is often called the Padovan sequence [Stewart, 1996, OEIS A000931], whilst the \( d=3 \) sequence is listed in [OEIS A079398]. As mentioned before, the main contribution of this post is to describe the explicit connection between this generalized sequence and the construction of \( d \)-dimensional low discrepancy sequences.

Main result: The following parameter-free \( d \)-dimensional open (infinite) sequence, \( R_d(\phi_d) \), has excellent low discrepancy characteristics compared to other existing methods. $$ \mathbf{t}_n = \{n \pmb{\alpha}\}, \quad n=1,2,3,… $$ $$ \textrm{where} \quad \pmb{\alpha} = \left(\frac{1}{\phi_d}, \frac{1}{\phi_d^2}, \frac{1}{\phi_d^3}, …, \frac{1}{\phi_d^d}\right), $$ $$ \textrm{and} \; \phi_d \; \textrm{is the unique positive root of} \; x^{d+1}=x+1. $$

For two dimensions, this generalized sequence for \( n=150 \) is shown in figure 1. The points are clearly more evenly distributed for the \( R_2 \)-sequence compared to the (2,3)-Halton sequence, the Kronecker sequence based on \( (\sqrt{3},\sqrt{7}) \), and the Niederreiter and Sobol sequences. (Due to the complexity of the Niederreiter and Sobol sequences, they were calculated via Mathematica using proprietary code supplied by Intel.) This type of sequence, where the basis vector $\pmb{\alpha}$ is a function of a single real value, is often called a Korobov sequence [Korobov 1959].

See figure 1 again for a comparison between various two-dimensional low discrepancy quasirandom sequences.

Code and Demonstrations

In summary, in one dimension, the pseudo-code for the $n$-th term ($n$ = 1,2,3,…) is defined as

g = 1.6180339887498948482
a1 = 1.0/g
x[n] = (0.5+a1*n) %1

In two dimensions, the pseudo-code for the $x$ and $y$ coordinates of the $n$-th term ($n$ = 1,2,3,…) is defined as

g = 1.32471795724474602596
a1 = 1.0/g
a2 = 1.0/(g*g)
x[n] = (0.5+a1*n) %1
y[n] = (0.5+a2*n) %1

In three dimensions, the pseudo-code for the $x$, $y$ and $z$ coordinates of the $n$-th term ($n$ = 1,2,3,…) is defined as

g = 1.22074408460575947536
a1 = 1.0/g
a2 = 1.0/(g*g)
a3 = 1.0/(g*g*g)
x[n] = (0.5+a1*n) %1
y[n] = (0.5+a2*n) %1
z[n] = (0.5+a3*n) %1

Template Python code. (Note that Python arrays and loops start at zero!)


import numpy as np

# Use the above nested radical formula for g = phi_d,
# or you can just hard-code it.
# phi(1) = 1.6180339887498948482
# phi(2) = 1.32471795724474602596
def phi(d):
  x = 2.0000
  for i in range(10):
    x = pow(1+x, 1/(d+1))
  return x

# Number of dimensions.
d = 2

# Number of required points.
n = 50

g = phi(d)
alpha = np.zeros(d)
for j in range(d):
  alpha[j] = pow(1/g, j+1) % 1
z = np.zeros((n, d))

# The seed can be any real number.
# The common default setting is seed = 0,
# but seed = 0.5 is often better.
seed = 0.5
for i in range(n):
  z[i] = (seed + alpha*(i+1)) % 1
print(z)

I have written the code like this to be consistent with the mathematical conventions used throughout this post. However, for reasons of programming convention and/or efficiency, there are some modifications worth considering. Firstly, as $R_2$ is an additive recurrence sequence, an alternative formulation of $z$ that does not require floating point multiplication and maintains better accuracy for very large $n$ is

 z[i+1] = (z[i]+alpha) %1 

Secondly, for those languages that allow vectorization, the fractional function code can be vectorized as follows:

for i in range(n):
  z[i] = seed + alpha*(i+1)
z = z % 1

Finally, you can replace the floating point additions with integer additions by multiplying all constants by $2^{32}$, and then modifying the frac(.) function accordingly. Here are some demonstrations, with included code, by other people based on this sequence:
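To make the integer variant concrete, here is a minimal sketch of my own (the post only describes the idea; the 32-bit scale and function name are my assumptions):

```python
# R2 with 32-bit fixed-point arithmetic: alpha is pre-scaled by 2^32 and
# each new point needs only integer additions modulo 2^32.
SCALE = 2**32
g = 1.32471795724474602596
a1 = int(SCALE / g)
a2 = int(SCALE / (g * g))

def r2_fixed_point(n, seed=0.5):
    sx = sy = int(seed * SCALE)
    pts = []
    for _ in range(n):
        sx = (sx + a1) % SCALE  # integer addition, no float round-off drift
        sy = (sy + a2) % SCALE
        pts.append((sx / SCALE, sy / SCALE))
    return pts
```

The division by SCALE is only needed when a floating point coordinate is actually consumed.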

Minimum Packing Distance

The new $R_2$-sequence is the only two-dimensional low discrepancy quasirandom sequence where the minimum packing distance falls only by $1/\sqrt{n}$.

Although the standard technical analysis of quantifying discrepancy is via the \( d^* \)-discrepancy, we first mention a few other geometric (and possibly more intuitive!) ways in which this new sequence is preferable to other current methods. If we denote the distance between points \( i \) and \( j \) as \( d_{ij} \), and \( d_0 = \textrm{inf} \; d_{ij} \), then the following graph shows how \( d_0(n) \) varies for the \( R \)-sequence, the (2,3)-Halton, Sobol, Niederreiter and random sequences. This can be seen in figure 4 below.

As in the previous figure, each minimum distance measure is normalized by a factor of \( 1/\sqrt{n} \). One can see that after \( n=300 \) points, it is almost certain that for the random distribution (green) there will be two points that are extremely close to each other. It can also be seen that although the (2,3)-Halton sequence (orange) is much better than random sampling, it too unfortunately asymptotically tends to zero. The reason the normalized $d_0$ goes to zero for the Sobol sequence is that Sobol himself showed that the Sobol sequence falls at a rate of $1/n$, which is good, but obviously much worse than for $R_2$, which falls only by $1/\sqrt{n}$.

For the \( R(\phi_2) \) sequence (blue), the minimum distance between two points consistently falls between $0.549/\sqrt{n}$ and $0.868/\sqrt{n}$. Note that the optimal diameter of 0.868 corresponds to a packing fraction of 59.2%. Compare this to other circle packings.
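This claim is easy to probe empirically. The following sketch (mine, using a brute-force O(n²) distance check) computes the normalized minimum pairwise distance for the first 1000 points:

```python
import numpy as np

def r2_points(n, seed=0.5):
    # First n points of the R2 sequence.
    g = 1.32471795724474602596
    alpha = np.array([1/g, 1/g**2])
    return (seed + alpha * np.arange(1, n + 1)[:, None]) % 1

def min_pairwise_distance(pts):
    # Brute-force O(n^2) minimum distance; fine for a demonstration.
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff**2).sum(-1))
    dist[np.arange(len(pts)), np.arange(len(pts))] = np.inf
    return dist.min()

n = 1000
d0 = min_pairwise_distance(r2_points(n)) * np.sqrt(n)
print(d0)  # expected to lie roughly within [0.549, 0.868]
```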

Also note that Bridson Poisson disc sampling, which is not extensible in \( n \), at its typically recommended setting still only produces a packing fraction of 49.4%. Note that this definition of \( d_0 \) intimately connects the \( \phi_d \) low discrepancy sequences with badly approximable numbers/vectors in \( d \) dimensions [Hensley, 2001]. Although a little is known about badly approximable numbers in two dimensions, the construction of \( \phi_d \) may offer a new window of research for badly approximable numbers in higher dimensions.

Figure 4. Minimum pairwise distance for various low discrepancy sequences. Note that the $R_2$-sequence (blue) is consistently the best option, and it is the only sequence where the normalized distance does not tend to zero as \( n \rightarrow \infty \). The Halton sequence (orange) is next best, with the Sobol (green) and Niederreiter (red) sequences not as good but still much better than random (purple). Note that bigger is better, as it corresponds to a larger packing distance.

Voronoi Diagrams

Another way of visualizing the evenness of a point distribution is to create a Voronoi diagram based on the first $n$ points of a two-dimensional sequence, and then colour each region based on its area. The figure below shows the colour-based Voronoi diagrams for (i) the $R_2$-sequence; (ii) the (2,3)-Halton sequence; (iii) prime-based recurrence; and (iv) simple random sampling. The same colour scale is used for all figures. Again, it is clear that the $R_2$-sequence offers a far more even distribution than the Halton sequence or simple random sampling. Figure 5b is similar, but coloured according to the number of vertices of each Voronoi cell. Not only is it clear that the $R$-sequence offers a far more even distribution, but what is more striking is that for key values of $n$ it consists only of hexagons! Consider the generalised Fibonacci sequence $A_1=A_2=A_3=1; \quad A_{n+3} = A_{n+1}+A_{n}$. That is, $A_n$: $$\begin{array}{r} 1& 1& 1& 2& 2& 3& 4& 5& 7\\ 9& \textbf{12}& 16& 21& 28& 37& \textbf{49}& 65& 86\\ 114& \textbf{151}& 200& 265& 351& 465& \textbf{616}& 816& 1081 \\ 1432& \textbf{1897}& 2513& 3329& 4410& 5842& \textbf{7739}& 10252& 13581\\ 17991& \textbf{23833}& 31572& 41824& 55405& 73396& \textbf{97229}& 128801& 170625\\ 226030& \textbf{299426}& 396655& 525456& 696081& 922111& \textbf{1221537}& 1618192& 2143648 \end{array}$$ Then all values where $n = A_{9k-2}$ or $n = A_{9k+2}$ will consist only of hexagons.
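A short sketch of my own generating $A_n$ and extracting the hexagon-only values of $n$ mentioned above:

```python
# Generalised Fibonacci sequence A_1=A_2=A_3=1, A_{n+3} = A_{n+1} + A_n,
# and the terms A_{9k-2}, A_{9k+2} for which the text states the Voronoi
# mesh consists only of hexagons.
def gen_fib(count):
    a = [1, 1, 1]
    while len(a) < count:
        a.append(a[-2] + a[-3])
    return a

A = gen_fib(30)
# 1-based indices 9k-2 and 9k+2 (k >= 1): 7, 11, 16, 20, 25, 29, ...
hex_n = [A[i - 1] for i in range(4, 31) if i % 9 in (7, 2)]
print(hex_n)  # [4, 12, 49, 151, 616, 1897]
```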

Figure 5a. Visualization of the Voronoi diagrams based on the area of each Voronoi polygon for (i) the $R_2$-sequence; (ii) the (2,3) prime-based recurrence; (iii) the (2,3)-Halton sequence; (iv) Niederreiter; (v) Sobol; and (vi) simple random sampling. The colours represent the area of each Voronoi polygon. Again, it is clear that the \( R(\phi) \)-sequence offers a far more even distribution than any of the other low discrepancy sequences.

For particular values of $n$, the Voronoi mesh of the $R_2$ sequence consists only of hexagons.

Figure 5b. Visualization of the Voronoi diagrams based on the number of sides of each Voronoi polygon for (i) the $R_2$-sequence; (ii) the (2,3) prime-based recurrence; (iii) the (2,3)-Halton sequence; (iv) Niederreiter; (v) Sobol; and (vi) simple random sampling. The colours represent the number of sides of each Voronoi polygon. Again, it is clear that the \( R(\phi) \)-sequence offers a far more even distribution than any of the other low discrepancy sequences.


Quasiperiodic Delaunay Tiling of the plane

The $R$-sequence is the only low discrepancy quasirandom sequence that can be used to produce $d$-dimensional quasiperiodic tilings via its Delaunay mesh.

Delaunay triangulation (sometimes spelt 'Delone triangulation'), which is the dual of the Voronoi graph, offers another way of viewing these distributions. More importantly, though, Delaunay triangulation offers a new method of creating quasiperiodic tilings of the plane. The Delaunay triangulation of the $R_2$-sequence offers a far more regular pattern than that of the Halton or random sequences. More specifically, for point distributions where $n$ is equal to any term of the generalised Fibonacci sequence, $A_n = 1,1,1,2,2,3,4,5,7,9,12,16,21,28,37,…$, the Delaunay triangulation consists of only 3 identically paired triangles, that is, parallelograms (rhomboids)! (Excepting those triangles that have a vertex in common with the convex hull.) Furthermore,

For values of $n=A_k$, the Delaunay triangulations of the $R_2$-sequence form quasiperiodic tilings that each consist of only three base triangles (red, yellow, blue), which are always paired so as to form a well-defined quasiperiodic tiling of the plane by three parallelograms (rhomboids).

Figure 6. Visualization of the Delaunay triangulation based on (i) the \( R(\phi_2) \)-sequence; (ii) the (2,3)-Halton sequence; (iii) prime-based recurrence; and (iv) simple random sampling. The colours represent the area of each triangle. The same scale is used in all four diagrams. Again, it is clear that the \( R(\phi_2) \)-sequence offers a much more even distribution than any of the other low discrepancy sequences.

Note that $R_2$ is based on $\phi_2=1.32471795724474602596$, which is the smallest Pisot number (whilst $\phi = 1.61803…$ is the smallest accumulation point of the Pisot numbers). The association of quasiperiodic tilings with quadratic and cubic Pisot numbers is not new [Elkharrat and also Masakova], but I believe that this is the first time a quasiperiodic tiling has been constructed based on $\phi_2 = 1.324719…$.

(Note: a post titled "Shattering the plane with twelve new substitution tilings" [Pegg Jr., 2019] is possibly related to this $R_2$ tiling, but I would probably need a separate post to explore the possible connections.)

The animation below shows how the Delaunay mesh of the $R_2$ sequence changes as points are successively added. Note that whenever the number of points equals a term in the generalised Fibonacci sequence, the entire Delaunay mesh consists only of red, blue and yellow parallelograms (rhomboids), arranged in a 2-fold quasiperiodic manner.

Figure 7.

Although the areas of the red parallelograms show substantial regularity, one can clearly see that the blue and yellow parallelograms are spaced in a quasiperiodic manner. The Fourier spectrum of this lattice can be seen in figure 11, and shows the classic point-based spectra. (Note that the prime-based recurrence sequence also appears to be quasiperiodic in the weak sense that it is an ordered pattern that is not repeating. However, its pattern over a range of $n$ is not as consistent, and it also depends critically on the selection of basis parameters. For this reason, we focus our interest in quasiperiodic tilings solely on the $R_2$ sequence.) The tiling consists of only three triangles: red, yellow, blue. Note that for this \( R(\phi_2) \) sequence, all the parallelograms of each colour are the exact same size and shape. The ratio of the areas of these individual triangles is extremely elegant. Namely, $$ \textrm{Area(red)} : \textrm{Area(yellow)} : \textrm{Area(blue)} = 1 : \phi_2 : \phi_2^2 $$ And so is the relative frequency of the triangles, which is: $$ f(\textrm{red}) : f(\textrm{yellow}) : f(\textrm{blue}) = 1 : \phi_2 : 1 $$ From this, it follows that the total relative area covered by each of the three triangle types is: $$ \textrm{Area(red)} : \textrm{Area(yellow)} : \textrm{Area(blue)} = 1 : \phi_2^2 : \phi_2^2 $$ One would also presume that we can construct this quasiperiodic tiling via a substitution based on the $A$ series. That is, $$ A \rightarrow B; \quad B \rightarrow C; \quad C \rightarrow BA. $$ For three dimensions, if we consider the generalised Fibonacci sequence $B_1=B_2=B_3=B_4=1; \quad B_{n+4} = B_{n+1}+B_{n}$, that is, $$ B_n = 1,1,1,1,2,2,2,3,4,4,5,7,8,9,12,15,17,21,27,32,38,48,59,70,86,107,129,… $$

For particular values of $n=B_k$, the 3D Delaunay mesh associated with the $R_3$-sequence defines a quasiperiodic crystal lattice.

Discretised Packing, part 2

In the following figure, the first \( n=2500 \) points for each two-dimensional low discrepancy sequence are shown. Furthermore, each of the 50x50=2500 cells is coloured green only if that cell contains exactly one point. That is, more green squares indicates a more even distribution of the 2500 points across the 2500 cells. The percentage of green cells for each of these figures is: \( R_2 \) (75%), Halton (54%), Kronecker (48%), Niederreiter (54%), Sobol (49%) and random (38%).
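The green-cell measure is simple to reproduce. Here is a sketch of my own comparing $R_2$ with uniform random sampling (the exact percentages depend on the seed used):

```python
import numpy as np

def one_point_cell_fraction(pts, m=50):
    # Fraction of the m x m cells that contain exactly one point.
    cells = np.minimum((pts * m).astype(int), m - 1)
    counts = np.zeros((m, m), dtype=int)
    for cx, cy in cells:
        counts[cx, cy] += 1
    return (counts == 1).mean()

g = 1.32471795724474602596
alpha = np.array([1/g, 1/g**2])
r2 = (0.5 + alpha * np.arange(1, 2501)[:, None]) % 1

rng = np.random.default_rng(0)
rand = rng.random((2500, 2))

f_r2 = one_point_cell_fraction(r2)
f_rand = one_point_cell_fraction(rand)
print(f_r2, f_rand)
```

For random points the expected fraction is \( e^{-1} \simeq 0.37 \) (a Poisson count with mean 1 per cell), matching the 38% quoted above.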


Multiclass Low Discrepancy Sets

Some low discrepancy sequences exhibit what can be termed 'multiclass low discrepancy'. So far, we have assumed that when we want to distribute $n$ points as evenly as possible, all the points are identical and indistinguishable. However, in many situations there are different types of points. Consider the problem of evenly distributing the $n$ points such that not only are all the points evenly separated, but also all points of the same class are evenly distributed. Specifically, suppose that there are $n_k$ points of type $k$ (where $n_1+n_2+n_3 +… +n_k= n$); then a multiclass low discrepancy distribution is one where each of the $n_k$ point sets is evenly distributed. In this case, we find that the $R$-sequence and the Halton sequence can easily be adapted into multiclass low discrepancy sequences, simply by consecutively allocating the points of each type.

The figure below shows how $n=150$ points were distributed such that 75 are blue, 40 are orange, 25 are green and 10 are red. For the additive recurrence sequence, this is trivially achieved by simply setting the first 75 terms to correspond to blue, the next 40 to orange, the next 25 to green and the final 10 to red. This approach works nearly as well for the Halton and Kronecker sequences, but fares very poorly for the Niederreiter and Sobol sequences. Furthermore, there are no known techniques for consistently generating multiclass point distributions for the Niederreiter or Sobol sequences. This shows that multiclass point distributions, such as those found in the eyes of chickens, can now be described and constructed directly via low discrepancy sequences.
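For the $R_2$ sequence the multiclass construction is just consecutive allocation, as this sketch of my own shows:

```python
import numpy as np

# n = 150 R2 points, allocated consecutively to four classes.
g = 1.32471795724474602596
alpha = np.array([1/g, 1/g**2])
pts = (0.5 + alpha * np.arange(1, 151)[:, None]) % 1

sizes = {'blue': 75, 'orange': 40, 'green': 25, 'red': 10}
classes, start = {}, 0
for name, k in sizes.items():
    classes[name] = pts[start:start + k]  # each slice is itself low discrepancy
    start += k
```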

The $R_2$ sequence is a low discrepancy quasirandom sequence that admits a simple construction of multiclass low discrepancy.

Figure 9. Multiclass low discrepancy sequences. For the $R$ sequence, not only are all the points evenly distributed, but also the points of each particular colour are evenly distributed.

Quasirandom Parts on a sphere

It is very common in the fields of computer graphics and physics to need to distribute points on the surface of a sphere as evenly as possible. Using open (infinite) quasirandom sequences, this task simply requires mapping (lifting) the quasirandom points that are evenly distributed in the unit square onto the surface of a sphere via a Lambert equal-area projection. The standard Lambert projection that maps a point $(u,v) \in [0,1]^2$ to $(x,y,z) \in S^2$ is: $$ (x,y,z) = (\cos \lambda \cos \phi, \cos \lambda \sin \phi, \sin \lambda), $$ $$ \textrm{where} \quad \cos (\lambda -\pi/2) = 2u-1; \quad \phi = 2\pi v. $$ As this $\phi_2$-sequence is fully open, it allows you to map an infinite sequence of points onto the surface of a sphere, one point at a time. This is in contrast to other existing methods such as the Fibonacci spiral lattice, which requires knowing the number of points in advance. Again, by visual inspection, we can clearly see that for $n=1200$ the new $R(\phi_2)$-sequence is far more even than the Halton mapping or Kronecker sampling, which in turn are far more even than random sampling.

Figure 10.
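The lifting described above can be sketched as follows. This is a minimal illustration under the stated projection formulas; the `r2_sequence` helper and its seed are assumptions, not part of the original post.

```python
import numpy as np

PHI2 = 1.32471795724474602596  # plastic number: real root of x^3 = x + 1

def r2_sequence(n, seed=0.5):
    """First n points of the R2 sequence in the unit square."""
    alpha = np.array([1.0 / PHI2, 1.0 / PHI2**2])
    i = np.arange(1, n + 1).reshape(-1, 1)
    return (seed + i * alpha) % 1.0

def lambert_sphere(uv):
    """Lambert equal-area projection of the unit square onto the unit sphere."""
    u, v = uv[:, 0], uv[:, 1]
    # cos(lam - pi/2) = sin(lam) = 2u - 1
    lam = np.arcsin(2.0 * u - 1.0)
    phi = 2.0 * np.pi * v
    return np.column_stack((np.cos(lam) * np.cos(phi),
                            np.cos(lam) * np.sin(phi),
                            np.sin(lam)))

pts = lambert_sphere(r2_sequence(1200))
```

Because the sequence is open, more points can be appended at any time without recomputing the existing ones.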


Dithering in computer graphics

Most state-of-the-art dithering techniques (such as those based on Floyd-Steinberg) rely on error diffusion, which is not well suited to parallel processing and/or direct GPU optimisation. In these situations, point-based dithering with static dither masks (i.e. fully independent of the target image) offers excellent performance characteristics. Perhaps the best-known, and still widely used, dither masks are based on Bayer matrices; however, newer ones attempt to more directly mimic blue noise characteristics. The non-trivial challenge in creating dither masks based on low discrepancy sequences and/or blue noise is that low discrepancy sequences map an integer $\mathbb{Z}$ to a two-dimensional point in the interval $[0,1)^2$, whereas a dither mask requires a function that maps the two-dimensional integer coordinates of the rastered mask to a real-valued intensity/threshold in the interval $[0,1)$.

I propose the following approach, which is based on the $R$-sequence. For each pixel $(x,y)$ in the mask, we set its intensity to be $I(x,y)$, where: $$ I(x,y) = \alpha_1 x + \alpha_2 y \; (\textrm{mod} \; 1); $$ $$ \textrm{where} \quad \pmb{\alpha} = (\alpha_1,\alpha_2) = \left( \frac{1}{\phi_2}, \frac{1}{\phi_2^2} \right), $$ $$ \textrm{and} \; \phi_2 \; \textrm{is the unique positive root of } x^3 = x+1. $$ That is, $x = 1.324717957\ldots$, and thus $$ \alpha_1 = 0.7548776662\ldots; \quad \alpha_2 = 0.56984029\ldots $$ Furthermore, if an additional triangular wave function is included in order to eliminate the discontinuity caused by the frac(.) function at each integer boundary: $$ T(z) = \begin{cases} 2z, & \text{if } 0 \leq z < 1/2 \\ 2-2z, & \text{if } 1/2 \leq z < 1 \end{cases} $$ $$ I(x,y) = T \left[ \alpha_1 x + \alpha_2 y \; (\textrm{mod} \; 1) \right], $$ then the mask and its Fourier transform/periodogram are improved even further. Additionally, we note that because $$ \lim_{n \rightarrow \infty} \frac{A_n}{A_{n+1}} = 0.754878\ldots; \quad \lim_{n \rightarrow \infty} \frac{A_n}{A_{n+2}} = 0.56984\ldots, $$ the form of the above expression is related to the following linear congruential equation: $$ A_n x + A_{n+1} y \; (\textrm{mod} \; A_{n+2}) \quad \textrm{for integers } x,y. $$
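The mask above is simple enough to compute on the fly. A minimal sketch, assuming the constants quoted in the post (the function name and grid layout are my own):

```python
import numpy as np

PHI2 = 1.32471795724474602596  # plastic number: real root of x^3 = x + 1
A1, A2 = 1.0 / PHI2, 1.0 / PHI2**2

def r2_dither_mask(width, height, triangular=True):
    """R2 dither mask I(x, y) over an integer pixel grid, values in [0, 1]."""
    x = np.arange(width)
    y = np.arange(height).reshape(-1, 1)
    z = (A1 * x + A2 * y) % 1.0  # two multiplications, one mod-1 per pixel
    if triangular:
        # Triangular wave T(z) removes the frac() discontinuity at integer boundaries
        z = np.where(z < 0.5, 2.0 * z, 2.0 - 2.0 * z)
    return z

mask = r2_dither_mask(256, 256)
```

To dither an image, each pixel is simply compared against the mask value at the same coordinates, so the mask never needs to be precomputed or stored.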

The R-dither masks produce results that are competitive with state-of-the-art blue noise masks. But unlike blue noise masks, they do not need to be pre-computed, as they can be calculated in real time.

Note that this construction has also been suggested by Mittring, though he finds the coefficients empirically (and does not quote the final values). Furthermore, it helps explain why the empirical formula by Jorge Jimenez when making "Call of Duty", often termed "Interleaved Gradient Noise", works so well: $$ I(x,y) = \textrm{FractionalPart}[52.9829189 \times \textrm{FractionalPart}[0.06711056 x + 0.00583715 y]] $$ However, this requires 3 floating point multiplications and two mod-1 operators, whereas the formula above shows that we can do it with only 2 floating point multiplications and a single mod-1 operation. More importantly, this post offers a firmer mathematical understanding of why a dither mask of this form is so effective, if not optimal. The results of this dither matrix are shown below for the classic 256×256 "Lena" test image, as well as a checkered test pattern. Results using the standard Bayer dither mask are also shown, along with one based on blue noise. The two most common methods for generating blue noise are void-and-cluster and Poisson disc sampling; for brevity I have only included results using the void-and-cluster method [Peters]. The interleaved gradient noise works better than Bayer and blue noise, but not quite as well as the $R$-dither. One can see that the Bayer dither exhibits noticeable white dissonance in the light gray areas. The $R$-sequence and blue noise dithers are mostly similar, although small differences can be discerned. A few things to note about the R-dither:

  • It is not isotropic! The Fourier spectrum shows only distinct and discrete points. This is the classic signature of quasiperiodic tilings and of the diffraction spectra of quasicrystals. In particular, the Fourier spectrum of the $R$-mask is consistent with the fact that the $R$-sequence is a linear recurrence.
  • The R-dither, when combined with a triangular wave, produces an extremely uniform mask!
  • Despite the differing visual appearances of the two R-dither masks, there is almost no difference in the final dithered results.
  • Looking at Lena's lips and shoulders, one could argue that the R-dither produces clearer results than the blue noise mask. This is even more noticeable when using 512×512 dithering matrices (not shown).
  • The intensity $I(x,y)$ is intrinsically a real-valued quantity, and so the mask naturally scales to arbitrary bit depths.
Figure 11a. From left to right: (i) raw image; (ii) R-sequence composited with the triangular wave function; (iii) R-sequence alone; (iv) blue noise dither mask; and (v) standard Bayer. The R-sequence dither mask is competitive against other state-of-the-art masks. Note how R2 shows better quality on Lena's face and shoulders. Additionally, unlike blue noise masks, the R-dither mask is extremely simple: it does not require pre-computing.
Figure 11b. From left to right: (i) Floyd-Steinberg error diffusion; (ii) $R_2$-sequence dither mask; (iii) blue noise dither mask. Error diffusion is clearly the best for Lena's image, but note that the R-sequence does not suffer from the noticeable lag artifacts that can be seen in the lighter squares of the test pattern. The $R_2$ pattern is also sharper than the blue noise mask.

Similar to the previous section, but for five (5) dimensions, the graph below shows the (global) minimum distance between any two points, for the $R(\phi_5)$-sequence, the (2,3,5,7,11)-Halton sequence, and random sequences. This time, the minimum distance measure is normalized by a factor of $1/\sqrt[5]{n}$. One can see that, as a consequence of 'the curse of dimensionality', the random distribution is better than all the low discrepancy sequences, with the exception of the $R_5$-sequence. For the $R(\phi_5)$-sequence, even for $n \simeq 10^6$ points, the minimum distance between two points is still consistently around $0.8/\sqrt[5]{n}$, and always above $0.631/\sqrt[5]{n}$.
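This packing measure is easy to check empirically. Below is a minimal sketch of the normalized minimum-distance computation for the $R_5$-sequence; the root-finding iteration, the brute-force distance routine, and the sample size are my own choices, not taken from the post.

```python
import numpy as np

def gamma(d):
    """Unique positive root of x**(d+1) = x + 1, via fixed-point iteration."""
    x = 2.0
    for _ in range(50):
        x = (1.0 + x) ** (1.0 / (d + 1))
    return x

def rd_sequence(n, d, seed=0.5):
    """First n points of the R_d sequence in the d-dimensional unit cube."""
    alpha = (1.0 / gamma(d)) ** np.arange(1, d + 1)
    i = np.arange(1, n + 1).reshape(-1, 1)
    return (seed + i * alpha) % 1.0

def min_pairwise_distance(pts):
    """Smallest Euclidean distance between any two points (O(n^2) memory)."""
    sq = (pts ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * pts @ pts.T, 0.0)
    np.fill_diagonal(d2, np.inf)
    return np.sqrt(d2.min())

n, d = 1000, 5
normalized = min_pairwise_distance(rd_sequence(n, d)) * n ** (1.0 / d)
```

The claim in the text corresponds to `normalized` hovering near 0.8 as $n$ grows, rather than decaying to zero as it does for random points.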

The $R_d$ sequence is the best $d$-dimensional low discrepancy sequence, whose packing distance falls only at a rate of $n^{-1/d}$.

Figure 12. This shows that the R-sequence (blue) is consistently better than the Halton (orange), Sobol (green), Niederreiter (red), and random (purple) sequences. Note that bigger is better, as it corresponds to a larger packing distance.

Numerical Integration

The following graph shows the typical error curves $s_n = |A-A_n|$ for approximating a definite integral associated with a Gaussian function of half-width $\sigma = \sqrt{d}$, $f(x) = \textrm{exp}(\frac{-x^2}{2d}), \; x \in [0,1]^d$, with: (i) $R_{\phi}$ (blue); (ii) the Halton sequence (orange); (iii) random sampling (green); (iv) Sobol (red). It shows that for $n=10^6$ points, the differential between random sampling and the Halton sequence is somewhat smaller now. However, as was seen in the 1-dimensional case, the $R$-sequence and the Sobol sequence are consistently better than the Halton sequence. It also suggests that the Sobol sequence is marginally better than the $R$-sequence.

Figure 13. Quasirandom Monte Carlo methods for 8-dimensional integration. Note that smaller is better. The new R-sequence and Sobol sequences perform consistently better than the Halton sequence.
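A minimal sketch of such a quasirandom Monte Carlo estimate, using the $R_d$ recurrence in $d=8$ dimensions for the Gaussian integrand described above. The root-finding iteration, sample size and helper names are my own assumptions, not the author's code.

```python
import numpy as np

def gamma(d):
    """Unique positive root of x**(d+1) = x + 1, via fixed-point iteration."""
    x = 2.0
    for _ in range(50):
        x = (1.0 + x) ** (1.0 / (d + 1))
    return x

def rd_sequence(n, d, seed=0.5):
    """First n points of the R_d sequence in the d-dimensional unit cube."""
    alpha = (1.0 / gamma(d)) ** np.arange(1, d + 1)
    i = np.arange(1, n + 1).reshape(-1, 1)
    return (seed + i * alpha) % 1.0

def integrand(x):
    """f(x) = exp(-|x|^2 / (2d)) over the unit cube, with sigma = sqrt(d)."""
    d = x.shape[1]
    return np.exp(-np.sum(x ** 2, axis=1) / (2.0 * d))

d, n = 8, 100000
# QMC estimate of the integral is just the mean of f over the sample points
estimate = integrand(rd_sequence(n, d)).mean()
```

The same loop with `np.random.rand(n, d)` in place of `rd_sequence` gives the plain Monte Carlo baseline, whose error decays at only $O(n^{-1/2})$.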

My name is Dr Martin Roberts, and I am a freelance Principal Data Science consultant who loves working at the intersection of maths and computing.



"I transform and modernize organizations through innovative data strategies and solutions."

You can contact me through any of these channels.

LinkedIn: https://www.linkedin.com/in/martinroberts/

Twitter: @TechSparx  https://twitter.com/TechSparx

email: Martin (at) RobertsAnalytics (dot) com

More details about me can be found here.
