{"id":730,"date":"2025-07-26T08:05:00","date_gmt":"2025-07-26T07:05:00","guid":{"rendered":"https:\/\/noiseonthenet.space\/noise\/?p=730"},"modified":"2025-07-26T09:50:13","modified_gmt":"2025-07-26T08:50:13","slug":"oxidizing-lagrange-polynomials-for-machine-learning","status":"publish","type":"post","link":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/","title":{"rendered":"Oxidizing Lagrange Polynomials for Machine Learning"},"content":{"rendered":"<div id=\"table-of-contents\" role=\"doc-toc\">\n<h2>Table of Contents<\/h2>\n<div id=\"text-table-of-contents\" role=\"doc-toc\">\n<ul>\n<li><a href=\"#org23b3940\">What are Lagrange Polynomials<\/a><\/li>\n<li><a href=\"#org673bb89\">Why should Lagrange Polynomials matter in Machine Learning<\/a><\/li>\n<li><a href=\"#org163b87f\">F statistics to identify chaotic behavior vs ordered behavior<\/a><\/li>\n<li><a href=\"#org863f262\">An efficent Rust implementation of Lagrange Polynomials<\/a>\n<ul>\n<li><a href=\"#org923a685\">Weights calculation<\/a><\/li>\n<li><a href=\"#orga86c94d\">Polar representation of Polynomials<\/a><\/li>\n<\/ul>\n<\/li>\n<li><a href=\"#orgaf4bc47\">Connecting with regular Polynomials in Rust<\/a><\/li>\n<li><a href=\"#org8177c30\">Conclusions<\/a><\/li>\n<\/ul>\n<\/div>\n<\/div>\n<div id=\"orgd723286\" class=\"figure\"> <p><img data-recalc-dims=\"1\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/jay-heike-eNXEbyvTzJA-unsplash.jpg?ssl=1\" alt=\"jay-heike-eNXEbyvTzJA-unsplash.jpg\" \/> <\/p> <\/div>\n\n<p> Photo by <a href=\"https:\/\/unsplash.com\/@jayrheike?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash\">Jay Heike<\/a> on <a href=\"https:\/\/unsplash.com\/photos\/gray-pulley-eNXEbyvTzJA?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash\">Unsplash<\/a> <\/p>\n\n<p> In recent years, machine learning techniques popolarized well 
known algorithms like linear regression, even if in a larger data context. Polynomial regression is one of these examples. <\/p>\n\n<p> In the past, researchers were defining formulae upfront and then regression was applied to fit model parameters. The machine learning approach in this case consists of using the data to evaluate which polynomial degree best fits the observations without overfitting. <\/p>\n\n<p> The code for this post is available <a href=\"https:\/\/github.com\/noiseOnTheNet\/post006_lagrange_polynomials\">here<\/a> and is connected with a previous post <a href=\"https:\/\/noiseonthenet.space\/noise\/?p=97\">Polynomials ring<\/a> <\/p>\n\n<p> If you see any error or would like to suggest a more efficient implementation, please let me know in the comments <\/p>\n<div id=\"outline-container-org23b3940\" class=\"outline-2\">\n<h2 id=\"org23b3940\">What are Lagrange Polynomials<\/h2>\n<div class=\"outline-text-2\" id=\"text-org23b3940\">\n<p> <a href=\"https:\/\/en.wikipedia.org\/wiki\/Lagrange_polynomial\">Lagrange polynomials<\/a> are used to find an interpolating monovariate polynomial (i.e. a polynomial of a single variable only). <\/p>\n\n<p> Given a list <i>x<\/i> of n distinct points in R (we will call them nodes from now on) and a list <i>y<\/i> of real values associated with them, it is possible to find exactly one polynomial of degree at most n &#8211; 1 passing through them. 
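<\/p>

<p> Before building the basis, the definition can be checked numerically with a direct implementation. The following is a minimal, self-contained sketch (the function name <code>lagrange_eval<\/code> is illustrative and not part of the linked repository): <\/p>

```rust
// Evaluate the unique interpolating polynomial through (nodes[i], values[i])
// at the point x, using the Lagrange basis:
//   l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
// Illustrative sketch only, not the implementation from the linked repository.
fn lagrange_eval(nodes: &[f64], values: &[f64], x: f64) -> f64 {
    nodes
        .iter()
        .zip(values)
        .enumerate()
        .map(|(i, (xi, yi))| {
            // l_i(x): equals 1 at node i and 0 at every other node
            let li: f64 = nodes
                .iter()
                .enumerate()
                .filter(|(j, _)| *j != i)
                .map(|(_, xj)| (x - xj) / (xi - xj))
                .product();
            yi * li
        })
        .sum()
}

fn main() {
    // two nodes at 0 and 1 with values 1 and 3: the interpolant is 2x + 1
    println!("{}", lagrange_eval(&[0.0, 1.0], &[1.0, 3.0], 0.5)); // prints 2
}
```

<p> This direct evaluation costs quadratic time in the number of nodes for every point; pre-computing the node-dependent factors, as done later in this post, removes the repeated work. 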
<\/p>\n\n<p> The problem is equivalent to the following linear problem <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Cleft%5B+%5Cbegin%7Bmatrix%7D+1+%26+x_1+%26+x%5E2_1+%26+%5Cdots+%26+x%5E%7Bn-1%7D_1+%5C%5C+1+%26+x_2+%26+x%5E2_2+%26+%5Cdots+%26+x%5E%7Bn-1%7D_2+%5C%5C+%5Cdots+%5C%5C+1+%26+x_n+%26+x%5E2_n+%26+%5Cdots+%26+x%5E%7Bn-1%7D_n+%5C%5C+%5Cend%7Bmatrix%7D+%5Cright%5D+%5Ctimes+%5Cleft%5B+%5Cbegin%7Bmatrix%7D+a_0+%5C%5C+a_1+%5C%5C+%5Cdots+%5C%5C+a_%7Bn-1%7D+%5C%5C+%5Cend%7Bmatrix%7D+%5Cright%5D+%3D+%5Cleft%5B+%5Cbegin%7Bmatrix%7D+y_0+%5C%5C+y_1+%5C%5C+%5Cdots+%5C%5C+y_%7Bn-1%7D+%5C%5C+%5Cend%7Bmatrix%7D+%5Cright%5D++++&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;left[ &#92;begin{matrix} 1 &amp; x_1 &amp; x^2_1 &amp; &#92;dots &amp; x^{n-1}_1 &#92;&#92; 1 &amp; x_2 &amp; x^2_2 &amp; &#92;dots &amp; x^{n-1}_2 &#92;&#92; &#92;dots &#92;&#92; 1 &amp; x_n &amp; x^2_n &amp; &#92;dots &amp; x^{n-1}_n &#92;&#92; &#92;end{matrix} &#92;right] &#92;times &#92;left[ &#92;begin{matrix} a_0 &#92;&#92; a_1 &#92;&#92; &#92;dots &#92;&#92; a_{n-1} &#92;&#92; &#92;end{matrix} &#92;right] = &#92;left[ &#92;begin{matrix} y_0 &#92;&#92; y_1 &#92;&#92; &#92;dots &#92;&#92; y_{n-1} &#92;&#92; &#92;end{matrix} &#92;right]    \" class=\"latex\" \/> <\/p> \n\n<p> The Lagrange polynomial basis (denoted here by <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=l_i%5Bx%5D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"l_i[x]\" class=\"latex\" \/>) is a way to transform the problem into an equivalent linear problem with the identity matrix, i.e. 
<\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=I+%5Ctimes+%5Cleft%5B+%5Cbegin%7Bmatrix%7D+b_0+%5C%5C+b_1+%5C%5C+%5Cdots+%5C%5C+b_%7Bn-1%7D+%5C%5C+%5Cend%7Bmatrix%7D+%5Cright%5D+%3D+%5Cleft%5B+%5Cbegin%7Bmatrix%7D+y_0+%5C%5C+y_1+%5C%5C+%5Cdots+%5C%5C+y_%7Bn-1%7D+%5C%5C+%5Cend%7Bmatrix%7D+%5Cright%5D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"I &#92;times &#92;left[ &#92;begin{matrix} b_0 &#92;&#92; b_1 &#92;&#92; &#92;dots &#92;&#92; b_{n-1} &#92;&#92; &#92;end{matrix} &#92;right] = &#92;left[ &#92;begin{matrix} y_0 &#92;&#92; y_1 &#92;&#92; &#92;dots &#92;&#92; y_{n-1} &#92;&#92; &#92;end{matrix} &#92;right] \" class=\"latex\" \/> <\/p> \n\n<p> so the solution becomes <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P%5Bx%5D+%3D+%5Csum_%7Bi%7Db_i+l_i%5Bx%5D+%3D+%5Csum_%7Bi%7Dy_i+l_i%5Bx%5D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P[x] = &#92;sum_{i}b_i l_i[x] = &#92;sum_{i}y_i l_i[x] \" class=\"latex\" \/> <\/p> \n\n<p> Each basis polynomial is designed to have roots at all nodes but one; at that node it will be equal to 1 <\/p>\n\n<p> Here is a simple example: suppose we have only two nodes, at 0 and 1: the Lagrange polynomial basis will be composed of the following: <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P_0%5Bx%5D+%3D+1+-+x+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P_0[x] = 1 - x \" class=\"latex\" \/> <\/p> \n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P_1%5Bx%5D+%3D+x+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P_1[x] = x \" class=\"latex\" \/> <\/p> \n\n<div id=\"orgf4406a7\" class=\"figure\"> <p><img data-recalc-dims=\"1\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/degree1.png?ssl=1\" 
alt=\"degree1.png\" \/> <\/p> <\/div>\n\n<p> if we have say <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=y%280%29+%3D+1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"y(0) = 1\" class=\"latex\" \/> and <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=y%281%29+%3D+3&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"y(1) = 3\" class=\"latex\" \/> the interpolating polynomial becomes <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P%5Bx%5D+%3D+1+%2A+P_0%5Bx%5D+%2B+3+%2A+P_1%5Bx%5D+%3D+%281+-+x%29+%2B+3x+%3D+2x+%2B+1+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P[x] = 1 * P_0[x] + 3 * P_1[x] = (1 - x) + 3x = 2x + 1 \" class=\"latex\" \/> <\/p> \n\n<p> In the same way if the nodes are placed in -1, 0 and 1 we have: <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P_%7B-1%7D%5Bx%5D+%3D+%5Cfrac%7B1%7D%7B2%7D+x+%28x-1%29+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P_{-1}[x] = &#92;frac{1}{2} x (x-1) \" class=\"latex\" \/> <\/p> \n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P_0%5Bx%5D+%3D+-1+%28x%2B1%29%28x-1%29+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P_0[x] = -1 (x+1)(x-1) \" class=\"latex\" \/> <\/p> \n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P_1%5Bx%5D+%3D+%5Cfrac%7B1%7D%7B2%7D+%28x%2B1%29+x+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P_1[x] = &#92;frac{1}{2} (x+1) x \" class=\"latex\" \/> <\/p> \n\n<div id=\"org12a6622\" class=\"figure\"> <p><img data-recalc-dims=\"1\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/degree2.png?ssl=1\" alt=\"degree2.png\" \/> <\/p> <\/div>\n<\/div>\n<\/div>\n<div id=\"outline-container-org673bb89\" 
class=\"outline-2\">\n<h2 id=\"org673bb89\">Why should Lagrange Polynomials matter in Machine Learning<\/h2>\n<div class=\"outline-text-2\" id=\"text-org673bb89\">\n<p> polynomial regression is a typical example of how machine learning instructed existing knowledge. <\/p>\n\n<p> While polynomial fitting is a well known application of linear regression, the introduction of concepts such overfitting, regularization, validation etc. clarified greatly how to better use these tools. <\/p>\n\n<p> As these are iterative approaches, they require a repeated evaluation of the fitting weights; thus any way to make this calculation faster can greatly impact in the overall efficiency of the process. <\/p>\n\n<p> As lagrange polynomials provide an efficient space base to perform these calculations this approach can have a great application <\/p>\n\n<p> Let&rsquo;s set an example context: suppose we have a large number of samples of the form <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%28x%2Cy%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"(x,y)\" class=\"latex\" \/> where <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=x+%5Cin+R&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"x &#92;in R\" class=\"latex\" \/> is the independent variable or feature and <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=y+%5Cin+R&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"y &#92;in R\" class=\"latex\" \/> is the dependent variable or target <\/p>\n\n<p> We may want to partition the range where <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"x\" class=\"latex\" \/> spans with <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=n&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"n\" class=\"latex\" \/> compact, connected, semiclosed and limited subsets <img decoding=\"async\" 
src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Csigma_i%2C+i+%5Cin+1..n&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;sigma_i, i &#92;in 1..n\" class=\"latex\" \/> e.g. <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Csigma_i+%3D+%28%5Cdelta+%28i-1%29+%2B+min%28x%29%2C+%5Cdelta+i+%2B+min%28x%29%5D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;sigma_i = (&#92;delta (i-1) + min(x), &#92;delta i + min(x)] \" class=\"latex\" \/> <\/p> \n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Cdelta+%3D+%5Cfrac%7Bmax%28x%29+-+min%28x%29%7D%7Bn%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;delta = &#92;frac{max(x) - min(x)}{n} \" class=\"latex\" \/> <\/p> \n\n<p> In each <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Csigma_i&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;sigma_i\" class=\"latex\" \/> we can identify a point <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Cbar%7Bx%7D_i&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;bar{x}_i\" class=\"latex\" \/> which will be used to define the Lagrange basis. 
There is no assumption about which point to choose but it would be reasonable to select one which is about &ldquo;in the middle&rdquo; of the range (or if you prefer the one which minimizes the integral of squares of the distances with all others) <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Cbar%7Bx_i%7D+%3D+argmin_%7Bx+%5Cin+%5Csigma_i%7D%5Cint_%7By+%5Cin+%5Csigma_i%7D%7B%28x-y%29%5E2%7Ddy+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;bar{x_i} = argmin_{x &#92;in &#92;sigma_i}&#92;int_{y &#92;in &#92;sigma_i}{(x-y)^2}dy \" class=\"latex\" \/> <\/p> \n\n<p> in our example that would be <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Cbar%7Bx_i%7D+%3D+min%28x%29+%2B+%5Cfrac%7B2i-1%7D%7B2%7D+%5Cdelta+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;bar{x_i} = min(x) + &#92;frac{2i-1}{2} &#92;delta \" class=\"latex\" \/> <\/p> \n\n<p> The starting value in the <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=i&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"i\" class=\"latex\" \/> -th segment for the iteration would then be the mean of all the <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=y_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"y_j\" class=\"latex\" \/> whose <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=x_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"x_j\" class=\"latex\" \/> is in <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Csigma_i&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;sigma_i\" class=\"latex\" \/> (where <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"j\" class=\"latex\" \/> is the sample id) <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" 
src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Cbar%7By_i%7D+%3D+E_%7Bx_j+%5Cin+%5Csigma_i%7D%5By_j%5D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;bar{y_i} = E_{x_j &#92;in &#92;sigma_i}[y_j] \" class=\"latex\" \/> <\/p> \n\n<p> The error evaluation would then be <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Cepsilon+%3D+%5Csum_%7Bj%7D%7Berr%28y_j-P%5Bx_j%5D%29%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;epsilon = &#92;sum_{j}{err(y_j-P[x_j])} \" class=\"latex\" \/> <\/p> \n\n<p> where the function <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=err%28r%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"err(r)\" class=\"latex\" \/> would be our error metric (<img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"r\" class=\"latex\" \/> is a residual). <\/p>\n\n<p> Let&rsquo;s assume <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=err%28r%29+%3D+r%5E2+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"err(r) = r^2 \" class=\"latex\" \/> <\/p> \n\n<p> and let&rsquo;s also assume the lagrange polynomial with the initial values <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P%5Bx%5D+%3D+%5Csum_%7Bi%7D%5Cbar%7By%7D_i+l_i%5Bx%5D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P[x] = &#92;sum_{i}&#92;bar{y}_i l_i[x] \" class=\"latex\" \/> <\/p> \n\n<p> at each iteration we can calculate <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Cfrac%7Bd%7D%7Bd%5Cbar%7By%7D_i%7D%5Cepsilon+%3D+2+%5Csum_%7Bj+%5Cin+samples%7D%7Bl_i%5Bx_j%5D%28%5Cbar%7By%7D_il_i%5Bx_j%5D-y_j%29%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" 
alt=\"&#92;frac{d}{d&#92;bar{y}_i}&#92;epsilon = 2 &#92;sum_{j &#92;in samples}{l_i[x_j](&#92;bar{y}_il_i[x_j]-y_j)} \" class=\"latex\" \/> <\/p> \n\n<p> which returns the gradient we need to step into the next iteration. <\/p>\n<\/div>\n<\/div>\n<div id=\"outline-container-org163b87f\" class=\"outline-2\">\n<h2 id=\"org163b87f\">F statistics to identify chaotic behavior vs ordered behavior<\/h2>\n<div class=\"outline-text-2\" id=\"text-org163b87f\">\n<p> We then need a metric to figure out if we have a very large cloud of points in each area (which may call for a caotic behavior) or the dependency looks quite good. <\/p>\n\n<p> One possble approach is to compare wether the variation of the input values (response value) between each section (segment of the input feature) is comparable with the variation within each section. <\/p>\n\n<p> I have been inspired by F statistics used in ANOVA: we are comparing the variation of the means of each section with the mean of the variations. <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=%5Cchi+%3D+%5Cfrac%7BE%5BVar%5BY_i%5D%5D%7D%7BVar%5BE%5BY_i%5D%5D%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"&#92;chi = &#92;frac{E[Var[Y_i]]}{Var[E[Y_i]]} \" class=\"latex\" \/> <\/p> \n\n<p> In a ordered behavior we expect the numerator to be quite a lot smaller than the denominator. <\/p>\n<\/div>\n<\/div>\n<div id=\"outline-container-org863f262\" class=\"outline-2\">\n<h2 id=\"org863f262\">An efficent Rust implementation of Lagrange Polynomials<\/h2>\n<div class=\"outline-text-2\" id=\"text-org863f262\">\n<\/div>\n<div id=\"outline-container-org923a685\" class=\"outline-3\">\n<h3 id=\"org923a685\">Weights calculation<\/h3>\n<div class=\"outline-text-3\" id=\"text-org923a685\">\n<p> Pre-calculating baricentral weights reduces the overall computation time. <\/p>\n\n<p> As these depends only on the nodes they can be stored upfront in a dedicated data structure. 
<\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=w_i+%3D+%5Cprod_%7B1+%5Cleq+j+%5Cleq+k%2C+j+%5Cneq+i%7D%5Cfrac%7B1%7D%7B%28x_i-x_j%29%7D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"w_i = &#92;prod_{1 &#92;leq j &#92;leq k, j &#92;neq i}&#92;frac{1}{(x_i-x_j)} \" class=\"latex\" \/> <\/p> \n\n<p> This is a possible implementation in Rust <\/p>\n\n<div class=\"org-src-container\">\n<label class=\"org-src-name\"><em><\/em><\/label>\n<pre class=\"src src-rust\" id=\"nil\">\n<span style=\"color: #cba6f7;\">impl<\/span> <span style=\"color: #f9e2af;\">Baricentric<\/span> <span style=\"color: #f38ba8;\">{<\/span>\n    <span style=\"color: #cba6f7;\">pub<\/span> <span style=\"color: #cba6f7;\">fn<\/span> <span style=\"color: #89b4fa;\">new<\/span><span style=\"color: #fab387;\">(<\/span><span style=\"color: #cdd6f4;\">points<\/span>: &amp; <span style=\"color: #f9e2af;\">[<\/span><span style=\"color: #f9e2af;\">f64<\/span><span style=\"color: #f9e2af;\">]<\/span><span style=\"color: #fab387;\">)<\/span> -&gt; <span style=\"color: #f9e2af;\">Baricentric<\/span> <span style=\"color: #fab387;\">{<\/span>\n        <span style=\"color: #cba6f7;\">let<\/span> <span style=\"color: #cdd6f4;\">weights<\/span>: <span style=\"color: #f9e2af;\">Vec<\/span><span style=\"color: #f9e2af;\">&lt;<\/span><span style=\"color: #f9e2af;\">f64<\/span><span style=\"color: #f9e2af;\">&gt;<\/span> = points\n            .iter<span style=\"color: #f9e2af;\">()<\/span>\n            .enumerate<span style=\"color: #f9e2af;\">()<\/span>\n            .map<span style=\"color: #f9e2af;\">(<\/span>|<span style=\"color: #a6e3a1;\">(<\/span>i, x0<span style=\"color: #a6e3a1;\">)<\/span>| <span style=\"color: #a6e3a1;\">{<\/span>\n                points\n                    .iter<span style=\"color: #f38ba8;\">()<\/span>\n                    .enumerate<span style=\"color: #f38ba8;\">()<\/span>\n                    .filter<span 
style=\"color: #f38ba8;\">(<\/span>|<span style=\"color: #fab387;\">(<\/span>j, _<span style=\"color: #fab387;\">)<\/span>| *j != i<span style=\"color: #f38ba8;\">)<\/span>\n                    .fold<span style=\"color: #f38ba8;\">(<\/span><span style=\"color: #fab387;\">1.0<\/span>, |acc, <span style=\"color: #fab387;\">(<\/span>_, x1<span style=\"color: #fab387;\">)<\/span>| acc * <span style=\"color: #fab387;\">1.0<\/span> \/ <span style=\"color: #fab387;\">(<\/span>*x0 - *x1<span style=\"color: #fab387;\">)<\/span><span style=\"color: #f38ba8;\">)<\/span>\n            <span style=\"color: #a6e3a1;\">}<\/span><span style=\"color: #f9e2af;\">)<\/span>\n            .collect<span style=\"color: #f9e2af;\">()<\/span>;\n        <span style=\"color: #f9e2af;\">Baricentric<\/span> <span style=\"color: #f9e2af;\">{<\/span>\n            <span style=\"color: #cdd6f4;\">points<\/span>: points.to_owned<span style=\"color: #a6e3a1;\">()<\/span>,\n            <span style=\"color: #cdd6f4;\">weights<\/span>: weights,\n        <span style=\"color: #f9e2af;\">}<\/span>\n    <span style=\"color: #fab387;\">}<\/span>\n<span style=\"color: #f38ba8;\">}<\/span>\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"outline-container-orga86c94d\" class=\"outline-3\">\n<h3 id=\"orga86c94d\">Polar representation of Polynomials<\/h3>\n<div class=\"outline-text-3\" id=\"text-orga86c94d\">\n<p> It is possible to represent these polynomials by only listing the poles location and their weight <\/p>\n\n<div class=\"org-src-container\">\n<label class=\"org-src-name\"><em><\/em><\/label>\n<pre class=\"src src-rust\" id=\"nil\"><span style=\"color: #cba6f7;\">pub<\/span> <span style=\"color: #cba6f7;\">struct<\/span> <span style=\"color: #f9e2af;\">PolarPoly<\/span><span style=\"color: #f38ba8;\">{<\/span>\n    <span style=\"color: #cdd6f4;\">poles<\/span>: <span style=\"color: #f9e2af;\">Vec<\/span><span style=\"color: #fab387;\">&lt;<\/span><span style=\"color: #f9e2af;\">f64<\/span><span 
style=\"color: #fab387;\">&gt;<\/span>,\n    <span style=\"color: #cdd6f4;\">weight<\/span>: <span style=\"color: #f9e2af;\">f64<\/span>\n<span style=\"color: #f38ba8;\">}<\/span>\n<\/pre>\n<\/div>\n\n<p> Then we may create them by either: <\/p>\n\n<ol class=\"org-ol\">\n<li>pass these values directly<\/li>\n<li>calculate them from a baricentric object: as this contains the information we need for all of the basis, we may want to get each vector individually: this requires an API similar to the vector get method which returns an Option<\/li>\n<\/ol>\n\n<div class=\"org-src-container\">\n<label class=\"org-src-name\"><em><\/em><\/label>\n<pre class=\"src src-rust\" id=\"nil\"><span style=\"color: #cba6f7;\">impl<\/span> <span style=\"color: #f9e2af;\">PolarPoly<\/span><span style=\"color: #f38ba8;\">{<\/span>\n    <span style=\"color: #cba6f7;\">pub<\/span> <span style=\"color: #cba6f7;\">fn<\/span> <span style=\"color: #89b4fa;\">new<\/span><span style=\"color: #fab387;\">(<\/span><span style=\"color: #cdd6f4;\">poles<\/span>: &amp; <span style=\"color: #f9e2af;\">[<\/span><span style=\"color: #f9e2af;\">f64<\/span><span style=\"color: #f9e2af;\">]<\/span>, <span style=\"color: #cdd6f4;\">weight<\/span>: <span style=\"color: #f9e2af;\">f64<\/span><span style=\"color: #fab387;\">)<\/span> -&gt; <span style=\"color: #f9e2af;\">PolarPoly<\/span><span style=\"color: #fab387;\">{<\/span>\n        <span style=\"color: #f9e2af;\">PolarPoly<\/span><span style=\"color: #f9e2af;\">{<\/span>\n            <span style=\"color: #cdd6f4;\">poles<\/span>: poles.to_owned<span style=\"color: #a6e3a1;\">()<\/span>,\n            weight\n        <span style=\"color: #f9e2af;\">}<\/span>\n    <span style=\"color: #fab387;\">}<\/span>\n    <span style=\"color: #cba6f7;\">pub<\/span> <span style=\"color: #cba6f7;\">fn<\/span> <span style=\"color: #89b4fa;\">from_baricentric<\/span><span style=\"color: #fab387;\">(<\/span><span style=\"color: #cdd6f4;\">bar<\/span>: &amp; <span 
style=\"color: #f9e2af;\">Baricentric<\/span>, <span style=\"color: #cdd6f4;\">i<\/span>: <span style=\"color: #f9e2af;\">usize<\/span><span style=\"color: #fab387;\">)<\/span> -&gt; <span style=\"color: #f9e2af;\">Option<\/span><span style=\"color: #fab387;\">&lt;<\/span><span style=\"color: #f9e2af;\">PolarPoly<\/span><span style=\"color: #fab387;\">&gt;{<\/span>\n        <span style=\"color: #cba6f7;\">let<\/span> weight=*<span style=\"color: #f9e2af;\">(<\/span>bar.weights.get<span style=\"color: #a6e3a1;\">(<\/span>i<span style=\"color: #a6e3a1;\">)<\/span><span style=\"color: #f38ba8; font-weight: bold;\">?<\/span><span style=\"color: #f9e2af;\">)<\/span>;\n        <span style=\"color: #cba6f7;\">let<\/span> poles=bar.points\n            .iter<span style=\"color: #f9e2af;\">()<\/span>\n            .enumerate<span style=\"color: #f9e2af;\">()<\/span>\n            .filter<span style=\"color: #f9e2af;\">(<\/span>|<span style=\"color: #a6e3a1;\">(<\/span>j,_<span style=\"color: #a6e3a1;\">)<\/span>|<span style=\"color: #a6e3a1;\">{<\/span>\n                *j != i\n            <span style=\"color: #a6e3a1;\">}<\/span><span style=\"color: #f9e2af;\">)<\/span>\n            .map<span style=\"color: #f9e2af;\">(<\/span>|<span style=\"color: #a6e3a1;\">(<\/span>_,<span style=\"color: #f38ba8;\">(<\/span>x, _<span style=\"color: #f38ba8;\">)<\/span><span style=\"color: #a6e3a1;\">)<\/span>|<span style=\"color: #a6e3a1;\">{<\/span>\n                *x\n            <span style=\"color: #a6e3a1;\">}<\/span><span style=\"color: #f9e2af;\">)<\/span>\n            .collect<span style=\"color: #f9e2af;\">()<\/span>;\n        <span style=\"color: #f9e2af;\">Some<\/span><span style=\"color: #f9e2af;\">(<\/span>\n            <span style=\"color: #f9e2af;\">PolarPoly<\/span><span style=\"color: #a6e3a1;\">{<\/span>\n                poles,\n                weight\n            <span style=\"color: #a6e3a1;\">}<\/span>\n        <span style=\"color: #f9e2af;\">)<\/span>\n    <span 
style=\"color: #fab387;\">}<\/span>\n<span style=\"color: #f38ba8;\">}<\/span>\n<\/pre>\n<\/div>\n\n<p> Note that each polar polynomial will have a degree equals to <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=deg%5Bl_i%5Bx%5D%5D+%3D+len%28nodes%29+-1+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"deg[l_i[x]] = len(nodes) -1 \" class=\"latex\" \/> <\/p> \n\n<p> The other operations we may want are: <\/p>\n\n<ol class=\"org-ol\">\n<li>evaluate the polynomials in a given point of its domain<\/li>\n<li>transform them into regular polynomials to acquire the whole algebraic operation set and thus calculate a linear combination of the lagrange basis: but this require some more work that is explained in the next section<\/li>\n<\/ol>\n<div class=\"org-src-container\">\n<label class=\"org-src-name\"><em><\/em><\/label>\n<pre class=\"src src-rust\" id=\"nil\"><span style=\"color: #cba6f7;\">impl<\/span> <span style=\"color: #f9e2af;\">PolarPoly<\/span><span style=\"color: #f38ba8;\">{<\/span>\n    <span style=\"color: #cba6f7;\">pub<\/span> <span style=\"color: #cba6f7;\">fn<\/span> <span style=\"color: #89b4fa;\">eval<\/span><span style=\"color: #fab387;\">(<\/span>&amp; <span style=\"color: #cba6f7;\">self<\/span>, <span style=\"color: #cdd6f4;\">x<\/span>: <span style=\"color: #f9e2af;\">f64<\/span><span style=\"color: #fab387;\">)<\/span> -&gt; <span style=\"color: #f9e2af;\">f64<\/span> <span style=\"color: #fab387;\">{<\/span>\n        <span style=\"color: #cba6f7;\">self<\/span>.weight * <span style=\"color: #cba6f7;\">self<\/span>.poles\n            .iter<span style=\"color: #f9e2af;\">()<\/span>\n            .map<span style=\"color: #f9e2af;\">(<\/span>|p| x - p <span style=\"color: #f9e2af;\">)<\/span>\n            .fold<span style=\"color: #f9e2af;\">(<\/span><span style=\"color: #fab387;\">1.0<\/span>, |acc, val| val * acc<span style=\"color: #f9e2af;\">)<\/span>\n    <span 
style=\"color: #fab387;\">}<\/span>\n    <span style=\"color: #cba6f7;\">pub<\/span> <span style=\"color: #cba6f7;\">fn<\/span> <span style=\"color: #89b4fa;\">to_poly<\/span><span style=\"color: #fab387;\">(<\/span>&amp; <span style=\"color: #cba6f7;\">self<\/span><span style=\"color: #fab387;\">)<\/span> -&gt; poly.<span style=\"color: #f9e2af;\">Poly<\/span>&lt;<span style=\"color: #f9e2af;\">f64<\/span>&gt;<span style=\"color: #fab387;\">{<\/span>\n        <span style=\"color: #f9e2af;\">todo!<\/span><span style=\"color: #f9e2af;\">(<\/span><span style=\"color: #a6e3a1;\">\"implement this stuff\"<\/span><span style=\"color: #f9e2af;\">)<\/span>;\n    <span style=\"color: #fab387;\">}<\/span>\n<span style=\"color: #f38ba8;\">}<\/span>\n<\/pre>\n<\/div>\n<p> here are two examples of output obtained plotting the basis generated for four poles. <\/p>\n\n<p> In the first example poles are not equispaced and are placed in 1, 2, 5 and 5.5 . From the grid you may notice how each element of the basis has value 1 in only one pole and 0 in all others <img data-recalc-dims=\"1\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/plot_base.png?ssl=1\" alt=\"plot_base.png\" \/> <\/p>\n\n<p> In the second example there is an equispaced grid, poles are in 1, 2, 3 and 4. 
<img data-recalc-dims=\"1\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/plot_equispaced.png?ssl=1\" alt=\"plot_equispaced.png\" \/> <\/p>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"outline-container-orgaf4bc47\" class=\"outline-2\">\n<h2 id=\"orgaf4bc47\">Connecting with regular Polynomials in Rust<\/h2>\n<div class=\"outline-text-2\" id=\"text-orgaf4bc47\">\n<p> We need an intermediate data structure to efficiently calculate the coefficients <\/p>\n\n<p> The idea is to recursively multiply all of the components of degree 1; each multiplication computes the current intermediate result times <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=x&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"x\" class=\"latex\" \/> (which just raises the power of each monomial by one) minus the pole value times the same result: <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P_%7Bi%2B1%7D%5Bx%5D+%3D+%28x+-+p_%7Bi%2B1%7D%29P_i%5Bx%5D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P_{i+1}[x] = (x - p_{i+1})P_i[x] \" class=\"latex\" \/> <\/p> \n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P_%7Bi%2B1%7D%5Bx%5D+%3D+x+P_i%5Bx%5D+-+p_%7Bi%2B1%7D+P_i%5Bx%5D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P_{i+1}[x] = x P_i[x] - p_{i+1} P_i[x] \" class=\"latex\" \/> <\/p> \n\n<p> By analogy with the binary shift operation, I call the first term &ldquo;shift&rdquo; <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P_%7Bi%2B1%7D%5Bx%5D+%3D+shift%5BP_i%5Bx%5D%5D+-+p_%7Bi%2B1%7D+P_i%5Bx%5D+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P_{i+1}[x] = shift[P_i[x]] - p_{i+1} P_i[x] \" class=\"latex\" \/> <\/p> \n\n<p> This iterative sequence can be computed within a fixed-size data structure; unfortunately 
the size is known only at runtime so we may need to use heap memory. <\/p>\n\n<p> Iterating over all poles, each iteration returns an intermediate result whose degree is increased by one: this can be simulated without moving any data, just by moving the position of the 0th-degree coefficient backward in the array. <\/p>\n\n<p> We can start with a constant polynomial equal to 1; this is equivalent to an array of length 1 with just this value: [1] <\/p>\n\n<p> In the first iteration we will get exactly the first polynomial <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P_%7B1%7D%5Bx%5D+%3D+1+%28x+-+p_1%29+%3D+x+-+p_1+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P_{1}[x] = 1 (x - p_1) = x - p_1 \" class=\"latex\" \/> <\/p> \n\n<p> this is equivalent to an array like the following [-p_1, 1] <\/p>\n\n<p> In the second iteration we have <\/p>\n\n<p style=\"text-align:center\"> <img decoding=\"async\" src=\"https:\/\/s0.wp.com\/latex.php?latex=P_%7B2%7D%5Bx%5D+%3D+%28x+-+p_1%29%28x+-+p_2%29+%3D+x%5E2+%2B+%28%28-p_1%29+%2B+%28-p_2%29%29+x+%2B+%28-p_1%29+%28-p_2%29+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002\" alt=\"P_{2}[x] = (x - p_1)(x - p_2) = x^2 + ((-p_1) + (-p_2)) x + (-p_1) (-p_2) \" class=\"latex\" \/> <\/p> \n\n<p> this is equivalent to an array like the following [(-p_1)(-p_2),-p_1-p_2, 1] etc. <\/p>\n\n<p> The leading coefficient (the last array element) is always equal to 1; the others are obtained by subtracting the pole times the next cell from the current value (which is 0 for the newly added lowest cell). <\/p>\n\n<p> Once all multiplications have been performed we can multiply all coefficients by the weight. 
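<\/p>

<p> The procedure just described can be sketched as a standalone function; names here are illustrative, and unlike the post&rsquo;s implementation the sketch shifts the vector explicitly instead of moving the position of the constant term: <\/p>

```rust
// Sketch of the "shift and subtract" expansion described above: starting from
// the constant polynomial 1, repeatedly multiply by (x - p) for each pole p,
// then scale by the weight. Coefficients are stored lowest degree first.
// Illustrative only; not the implementation from the post.
fn expand(poles: &[f64], weight: f64) -> Vec<f64> {
    let mut coeff = vec![1.0]; // the constant polynomial 1, i.e. [1]
    for &p in poles {
        // "shift": multiply the current polynomial by x
        coeff.insert(0, 0.0);
        // subtract p times the unshifted polynomial; coeff[k + 1] still holds
        // the old coefficient of degree k, and the leading 1 is untouched
        for k in 0..coeff.len() - 1 {
            coeff[k] -= p * coeff[k + 1];
        }
    }
    // finally scale every coefficient by the weight
    for c in coeff.iter_mut() {
        *c *= weight;
    }
    coeff
}

fn main() {
    // (x - 1)(x - 2) = x^2 - 3x + 2, printed lowest degree first
    println!("{:?}", expand(&[1.0, 2.0], 1.0)); // prints [2.0, -3.0, 1.0]
}
```

<p> Using <code>Vec::insert<\/code> keeps the sketch short but shifts the whole vector at every step; counting the position of the constant term backward in a pre-allocated vector avoids that cost. 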
<\/p>\n\n<p> Now our code looks like this: <\/p>\n\n<div class=\"org-src-container\">\n<label class=\"org-src-name\"><em><\/em><\/label>\n<pre class=\"src src-rust\" id=\"nil\">    <span style=\"color: #cba6f7;\">pub<\/span> <span style=\"color: #cba6f7;\">fn<\/span> <span style=\"color: #89b4fa;\">to_poly<\/span><span style=\"color: #f38ba8;\">(<\/span>&amp; <span style=\"color: #cba6f7;\">self<\/span><span style=\"color: #f38ba8;\">)<\/span> -&gt; <span style=\"color: #fab387;\">poly<\/span>::<span style=\"color: #f9e2af;\">Poly<\/span><span style=\"color: #f38ba8;\">&lt;<\/span><span style=\"color: #f9e2af;\">f64<\/span><span style=\"color: #f38ba8;\">&gt;<\/span> <span style=\"color: #f38ba8;\">{<\/span>\n        <span style=\"color: #6c7086;\">\/\/ <\/span><span style=\"color: #6c7086;\">the polynomial degree should be = to the number of poles<\/span>\n        <span style=\"color: #6c7086;\">\/\/ <\/span><span style=\"color: #6c7086;\">thus the no. of coefficients should be equal to the no. 
of poles + 1<\/span>\n        <span style=\"color: #cba6f7;\">let<\/span> <span style=\"color: #cdd6f4;\">degree<\/span> = <span style=\"color: #cba6f7;\">self<\/span>.poles.len<span style=\"color: #fab387;\">()<\/span>;\n        <span style=\"color: #cba6f7;\">let<\/span> <span style=\"color: #cba6f7;\">mut<\/span> <span style=\"color: #cdd6f4;\">coeff<\/span>: <span style=\"color: #f9e2af;\">Vec<\/span><span style=\"color: #fab387;\">&lt;<\/span><span style=\"color: #f9e2af;\">f64<\/span><span style=\"color: #fab387;\">&gt;<\/span> = <span style=\"color: #f9e2af;\">vec!<\/span><span style=\"color: #fab387;\">[<\/span><span style=\"color: #fab387;\">0.0<\/span>; <span style=\"color: #fab387;\">1<\/span> + degree<span style=\"color: #fab387;\">]<\/span>;\n\n        <span style=\"color: #6c7086;\">\/\/ <\/span><span style=\"color: #6c7086;\">the highest coefficient should be equal to 1<\/span>\n        coeff<span style=\"color: #fab387;\">[<\/span>degree<span style=\"color: #fab387;\">]<\/span> = <span style=\"color: #fab387;\">1.0<\/span>;\n\n        <span style=\"color: #6c7086;\">\/\/ <\/span><span style=\"color: #6c7086;\">after each multiplication the degree is increased by one<\/span>\n        <span style=\"color: #6c7086;\">\/\/ <\/span><span style=\"color: #6c7086;\">this can be simulated by counting each iteration backward<\/span>\n        <span style=\"color: #cba6f7;\">for<\/span> <span style=\"color: #fab387;\">(<\/span>id, pole<span style=\"color: #fab387;\">)<\/span> <span style=\"color: #cba6f7;\">in<\/span> <span style=\"color: #cba6f7;\">self<\/span>.poles.iter<span style=\"color: #fab387;\">()<\/span>.enumerate<span style=\"color: #fab387;\">()<\/span> <span style=\"color: #fab387;\">{<\/span>\n            <span style=\"color: #cba6f7;\">for<\/span> <span style=\"color: #cdd6f4;\">j<\/span> <span style=\"color: #cba6f7;\">in<\/span> <span style=\"color: #f9e2af;\">(<\/span>degree - id - <span style=\"color: #fab387;\">1<\/span><span style=\"color: 
)<\/span>..degree">
#f9e2af;\">)<\/span>..degree <span style=\"color: #f9e2af;\">{<\/span>\n                coeff<span style=\"color: #a6e3a1;\">[<\/span>j<span style=\"color: #a6e3a1;\">]<\/span> = coeff<span style=\"color: #a6e3a1;\">[<\/span>j<span style=\"color: #a6e3a1;\">]<\/span> - coeff<span style=\"color: #a6e3a1;\">[<\/span>j + <span style=\"color: #fab387;\">1<\/span><span style=\"color: #a6e3a1;\">]<\/span> * pole;\n            <span style=\"color: #f9e2af;\">}<\/span>\n        <span style=\"color: #fab387;\">}<\/span>\n\n        <span style=\"color: #6c7086;\">\/\/ <\/span><span style=\"color: #6c7086;\">finally multiply by the weight and return<\/span>\n        <span style=\"color: #fab387;\">poly<\/span>::<span style=\"color: #f9e2af;\">Poly<\/span>::new<span style=\"color: #fab387;\">(<\/span>coeff.iter<span style=\"color: #f9e2af;\">()<\/span>.map<span style=\"color: #f9e2af;\">(<\/span>|x| <span style=\"color: #cba6f7;\">self<\/span>.weight * <span style=\"color: #a6e3a1;\">(<\/span>*x<span style=\"color: #a6e3a1;\">)<\/span><span style=\"color: #f9e2af;\">)<\/span>.collect<span style=\"color: #f9e2af;\">()<\/span><span style=\"color: #fab387;\">)<\/span>\n    <span style=\"color: #f38ba8;\">}<\/span>\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"outline-container-org8177c30\" class=\"outline-2\">\n<h2 id=\"org8177c30\">Conclusions<\/h2>\n<div class=\"outline-text-2\" id=\"text-org8177c30\">\n<p> Lagrange polynomials are quite an old tool but are still interesting in modern applications. I mentioned machine learning, but another important field is Reed-Solomon error correction codes. <\/p>\n\n<p> Developing some base structures in Rust has been an interesting journey into the language and its optimization. <\/p>\n\n<p> If you read this far, well, I&rsquo;m impressed! Let me know your comments. 
<\/p>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"Lagrange polynomials are well known as an interpolation tool may be interesting for machine learning too: here is an efficient Rust implementation","protected":false},"author":1,"featured_media":727,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","inline_featured_image":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[8],"tags":[5],"class_list":["post-730","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-series","tag-rust"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Oxidizing Lagrange Polynomials for Machine Learning - Noise On The Net<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Oxidizing Lagrange Polynomials for Machine Learning - Noise On The Net\" \/>\n<meta property=\"og:description\" content=\"Lagrange polynomials are well known as an interpolation tool may be interesting for machine learning too: here is an efficient Rust implementation\" \/>\n<meta property=\"og:url\" content=\"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/\" \/>\n<meta property=\"og:site_name\" content=\"Noise On The Net\" \/>\n<meta property=\"article:published_time\" content=\"2025-07-26T07:05:00+00:00\" 
\/>\n<meta property=\"article:modified_time\" content=\"2025-07-26T08:50:13+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/jay-heike-eNXEbyvTzJA-unsplash.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"800\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"marco.p.v.vezzoli\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"marco.p.v.vezzoli\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/\"},\"author\":{\"name\":\"marco.p.v.vezzoli\",\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/#\\\/schema\\\/person\\\/88c3a70f2b23480197bc61d6e1e2e982\"},\"headline\":\"Oxidizing Lagrange Polynomials for Machine 
Learning\",\"datePublished\":\"2025-07-26T07:05:00+00:00\",\"dateModified\":\"2025-07-26T08:50:13+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/\"},\"wordCount\":1687,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/#\\\/schema\\\/person\\\/88c3a70f2b23480197bc61d6e1e2e982\"},\"image\":{\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/i0.wp.com\\\/noiseonthenet.space\\\/noise\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/jay-heike-eNXEbyvTzJA-unsplash.jpg?fit=1200%2C800&ssl=1\",\"keywords\":[\"Rust\"],\"articleSection\":[\"Series\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/\",\"url\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/\",\"name\":\"Oxidizing Lagrange Polynomials for Machine Learning - Noise On The 
Net\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/i0.wp.com\\\/noiseonthenet.space\\\/noise\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/jay-heike-eNXEbyvTzJA-unsplash.jpg?fit=1200%2C800&ssl=1\",\"datePublished\":\"2025-07-26T07:05:00+00:00\",\"dateModified\":\"2025-07-26T08:50:13+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/#primaryimage\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/noiseonthenet.space\\\/noise\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/jay-heike-eNXEbyvTzJA-unsplash.jpg?fit=1200%2C800&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/noiseonthenet.space\\\/noise\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/jay-heike-eNXEbyvTzJA-unsplash.jpg?fit=1200%2C800&ssl=1\",\"width\":1200,\"height\":800},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/2025\\\/07\\\/oxidizing-lagrange-polynomials-for-machine-learning\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Oxidizing Lagrange Polynomials for 
Machine Learning\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/#website\",\"url\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/\",\"name\":\"Noise On The Net\",\"description\":\"Sharing adventures in code\",\"publisher\":{\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/#\\\/schema\\\/person\\\/88c3a70f2b23480197bc61d6e1e2e982\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":[\"Person\",\"Organization\"],\"@id\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/#\\\/schema\\\/person\\\/88c3a70f2b23480197bc61d6e1e2e982\",\"name\":\"marco.p.v.vezzoli\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/b9d9aab1df560bc14d73b0b442198f196ce39e7c7a38df1dc22fec0b97f17da9?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/b9d9aab1df560bc14d73b0b442198f196ce39e7c7a38df1dc22fec0b97f17da9?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/b9d9aab1df560bc14d73b0b442198f196ce39e7c7a38df1dc22fec0b97f17da9?s=96&d=mm&r=g\",\"caption\":\"marco.p.v.vezzoli\"},\"logo\":{\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/b9d9aab1df560bc14d73b0b442198f196ce39e7c7a38df1dc22fec0b97f17da9?s=96&d=mm&r=g\"},\"description\":\"Self taught assembler programming at 11 on my C64 (1983). Never stopped since then -- always looking up for curious things in the software development, data science and AI. Linux and FOSS user since 1994. MSc in physics in 1996. 
Working in large semiconductor companies since 1997 (STM, Micron) developing analytics and full stack web infrastructures, microservices, ML solutions\",\"sameAs\":[\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/in\\\/marco-paolo-valerio-vezzoli-0663835\\\/\"],\"url\":\"https:\\\/\\\/noiseonthenet.space\\\/noise\\\/author\\\/marco-p-v-vezzoli\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Oxidizing Lagrange Polynomials for Machine Learning - Noise On The Net","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/","og_locale":"en_US","og_type":"article","og_title":"Oxidizing Lagrange Polynomials for Machine Learning - Noise On The Net","og_description":"Lagrange polynomials are well known as an interpolation tool may be interesting for machine learning too: here is an efficient Rust implementation","og_url":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/","og_site_name":"Noise On The Net","article_published_time":"2025-07-26T07:05:00+00:00","article_modified_time":"2025-07-26T08:50:13+00:00","og_image":[{"width":1200,"height":800,"url":"https:\/\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/jay-heike-eNXEbyvTzJA-unsplash.jpg","type":"image\/jpeg"}],"author":"marco.p.v.vezzoli","twitter_card":"summary_large_image","twitter_misc":{"Written by":"marco.p.v.vezzoli","Est. 
reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/#article","isPartOf":{"@id":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/"},"author":{"name":"marco.p.v.vezzoli","@id":"https:\/\/noiseonthenet.space\/noise\/#\/schema\/person\/88c3a70f2b23480197bc61d6e1e2e982"},"headline":"Oxidizing Lagrange Polynomials for Machine Learning","datePublished":"2025-07-26T07:05:00+00:00","dateModified":"2025-07-26T08:50:13+00:00","mainEntityOfPage":{"@id":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/"},"wordCount":1687,"commentCount":0,"publisher":{"@id":"https:\/\/noiseonthenet.space\/noise\/#\/schema\/person\/88c3a70f2b23480197bc61d6e1e2e982"},"image":{"@id":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/#primaryimage"},"thumbnailUrl":"https:\/\/i0.wp.com\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/jay-heike-eNXEbyvTzJA-unsplash.jpg?fit=1200%2C800&ssl=1","keywords":["Rust"],"articleSection":["Series"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/","url":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/","name":"Oxidizing Lagrange Polynomials for Machine Learning - Noise On The 
Net","isPartOf":{"@id":"https:\/\/noiseonthenet.space\/noise\/#website"},"primaryImageOfPage":{"@id":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/#primaryimage"},"image":{"@id":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/#primaryimage"},"thumbnailUrl":"https:\/\/i0.wp.com\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/jay-heike-eNXEbyvTzJA-unsplash.jpg?fit=1200%2C800&ssl=1","datePublished":"2025-07-26T07:05:00+00:00","dateModified":"2025-07-26T08:50:13+00:00","breadcrumb":{"@id":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/#primaryimage","url":"https:\/\/i0.wp.com\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/jay-heike-eNXEbyvTzJA-unsplash.jpg?fit=1200%2C800&ssl=1","contentUrl":"https:\/\/i0.wp.com\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/jay-heike-eNXEbyvTzJA-unsplash.jpg?fit=1200%2C800&ssl=1","width":1200,"height":800},{"@type":"BreadcrumbList","@id":"https:\/\/noiseonthenet.space\/noise\/2025\/07\/oxidizing-lagrange-polynomials-for-machine-learning\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/noiseonthenet.space\/noise\/"},{"@type":"ListItem","position":2,"name":"Oxidizing Lagrange Polynomials for Machine Learning"}]},{"@type":"WebSite","@id":"https:\/\/noiseonthenet.space\/noise\/#website","url":"https:\/\/noiseonthenet.space\/noise\/","name":"Noise On The Net","description":"Sharing adventures in 
code","publisher":{"@id":"https:\/\/noiseonthenet.space\/noise\/#\/schema\/person\/88c3a70f2b23480197bc61d6e1e2e982"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/noiseonthenet.space\/noise\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":["Person","Organization"],"@id":"https:\/\/noiseonthenet.space\/noise\/#\/schema\/person\/88c3a70f2b23480197bc61d6e1e2e982","name":"marco.p.v.vezzoli","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/b9d9aab1df560bc14d73b0b442198f196ce39e7c7a38df1dc22fec0b97f17da9?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/b9d9aab1df560bc14d73b0b442198f196ce39e7c7a38df1dc22fec0b97f17da9?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/b9d9aab1df560bc14d73b0b442198f196ce39e7c7a38df1dc22fec0b97f17da9?s=96&d=mm&r=g","caption":"marco.p.v.vezzoli"},"logo":{"@id":"https:\/\/secure.gravatar.com\/avatar\/b9d9aab1df560bc14d73b0b442198f196ce39e7c7a38df1dc22fec0b97f17da9?s=96&d=mm&r=g"},"description":"Self taught assembler programming at 11 on my C64 (1983). Never stopped since then -- always looking up for curious things in the software development, data science and AI. Linux and FOSS user since 1994. MSc in physics in 1996. 
Working in large semiconductor companies since 1997 (STM, Micron) developing analytics and full stack web infrastructures, microservices, ML solutions","sameAs":["https:\/\/noiseonthenet.space\/noise\/","https:\/\/www.linkedin.com\/in\/marco-paolo-valerio-vezzoli-0663835\/"],"url":"https:\/\/noiseonthenet.space\/noise\/author\/marco-p-v-vezzoli\/"}]}},"jetpack_featured_media_url":"https:\/\/i0.wp.com\/noiseonthenet.space\/noise\/wp-content\/uploads\/2025\/07\/jay-heike-eNXEbyvTzJA-unsplash.jpg?fit=1200%2C800&ssl=1","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/pdDUZ5-bM","jetpack-related-posts":[],"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/noiseonthenet.space\/noise\/wp-json\/wp\/v2\/posts\/730","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/noiseonthenet.space\/noise\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/noiseonthenet.space\/noise\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/noiseonthenet.space\/noise\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/noiseonthenet.space\/noise\/wp-json\/wp\/v2\/comments?post=730"}],"version-history":[{"count":6,"href":"https:\/\/noiseonthenet.space\/noise\/wp-json\/wp\/v2\/posts\/730\/revisions"}],"predecessor-version":[{"id":745,"href":"https:\/\/noiseonthenet.space\/noise\/wp-json\/wp\/v2\/posts\/730\/revisions\/745"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/noiseonthenet.space\/noise\/wp-json\/wp\/v2\/media\/727"}],"wp:attachment":[{"href":"https:\/\/noiseonthenet.space\/noise\/wp-json\/wp\/v2\/media?parent=730"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/noiseonthenet.space\/noise\/wp-json\/wp\/v2\/categories?post=730"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/noiseonthenet.space\/noise\/wp-json\/wp\/v2\/tags?post=730"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}