Command Prompt Blog Feed: Justin Graf's Bloghttps://commandprompt.com/rss/author/justin/en-usTue, 13 Sep 2022 10:41:00 +0000PostgreSQL and Financial Calculations - Part Fivehttps://commandprompt.com/blog/postgresql-and-financial-calculations-part-five/<p data-block-key="9pkwa">The fifth and last in a series of blogs covering common mistakes in Database and Application designs for financial calculations. </p><h2 data-block-key="20v8g" id="method-of-rounding:">Method of Rounding: </h2><p data-block-key="95dvq">There are many methods of rounding: </p><ol><li data-block-key="69jm1">Half Round Up</li><li data-block-key="9laje">Half Round Down</li><li data-block-key="5kagp">Round Towards Zero</li><li data-block-key="fig02">Round Away from Zero</li><li data-block-key="9v7tq">Round Half To Even</li><li data-block-key="9f2c9">Round Half To Odd</li><li data-block-key="cv6op">Random Round </li></ol><p data-block-key="4f4h6"></p><p data-block-key="atduc">The built-in rounding method in PostgreSQL is Half Round Up. Unfortunately, it is not the best approach, as it is biased toward higher values. This upward bias is a well-understood problem, and it is why there are so many rounding methods to choose from. To avoid biased results, the oldest and most common alternative is Round Half To Even (commonly referred to as convergent rounding, statistician's rounding, Dutch rounding, Gaussian rounding, or banker’s rounding). 
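The upward bias is easy to demonstrate outside the database as well. Here is a short sketch of my own using Python's stdlib decimal module (an illustration, not part of the original example): round the ten exact halves 0.05 through 0.95 to one decimal place and compare the totals under each rule.

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# The ten exact halves between 0 and 1 at two decimal places: 0.05, 0.15, ... 0.95
halves = [Decimal(n) / 100 for n in range(5, 100, 10)]

exact = sum(halves)  # the true total, 5.00
half_up = sum(d.quantize(Decimal('0.1'), rounding=ROUND_HALF_UP) for d in halves)
half_even = sum(d.quantize(Decimal('0.1'), rounding=ROUND_HALF_EVEN) for d in halves)

print(exact, half_up, half_even)  # 5.00 5.5 5.0
```

Half Round Up drifts a full 0.5 high over just ten ties, while Round Half To Even lands on the true total, because its ties break up and down equally often.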
</p><p data-block-key="75lm7"></p><h2 data-block-key="bgiv2" id="consider-5:">Consider 5:</h2><p data-block-key="bjvnu"></p><pre class="code" data-block-key="2dpgj">CREATE SCHEMA ol_code;</pre><pre class="code" data-block-key="62koq">CREATE OR REPLACE FUNCTION ol_code.round(val numeric, prec integer default 0)</pre><pre class="code" data-block-key="cekbf"> RETURNS numeric</pre><pre class="code" data-block-key="615lr">LANGUAGE plpgsql</pre><pre class="code" data-block-key="fms8c">COST 1</pre><pre class="code" data-block-key="2au5j">STRICT PARALLEL SAFE</pre><pre class="code" data-block-key="8j0bm">AS $$</pre><pre class="code" data-block-key="63l5h">DECLARE</pre><pre class="code" data-block-key="bdor0">_last_digit numeric = ABS(val * (10::numeric^prec)) % 1::numeric; -- fractional part being rounded away</pre><pre class="code" data-block-key="6gb83">BEGIN</pre><pre class="code" data-block-key="5ci39">IF _last_digit = 0.5 THEN -- the value is an exact half</pre><pre class="code" data-block-key="2ignm">-- let's find out if the preceding digit is even or odd</pre><pre class="code" data-block-key="2iabk">IF TRUNC(ABS(val * (10::numeric^prec))) % 2::numeric = 0 THEN</pre><pre class="code" data-block-key="1idmk">RETURN trunc(val::numeric, prec);</pre><pre class="code" data-block-key="9p0u5">END IF;</pre><pre class="code" data-block-key="7lu35">END IF;</pre><pre class="code" data-block-key="3c8vl">IF val > 0.0 AND _last_digit >= 0.5 THEN</pre><pre class="code" data-block-key="e1fgn">RETURN trunc(val::numeric + (1 / (10::numeric^prec)), prec);</pre><pre class="code" data-block-key="1earf">ELSEIF val > 0.0 AND _last_digit < 0.5 THEN</pre><pre class="code" data-block-key="aribc">RETURN trunc(val::numeric, prec);</pre><pre class="code" data-block-key="21dj1">ELSEIF val < 0.0 AND _last_digit >= 0.5 THEN</pre><pre class="code" data-block-key="8klqq">RETURN trunc(val::numeric - (1 / (10::numeric^prec)), prec);</pre><pre class="code" data-block-key="1ailh">ELSE</pre><pre class="code" data-block-key="b4o02">RETURN trunc(val::numeric, prec);</pre><pre class="code" data-block-key="9vi1u">END IF;</pre><pre class="code" data-block-key="9m533">END;</pre><pre class="code" data-block-key="btgsh">$$;</pre><pre class="code" data-block-key="3c0v2">WITH cc AS (SELECT random()::numeric AS random FROM generate_series(0,100000))</pre><pre class="code" data-block-key="9ig68">SELECT sum(pg_catalog.round(random,2)) round_half_up,</pre><pre class="code" data-block-key="7fo5p">sum(ol_code.round(random,2)) AS round_to_even,</pre><pre class="code" data-block-key="7kimt">sum(trunc(random,3)) correct_value,</pre><pre class="code" data-block-key="75bvi">sum(ol_code.round(random,2)) - sum(trunc(random,3)) round_even_error,</pre><pre class="code" data-block-key="6kihg">sum(pg_catalog.round(random,2)) - sum(trunc(random,3)) round_up_error</pre><pre class="code" data-block-key="b3t0n">FROM cc</pre><embed alt="Figure One_JG_Blog Post Five" embedtype="image" format="fullwidth" id="845"/><p data-block-key="qe12">0.1% error for Round Half Up vs 0.0025% error for Round Half To Even </p><p data-block-key="fib0e">As the results show, the rounding method we are all taught in school biases the total to the high side compared to banker’s rounding, which is what we should all be using. </p><h2 data-block-key="drmu1" id="the-solution:">The solution:</h2><p data-block-key="137vk">To fix the rounding in PostgreSQL, we need to implement a custom rounding function and overload the default round function by setting the search path like so:</p><p data-block-key="b8m46">SET search_path TO ol_code, pg_catalog, public; </p><p data-block-key="10qip">This assumes the custom round function is named round(numeric,integer) and placed in the schema ol_code (ol_code is short for overloaded code). This schema is where I place any function that overloads the default behavior of PostgreSQL.</p><p data-block-key="3r7ub">There are two well-known standards for rounding: ASTM E29 and IEEE 754. 
Both specify the Round Half To Even method. To maintain the highest level of accuracy, Round Half Up should be replaced with Round Half To Even. </p><h2 data-block-key="bj0f9" id="closing-thoughts:">Closing Thoughts:</h2><p data-block-key="cvcto">If all the issues discussed in this series were trivial problems, we would not have numeric types, independent math libraries, international standards documents, or math papers addressing them. </p><p data-block-key="8pk8d">Rounding and precision math errors cannot be stopped, only contained and limited. It is up to us to use the appropriate tools and techniques to contain the error, limiting the havoc it will create. </p><p data-block-key="1tb9m">These are solved problems; we just need to use the solutions. </p>Tue, 13 Sep 2022 10:41:00 +0000https://commandprompt.com/blog/postgresql-and-financial-calculations-part-five/PostgreSQL and Financial Calculations - Part Fourhttps://commandprompt.com/blog/postgresql-and-financial-calculations-part-four/<p data-block-key="kzekd">The fourth in a series of blogs covering common mistakes in Database and Application designs for financial calculations. </p><h2 data-block-key="4ms0q" id="database-driver-or-application-framework-created-error:">Database Driver, or Application Framework Created Error:</h2><p data-block-key="14j71">Errors created by the database driver or application framework are probably the hardest to find: we are the consumers, not the authors, of these tools, and many assume the type casting is correct. However, we must review the documentation, or the library’s code, to know how the data type is mapped in the framework. Keep in mind that the PostgreSQL numeric type does not always have a comparable data type in many frameworks. 
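What a driver that quietly maps numeric to a 64-bit float costs you can be sketched in a few lines of Python (my illustration of the failure mode, not any particular driver's code). PostgreSQL sends a numeric over the wire as text; what the driver does with that text decides everything:

```python
from decimal import Decimal

wire_value = '0.10'  # how a numeric value arrives from PostgreSQL: as text

as_float = float(wire_value)      # a driver casting numeric to a 64-bit float
as_decimal = Decimal(wire_value)  # a driver mapping numeric to an exact type

# Ten additions are enough to expose the difference.
float_total = sum([as_float] * 10)
decimal_total = sum([as_decimal] * 10)

print(float_total)    # 0.9999999999999999
print(decimal_total)  # 1.00
```

The float has already lost the exact value at the moment of the cast, before any application math runs; re-serializing it to string on the way back to the database cannot recover it.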
</p><p data-block-key="cv8ch"><b>Frameworks and languages casting numeric to less accurate types:</b></p><p data-block-key="a71ts">JavaScript</p><ul><li data-block-key="6u8di">Numeric => 64-bit float</li><li data-block-key="4mp9p">Probably the weakest framework to use for accurate math calculations.</li><li data-block-key="26o48">In its defense, it was never designed to do the things it is asked to do today. (cough, NodeJS, cough)</li></ul><p data-block-key="55tcm">PHP </p><ul><li data-block-key="d0vjv">Numeric => 64-bit float</li><li data-block-key="ednps">Independent math libraries must be used to accurately calculate results. This creates problems sending data back to PostgreSQL, as conversion to string or float must be used in the database driver.</li></ul><p data-block-key="767oa">Ruby </p><ul><li data-block-key="f8hfk">Numeric => String</li><li data-block-key="2lrbf">Numeric => BigDecimal</li><li data-block-key="e6h9i">Ruby has a full-featured math library to accurately calculate results, but it has the same problem PHP has sending data back to PostgreSQL.</li></ul><p data-block-key="9g3fo">Go</p><ul><li data-block-key="egdcp">Numeric => unknown </li><li data-block-key="cg4ld">The drivers have no clear documentation on how numeric is being cast. Most likely it is being cast to a 64-bit float.</li><li data-block-key="7oodm">There are libraries available to accurately represent PostgreSQL numeric types and do math operations on them, but they have the same problem PHP and Ruby have. 
</li></ul><p data-block-key="eluda"><b>Frameworks and languages with accurate types for numeric:</b></p><p data-block-key="2m2o7">Python</p><ul><li data-block-key="37gcr">Numeric => Decimal </li><li data-block-key="6s7uo">Decimal is equivalent to Numeric and has an appropriate math library</li><li data-block-key="eejbn">Psycopg is a PostgreSQL-community-created driver </li></ul><p data-block-key="3dnml">.Net </p><ul><li data-block-key="56su7">Numeric => Decimal</li><li data-block-key="8ara5">Decimal is equivalent to Numeric and has an appropriate math library</li><li data-block-key="6n0sj">Npgsql is a PostgreSQL-community-created driver</li></ul><p data-block-key="2ffnp">Java</p><ul><li data-block-key="1ksce">Numeric => BigDecimal</li><li data-block-key="d2ibl">BigDecimal is equivalent to Numeric and has an appropriate math library</li><li data-block-key="66lgp">The database driver is supported by the PostgreSQL community </li></ul><p data-block-key="fsdga">C/C++</p><ul><li data-block-key="3eddl">Direct access to the libpq library and PostgreSQL numeric type</li><li data-block-key="arb3a">Direct access to math libraries to be used with the numeric type</li></ul><p data-block-key="cpvbo">As these two lists show, the application framework can add errors through type casting alone, which then affects every later calculation. </p><h2 data-block-key="bo7t9" id="the-solution:">The solution:</h2><p data-block-key="c5d1f">The application framework needs to be reviewed to make sure critical calculations are not done with floating point types. Review the database driver layer to ensure incorrect type casting is not being performed. Use the available math libraries to do all critical calculations.</p><p data-block-key="8ivsv"><b>Personal Comment</b>: It is kind of amazing to me that, with so many new frameworks and programming languages developed over the last decade, all the new toys lack accurate math libraries. 
It seems they all choose to go for speed over accuracy first, then go <i>“Whoops! We need to be accurate too!”</i> This is probably one of the reasons I see this mistake repeated so many times in application design: no one realizes how these tiny errors stack up and bite. </p><h2 data-block-key="bqm6l" id="closing-thought:">Closing Thought:</h2><p data-block-key="c2vh3">Use the appropriate data types in the database and throughout the application stack. Python, C, C++, Java, and .Net all have libraries and functions that make doing these calculations easy.</p>Mon, 05 Sep 2022 13:55:00 +0000https://commandprompt.com/blog/postgresql-and-financial-calculations-part-four/PostgreSQL and Financial Calculations - Part Threehttps://commandprompt.com/blog/postgresql-and-financial-calculations-part-three/<p data-block-key="n1nza">The third in a series of blogs covering common mistakes in Database and Application designs for financial calculations. </p><h2 data-block-key="4khuf" id="order-of-operations-and-storing-aggregate-results:">Order of Operations and Storing Aggregate Results:</h2><p data-block-key="7ai11">When working with float data types, the order of operations will affect the final value. </p><h3 data-block-key="ev3ep" id="consider-3:">Consider 3:</h3><pre class="code" data-block-key="2irkj">Python3:</pre><pre class="code" data-block-key="4f0el"><b>justin@Debian10</b>:<b>~</b>$ python3</pre><pre class="code" data-block-key="cnfkh">Python 3.7.3 (default, Jul 25 2020, 13:03:44) </pre><pre class="code" data-block-key="2ngcl">>>> (1234.567 * 3.333333) + (1.234567 * 3.333333)</pre><pre class="code" data-block-key="1vlht">4119.338144732812</pre><pre class="code" data-block-key="5e5q4">>>> (1234.567 + 1.234567 ) * 3.333333</pre><pre class="code" data-block-key="fungp">4119.338144732811</pre><p data-block-key="8dick">As the example shows, the order of operations affects the result even though the associative and distributive laws state it should not. 
The degree to which it affects the result depends on whether a 32 or 64 bit floating point type is used.</p><p data-block-key="cl75i">Many will say that it is only one digit of error; however, this happens in a single operation and should not happen at all. The error is amplified when stacking results on results.</p><p data-block-key="529m3">A real world example of this problem is cost-based accounting, recalculating the cost of items using a Weighted Average formula to calculate the average cost.</p><p data-block-key="3a63r">Formula:</p><p data-block-key="cp1a3"></p><embed alt="Figure One_Blog Three" embedtype="image" format="fullwidth" id="647"/><p data-block-key="7btkd">Where:</p><ul><li data-block-key="5u555">W = weighted average</li><li data-block-key="7to0c">n = number of terms to be averaged</li><li data-block-key="46qac">wi = weights applied to the x values</li><li data-block-key="6ri80">xi = data values to be averaged</li></ul><p data-block-key="2f16r">Or, more simply expressed, the system/application-wide unit cost is calculated as:</p><p data-block-key="2u7bt"> SystemItemUnitCost = ((QtyOnHand/(QtyOnHand + NewQty)) * CurrentCost) + ((NewQty/(QtyOnHand + NewQty)) * NewCost) </p><p data-block-key="431gb">The above formula will return the new cost of an item on a <i>per unit of measure basis</i>. The application must independently track the cost and the quantity on hand for each item. The total value of inventory is then calculated like so:</p><p data-block-key="8eq4d">QtyOnHand * SystemItemUnitCost = SystemItemTotalCost </p><p data-block-key="8pra9">However, many implementations calculate the <i>Weighted Cost</i> using an aggregate value of the Total Cost. 
This makes calculating the new weighted average easier:</p><p data-block-key="1vop7">SystemItemTotalCost = CurrentItemTotalCost + NewItemTotalCost</p><p data-block-key="bhfg1">This approach is simpler and appears to work, but errors start creeping in as the “per unit of measure cost” is recalculated at every transaction, which then updates the system total cost and quantity on hand.</p><p data-block-key="asr5s">PerUnitCost = CurrentItemTotalCost/QtyOnHand </p><p data-block-key="dcptr">SystemItemTotalCost = CurrentItemTotalCost +/- (QtyOfTransaction*PerUnitCost)</p><p data-block-key="55e5n">QtyOnHand = QtyOnHand +/- QtyOfTransaction</p><p data-block-key="7atdm">The bulk of the error comes from recalculating the PerUnitCost at every transaction, adding rounding and floating point errors to the new results. </p><h3 data-block-key="1mb7a" id="consider-4:">Consider 4:</h3><pre class="code" data-block-key="7pmbq">SELECT '1: Calculate new cost', round(((139.00/(139.00+75.00) * 1.25)) + ((75.00/(139.00+75.00))*1.35),4), round((((139.00 *1.25) +(75.00*1.35))), 4)</pre><pre class="code" data-block-key="bvt2b">UNION</pre><pre class="code" data-block-key="4r8fb">SELECT '2: do a transactions subtract 5.333, value moved', round( 1.2850 * 5.333, 4), round(275.00/(75.00+139.00) * 5.333, 4) </pre><pre class="code" data-block-key="8cfci">UNION</pre><pre class="code" data-block-key="8f96o">SELECT '3: do a transactions subtract 37.482, value moved', round( 1.2850 * 37.482, 4), round((268.1468/208.667) * 37.482, 4) </pre><pre class="code" data-block-key="88vvp">UNION </pre><pre class="code" data-block-key="d159v">SELECT '4: do a transactions subtract 68.57, value moved', round( 1.2850 * 68.57, 4), round((219.9807/171.185) *68.57, 4) </pre><pre class="code" data-block-key="6u2cs">UNION</pre><pre class="code" data-block-key="5esb1">SELECT '5: Total inventory value', round(1.2850 * 102.615,4), 130.8247</pre><pre class="code" data-block-key="b7qe3">UNION</pre><pre class="code" data-block-key="eqotu">SELECT '6: Current per unit value', 1.2850, round(130.8247/102.615,4) </pre><pre class="code" data-block-key="c1idn">ORDER BY 1</pre><p data-block-key="m22g"></p><embed alt="Figure Two_Blog Three" embedtype="image" format="fullwidth" id="648"/><p data-block-key="5cm2n">We see here how the order of operations and the storing of an aggregate value distort calculations rather quickly. With only 3 transactions, the total inventory value was distorted by 0.04 units.</p><p data-block-key="7gprc">The above calculations were done with the PostgreSQL numeric type, not a floating point type. Using a float type would make the deviation worse. If we threw in unit of measure conversions, the deviation would grow further. See Consider 2 for the effects of unit of measure conversion. </p><h2 data-block-key="fs8ro" id="the-solution:">The solution:</h2><p data-block-key="2t2nl">If the application uses aggregates in critical calculations, the quickest fix is to increase the precision of the stored values and of all calculations. This delays the stack-up error from wreaking havoc; the error cannot be removed entirely by increasing precision alone. The best approach is to keep the stored values at the lowest common level, and avoid storing or using aggregated sums or grand totals in later calculations. For example, do not store the total inventory value (Qty * UnitCost); store the quantity on hand and the unit cost in separate columns. It is best to avoid mixed precision, rounding, or simplified formulas built on calculated aggregate values. Don’t cheat or shortcut the math formulas to save processing time. CPU time is cheap today; this is not the 1970s. </p><h2 data-block-key="18m3f" id="closing-thoughts:">Closing thoughts:</h2><p data-block-key="b0h8f">Watch the structure of formulas and the order of operations. It's easy to add errors to a result that will go unnoticed for thousands of operations. 
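As a quick illustration of why call counts matter (a stdlib Python sketch of my own, not from the post), posting a one-cent transaction ten thousand times is already enough to separate float arithmetic from exact arithmetic:

```python
from decimal import Decimal

float_balance = 0.0
decimal_balance = Decimal('0.00')

# Post a one-cent transaction 10,000 times, as an application loop might.
for _ in range(10_000):
    float_balance += 0.01
    decimal_balance += Decimal('0.01')

print(float_balance)    # close to, but not exactly, 100.0
print(decimal_balance)  # 100.00
```

The float total is off by a hair after ten thousand operations; after millions of operations, and with rounding layered on top, that hair becomes money.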
Always investigate what will happen if a formula is called thousands or millions of times.</p><p data-block-key="fe4ju">Is the result correct? </p><p data-block-key="4hdog">How much error can be tolerated?</p>Tue, 30 Aug 2022 22:31:00 +0000https://commandprompt.com/blog/postgresql-and-financial-calculations-part-three/PostgreSQL and Financial Calculations - Part Twohttps://commandprompt.com/blog/postgresql-and-financial-calculations-part-two/<p data-block-key="d2r56">The second in a series of blogs covering common mistakes in Database and Application designs for financial calculations.</p><h2 data-block-key="bdh6k" id="inconsistent-precision-scaling:">Inconsistent precision scaling:</h2><p data-block-key="cqm4p">This is probably the most common mistake in database design that I observe. It is understood that exact data types (such as numeric) must be used and that the precision must be fixed, but for whatever reason the decision is made that it’s OK for one table to use numeric(12,4), a second table to use numeric(12,2), and a third to use numeric(12,6). It’s common to see mixed precision even within the same table. On the surface this does not sound bad, as numeric types are storing the results and the math being used is exact. However, it ignores the basic math rule that the least precise number sets the accuracy limit, and it invites stack-up error.</p><h3 data-block-key="61xo8" id="consider-2:">Consider 2:</h3><pre class="code" data-block-key="crgot">CREATE TABLE uom_convert(
part_number text,
uom_from text,
uom_to text,
uom_ratio numeric(14,6)
);
CREATE TABLE inventory_on_hand(
part_number text,
uom text,
qty_on_hand numeric(12,4),
cost_per_uom numeric(12,2)
);
CREATE TABLE inventory_transactions(
part_number text,
qty_moved numeric(12,4),
total_moved numeric(12,2)
);
INSERT INTO uom_convert VALUES
('gold', 'grams', 'troyoz', 0.0321507);
INSERT INTO inventory_on_hand VALUES
('gold', 'troyoz', 0, 1735.25 );
TRUNCATE inventory_transactions; -- make the example safe to re-run
INSERT INTO inventory_transactions
(SELECT 'gold', rqty * uom_ratio, --cost_per_uom,
(rqty * uom_ratio) * cost_per_uom
FROM uom_convert
CROSS JOIN (SELECT round((random()*100)::numeric,0)::numeric rqty
-- using only integer values, no fractions; fractional values make this look even worse
FROM generate_series(0,99)) dd
LEFT JOIN inventory_on_hand ON
inventory_on_hand.part_number = uom_convert.part_number
WHERE uom_convert.part_number = 'gold'
AND uom_from = 'grams'
AND uom_to = 'troyoz' );
UPDATE inventory_on_hand SET
qty_on_hand = (SELECT SUM(qty_moved)
FROM inventory_transactions
WHERE part_number = 'gold')
WHERE part_number = 'gold' ;
SELECT 'inventory_on_hand', qty_on_hand, round(qty_on_hand * cost_per_uom,2)
FROM inventory_on_hand WHERE part_number = 'gold'
UNION
SELECT 'inventory_transactions', SUM(qty_moved), SUM(total_moved)
FROM inventory_transactions WHERE part_number = 'gold';</pre><embed alt="Figure One Blog Two" embedtype="image" format="fullwidth" id="503"/><p data-block-key="6jqtd">Comparing the inventory transaction table value to the actual value, the inventory transactions are overstated by $0.26 in just 100 transactions. When you run this example you will get different values, because the random() function sets the quantity being moved. The size of the discrepancy will vary; however, the inventory_transactions value will always be overstated.</p><h2 data-block-key="bw47i" id="the-solution:">The solution:</h2><p data-block-key="d21f2">Increase the precision of the tables. I have come to like numeric(20,8). This gives us a big number: 999 trillion, with 8 digits of precision. If the local currency is something akin to Zimbabwe dollars, we will need a larger numeric value. Taking the above example, edit the tables to use numeric(20,8); the result will look something like this.</p><embed alt="Figure Two_JG Blog Two" embedtype="image" format="fullwidth" id="504"/><p data-block-key="1hfkg">Errors do not start to appear in the calculations until the 6th decimal position. Many consider this level of error contained, as it will take tens of thousands of transactions to cause stack-up error. In testing, it took a million transactions to create a 0.003 discrepancy between the two tables. 
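The same containment can be sketched outside SQL with Python's decimal module (my own illustration reusing the grams-to-troy-ounce ratio and cost from the example; the ten gram quantities are arbitrary). Line totals stored at the original numeric(12,4)/numeric(12,2) scales already disagree with numeric(20,8)-style storage after ten rows:

```python
from decimal import Decimal, ROUND_HALF_EVEN

RATIO = Decimal('0.0321507')   # grams -> troy ounces, from the example
COST = Decimal('1735.25')      # cost per troy ounce, from the example

def to_scale(value: Decimal, places: int) -> Decimal:
    """Quantize to a fixed number of decimal places, like numeric(p,s)."""
    return value.quantize(Decimal(10) ** -places, rounding=ROUND_HALF_EVEN)

grams = [Decimal(g) for g in (37, 81, 12, 95, 60, 3, 44, 78, 29, 66)]

# numeric(12,4) quantities and numeric(12,2) line totals, as in the original tables
coarse_total = sum(to_scale(to_scale(g * RATIO, 4) * COST, 2) for g in grams)

# numeric(20,8) everywhere, rounding to cents only once at the end
fine_qty = sum(to_scale(g * RATIO, 8) for g in grams)
fine_total = to_scale(fine_qty * COST, 2)

print(coarse_total, fine_total)  # 28173.69 28173.70
```

The extra digits do not remove the error; they push it far enough out that it takes vastly more transactions to reach the cents column.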
This method is sometimes called a <b><i>Guard Digit</i></b>, where math operations keep extra digits of precision to avoid truncation or rounding errors.</p><h2 data-block-key="cp7q3" id="closing-thoughts:">Closing Thoughts:</h2><p data-block-key="1oc07">Change the rounding method not just in PostgreSQL, but throughout the entire application stack, wherever calculations are done.</p>Tue, 23 Aug 2022 21:50:00 +0000https://commandprompt.com/blog/postgresql-and-financial-calculations-part-two/PostgreSQL and Financial Calculations - Part Onehttps://commandprompt.com/blog/postgresql-and-financial-calculations-part-one/<p data-block-key="gnehb">A series on the use of data types to ensure accurate financial calculations in your application.</p><p data-block-key="8er39">Over my multi-decade career, I have often noticed the problematic use of real, floating, double, and fixed precision types to store and calculate financials. Most believe the application only needs two digits to the right of the decimal point for financial data. That assumption ignores the many financial calculations that need more than two decimal places, such as unit of measure conversions or currency exchange. These transactions represent significant monetary value that cannot be captured with only two digits of precision.</p><p data-block-key="8m0u7">In this series we will go over common mistakes: what happens when you choose an incorrect data type, inconsistent precision scaling, conversion errors caused by frameworks, and common calculation mistakes within PostgreSQL.</p><h2 data-block-key="1dvu2" id="choosing-the-wrong-data-type:">Choosing the wrong data type:</h2><p data-block-key="2rt0m">One of the most common mistakes may be the most destructive: the use of variable-precision data types (such as real, float and double precision) for financial calculations. 
Many view this as a non-issue because PostgreSQL stores and returns the value exactly as it is received from the application. However, floating point types, whether 32 or 64 bit, use inexact math, trading exactness of values for speed of calculation. This trade-off introduces what appears to be a minor error; however, it stacks up into significant error. Below is an example showing what happens with different data types and variable precision when compared to the default use of the numeric type.</p><h3 data-block-key="jac6h" id="consider-1:">Consider 1:</h3><p data-block-key="24sbd">PostgreSQL SQL</p><pre class="code" data-block-key="adhmk">SELECT
(1234.567 * 3.333333) + (1.234567 * 3.333333) as Correct_Value_Numeric,
(1234.567 * 3.333333)::float(25) + (1.234567 * 3.333333)::float(25) as High,
(1234.567 + 1.234567 )::float(24) * 3.333333::float(25) as Low_Mixed_Float_Size,
(1234.567 * 3.333333)::real + (1.234567 * 3.333333)::real as Rounded_High,
(1234.567 + 1.234567 )::real * 3.333333 as Low</pre><p data-block-key="86brm"></p><embed alt="Figure One_JG_Blog One" embedtype="image" format="fullwidth" id="437"/><p data-block-key="ei3oh"></p><p data-block-key="e9d8l">With the casts in the above SQL query, PostgreSQL uses the floating point math operators, demonstrating the errors that occur. Take note of the Low_Mixed_Float_Size column and the odd results when 32 and 64 bit floating point types are mixed. This kind of data type mixing happens all the time in applications, and it increases the size of the error.</p><p data-block-key="9os39">Many will say this error is acceptable as it never impacts the hundredths position and the rounded value is unchanged. This line of thinking is erroneous and ignores stack-up. Consider what happens when buying a million widgets using the rounded price, or selling at the low price. Another counterargument is that only a few applications need this level of precision in financial calculations.</p><p data-block-key="7f4di">Let's look at a real world example of how precision and stack-up are exploited. Gasoline pumps measure hydrocarbons to the 1,000th of a gallon, but price fuel with two digits of precision. Measuring the fuel to the 1,000th position to avoid rounding benefits the seller. Consider the following scenario where we compare the transaction using 3 digits of precision vs 2 digits:</p><p data-block-key="3tet5"></p><embed alt="Figure Two_JG_Blog One" embedtype="image" format="fullwidth" id="438"/><p data-block-key="4pqtr"></p><p data-block-key="1phmp">By keeping the fuel measurement at a higher precision and leaving the price at two digits, the seller exploits the stack-up of selling a 1,000th of a gallon instead of rounding it to the same precision as the price. This allows the seller to extract a tiny fraction of money over billions of measurements and transactions.</p><h2 data-block-key="f0ck9" id="the-solution:">The solution:</h2><p data-block-key="7rg19">Be consistent with the data types. 
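The Low_Mixed_Float_Size oddity can be reproduced without a database. In this stdlib Python sketch (my illustration: Python floats are 64-bit, and struct is used to emulate 32-bit storage), one operand is forced through binary32 the way a ::real cast would be:

```python
import struct

def as_real(x: float) -> float:
    # Round-trip through IEEE 754 binary32, the storage PostgreSQL's real type uses.
    return struct.unpack('f', struct.pack('f', x))[0]

# All 64-bit math:
full = (1234.567 * 3.333333) + (1.234567 * 3.333333)

# Each product squeezed through 32-bit storage first, emulating a ::real cast:
mixed = as_real(1234.567 * 3.333333) + as_real(1.234567 * 3.333333)

print(full)
print(mixed)
print(full - mixed)  # small, but not zero: a pure casting artifact
```

Keeping every operand in a single exact type avoids these casting artifacts entirely.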
Don't mix real, double precision, and numeric types (or their equivalents in your favorite framework). Tiny errors will appear in data type casts, either as truncation or rounding artifacts.</p><h2 data-block-key="09f8v" id="closing-thoughts:">Closing Thoughts:</h2><p data-block-key="c5r6d">The single biggest improvement in accuracy for calculations is increasing the precision of the numeric type, as this delays errors creeping into the calculations.</p><p data-block-key="c1tck"></p>Wed, 17 Aug 2022 15:00:00 +0000https://commandprompt.com/blog/postgresql-and-financial-calculations-part-one/