# Guarded Significant Figures/Precision

Arithmetic on numbers presumed significant to the last digit needs more gradually signified precision.

In a computer, n bits [binary digits] yield 2<sup>n</sup> states, but in public yield 2<sup>n+1</sup>-1 states, including those partially blank, having suppressed trailing zeroes. Decimal affords fewer public states - and is not efficiently translated to public binary. The binary representation of these public, less precise states thus efficiently requires one additional bit saved in the computer. [Precise computer representation must maintain more information on cumulated precision-spread, and thus needs many more bits - also, adjacent numbers must overlap indistinguishably by steps of σ~1 - more discussion shall be available on measurement and precision.] The public states of a binary fraction run from maximally precise to imprecise (blank) - the two least precise states indicating either no specified precision (entirely blank) or that the next higher (nonexistent) bits are fully precise: slightly oxymoronic.
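The state count 2<sup>n+1</sup>-1 can be checked with a short sketch (Python assumed; `public_states` is an illustrative name, not from the text):

```python
# Counting "public" states of an n-bit binary fraction: a displayed
# state is any bit-string of length 0..n, with the trailing positions
# left blank.  Summing 2**k over k = 0..n gives 2**(n+1) - 1.
def public_states(n: int) -> int:
    """Number of displayable states of an n-bit fraction with blanks."""
    return sum(2 ** k for k in range(n + 1))

for n in range(1, 8):
    assert public_states(n) == 2 ** (n + 1) - 1

print(public_states(4))  # 31 public states, versus 16 fully precise ones
```

The one additional bit "saved" is visible here: 31 public states need 5 bits to encode, where the 16 fully precise states need only 4.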

In precise binary fractions the lowest bit indicates publicly the progress of the higher-order bits and blanks. We choose, then: an odd binary fraction is most precise - albeit by 2's, that is, significant in the next higher bits. [For rounding purposes adjacent fractions are indistinguishable, next-adjacent are distinguishable, though this breaks transitivity: ibid.] An even binary fraction is less precise: the next higher bit specifies its precision, by 4's. [This is as gradual as possible for public display.] Thus the least significant non-zero bit (LSB) designates its precision. [This coïncides with efficiently computer-represented probabilities where +0.5 is presumed, .00005-.99995 being the range of a 4-digit probability.]
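One reading of the LSB rule can be sketched as follows (the function name and the exact step convention - twice the last set bit's unit - are our illustrative choices):

```python
# One reading of "the least significant non-zero bit designates its
# precision": strip trailing zeroes (publicly blank), then the place of
# the final 1 sets the counting step -- by 2's in its own place, i.e.
# twice that bit's unit.  `precision_step` is an illustrative name.
def precision_step(bits: str) -> float:
    """Step size implied by the last set bit of a binary fraction."""
    stripped = bits.rstrip("0")
    if not stripped:                  # entirely blank: no stated precision
        return float("inf")
    place = len(stripped)             # the last 1 is worth 2**-place
    return 2.0 ** -(place - 1)        # significant by 2's of that unit

print(precision_step("101"))   # last 1 in the 3rd place -> step 0.25
print(precision_step("1100"))  # last 1 in the 2nd place -> step 0.5
```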

Because in higher radices, e.g. decimal, the number of public states is fewer than in public binary, there is no efficacy in recovering trailing blanks differently from trailing zeroes. Thus publicly we shall henceforth suppress trailing zeroes and regard the two identically - this is merely convenient.

Now, if we were representing in hexadecimal (or octal, etc.), the assignments would be easy: the last (non-zero) hexadecimal digit indicates its own precision:
Numbers ending in:
8, which is half the higher digit's unit, are coarsely precise;
4, C, which are odd quarters, are a bit more precise;
2, 6, A, E, odd eighths, are 2 bits more precise; and,
1, 3, 5, 7, 9, B, D, F, odd sixteenths, are most precise, 3 bits;
trailing 0's being suppressed.
And a next digit of 8 gains the 4th next bit of precision, and so on.
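The hexadecimal table reduces to a trailing-zero count on the final non-zero digit. A sketch (function name illustrative; the 8-case is normalized to 1 bit so odd digits carry 4):

```python
# Last-digit precision in hexadecimal: the trailing zero bits of the
# final non-zero digit give its coarseness.  Counting the 8-case as
# 1 bit, odd digits carry 4 bits within their own place.
def hex_precision_bits(digits: str) -> int:
    """Bits of precision designated by the last non-zero hex digit."""
    d = int(digits.rstrip("0")[-1], 16)
    trailing_zero_bits = (d & -d).bit_length() - 1
    return 4 - trailing_zero_bits

print(hex_precision_bits("38"))  # 8: half the higher unit  -> 1
print(hex_precision_bits("2C"))  # C: an odd quarter        -> 2
print(hex_precision_bits("A"))   # A: an odd eighth         -> 3
print(hex_precision_bits("F0"))  # trailing 0 suppressed; F -> 4
```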

Decimal is more challenging, and we have a considerable choice:

Either by symmetric halves:
Numbers ending in 5, half the next higher unit, are coarsest, by 10's;
3, 7, symmetric-near odd quarters, are ~1 bit more precise, by 5's; and,
1, 2, 4, 6, 8, 9, symmetric-near odd tenths, are ~2.322 bits, most precise, by 2's;
trailing 0's being suppressed. [see new revision below]
[As adjacent fractions are indistinguishable, near-digits may suffice]

Or by odds:
Numbers ending in:
1, 3, 5, 7, 9, odd tenths, are most precise, by 2's;
2, 8, symmetric-near odd quarters, are ~1.322 bits less precise, by 5's;
4, 6, near the odd half, are ~2.322 bits less precise, by 10's;
trailing 0's being suppressed.
[Curiously, '4' is more multiplicative in appearance, where 3 is near ~3.162, the half-factor of ten - and '6' may be disused.]

[new revision to 'by-halves']
Retracing a step, as this is for public use: if in public representation we do not suppress trailing zeroes, we can assign:
0, 2, 4, 6, 8, the even tenths, the finest precision, by 2's, fully symmetric and evenly distributed;
3, 7, symmetric-near odd quarters, less precise, by 4-6's averaging 5's; and,
5, least precise, by 10's;
but thereby disusing 1, 9.

[new revision to 'by-odds']

Alternatively - and best: the original intent of halving to 5's was slightly to increase the presumed full precision of the next higher digit, but in fact we now see only precision by 20's - we might return to signifying 0's as the least precise; and then 1, 3, 5, 7, 9, the odd tenths, as most precise, by 2's; and 2, 4, 6, 8, as curiously overlapped, skipping 0, redundant symmetric odd quarters, intermediately precise, by 4's.
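The revised by-odds assignment, with trailing zeroes no longer suppressed so that a final 0 itself marks the least precision, reads as a classification table (the table name and class labels are our illustrations; the text gives no numeric step for the 0 class):

```python
# Revised by-odds: a final 0 is itself the least-precise marker; the
# even digits 2, 4, 6, 8 overlap curiously, skipping 0, as the
# intermediately precise class, by 4's.
REVISED_BY_ODDS = {
    0: "least precise",
    **{d: "most precise, by 2's" for d in (1, 3, 5, 7, 9)},
    **{d: "intermediate, by 4's" for d in (2, 4, 6, 8)},
}

print(REVISED_BY_ODDS[0])
print(REVISED_BY_ODDS[7])
print(REVISED_BY_ODDS[4])
```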

Arithmetic then proceeds normally, converting the end digit as needed: precision spread accumulates more rapidly for this near-binary progression than for decimal.
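Converting the end digit might be sketched as snapping a result's final digit to the nearest digit of the wanted precision class (odd tenths assumed; the downward tie-break is our choice, not specified in the text):

```python
# Sketch of converting the end digit after ordinary arithmetic: replace
# the result's final digit with the nearest digit that designates the
# desired precision class (odd tenths assumed; ties break downward --
# a choice of ours, not specified in the text).
def convert_end_digit(value: int, allowed=(1, 3, 5, 7, 9)) -> int:
    """Replace the last digit of value with the nearest allowed digit."""
    last = value % 10
    nearest = min(allowed, key=lambda d: abs(d - last))
    return value - last + nearest

print(convert_end_digit(40))  # 0 snaps to 1: 41
print(convert_end_digit(46))  # 6 ties between 5 and 7; downward: 45
```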

A premise discovery under the title,