Hi, I'm trying to investigate a weird mathematical problem I've come across using G++. I only have G++ on SuSE 9.1, and so I'm unable to test the code on other systems, kernels etc. I would be grateful if someone could compile and run this code and let me have the results:
#include <iostream>
#include <math.h>

int main() {
    float l = 45.4567;
    float d = floorf(l);
    float m = floorf((l - d) * 60.);
    float s = (((l - d) * 60.) - m) * 60.;

    std::cout << "l = " << l << std::endl;
    std::cout << "d = " << d << std::endl;
    std::cout << "l-d = " << l - d << std::endl;
    std::cout << "(l-d) * 60.0 = " << (l - d) * 60. << std::endl;
    std::cout << "m = " << m << std::endl;
    std::cout << "s = " << s << std::endl;

    float r = (s / 3600.) + (m / 60.) + d;
    std::cout << "reconstructed= " << r << std::endl;

    return 0;
}
Basically, I'm getting an odd value printed for l - d (it comes out as 0.456699 and not 0.4567 as expected).
Many thanks,
Stuart.
float l = 45.4567;
float d = floorf(l);
std::cout << "l-d = " << l - d << std::endl;

Basically, I'm getting an odd value printed for l - d (it comes out as 0.456699 and not 0.4567 as expected).
"float" is not that accurate. You might have more luck with "double".
Good luck! Tim.
On Saturday 30 October 2004 13:59, Tim Green wrote:
float l = 45.4567;
float d = floorf(l);
std::cout << "l-d = " << l - d << std::endl;

Basically, I'm getting an odd value printed for l - d (it comes out as 0.456699 and not 0.4567 as expected).

"float" is not that accurate. You might have more luck with "double".
Good luck! Tim.
Neither float nor double can truly represent some numbers. Just as 1/3 or 1/7 cannot be expressed exactly in decimal, some numbers - e.g. 0.1 - do not have an exact binary representation. Hence the tendency for the result to come back as a recurring decimal. Your numbers should never have gone to binary in the first place.
The solution depends on the environment. If you are dealing with financial quantities and never need to go better than one penny precision, you might consider using a large integer representing pennies, and adjust the position of the decimal point before printing. This doesn't completely do away with the problem but makes it acceptable in that context. Financial software rarely uses binary floating point; more commonly it uses a decimal form. In Java we have classes like BigDecimal for this purpose; no doubt there are C++ libraries to do a similar job.
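A sketch of that large-integer approach in C++ (the type and helper names here are illustrative, not from any real library, and it handles non-negative amounts only):

```cpp
#include <cstdint>
#include <iomanip>
#include <sstream>
#include <string>

// Money held as an exact count of pennies in a 64-bit integer.
using Pennies = std::int64_t;

// All arithmetic stays in integers, so no binary rounding creeps in.
Pennies add(Pennies a, Pennies b) { return a + b; }

// Place the decimal point only at the moment of printing.
std::string format_pounds(Pennies p) {
    std::ostringstream out;
    out << p / 100 << '.' << std::setw(2) << std::setfill('0') << p % 100;
    return out.str();
}
```

With this, add(1099, 1) gives exactly 1100 pennies, which prints as 11.00 rather than something like 10.999999.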
Actually it's quite a minefield. BigDecimal has seven different rounding options, which is perhaps why my BT bill often computes the VAT to be a penny less than that given by my accounts program.
Interestingly (to me at least), in the days when we programmed everything in assembler, 8-bit microprocessors had a set of BCD (Binary Coded Decimal) instructions that allowed computations to mirror what we expect in real life. They were put there for use by pocket calculators, which is why these machines give the *right* answers and recur a third rather than a tenth. I expect the instructions are still there, but I doubt a C compiler makes use of them.
-- GT
On Sat, 30 Oct 2004 14:23:01 +0100, Graham gt@pobox.com wrote:
Neither float nor double can truly represent some numbers. Just as 1/3 or 1/7 cannot be expressed exactly in decimal, some numbers - e.g. 0.1 - do not have an exact binary representation. Hence the tendency for the result to come back as a recurring decimal.
I didn't want to disillusion him so soon.
Actually it's quite a minefield. BigDecimal has seven different rounding options, which is perhaps why my BT bill often computes the VAT to be a penny less than that given by my accounts program.
BT seem to calculate to 1/10 of a penny, then always round down to the whole penny for the final figure. Can you imagine the complaints if BT rounded up? (Extra £10,000 revenue per million customers)
On Saturday 30 October 2004 2:45 pm, Tim Green wrote:
BT seem to calculate to 1/10 of a penny, then always round down to the whole penny for the final figure. Can you imagine the complaints if BT rounded up? (Extra £10,000 revenue per million customers)
I thought standard financial practice was to calculate to 1/10 of a penny, round down if less than .5p, and round up if .5p or higher?
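That half-up rule is simple to state in integer arithmetic if the running total is kept in tenths of a penny (illustrative only; as the thread shows, real billing rules vary):

```cpp
#include <cstdint>

// Round a non-negative amount held in tenths of a penny to whole
// pennies: below .5p rounds down, .5p and above rounds up.
std::int64_t to_pennies_half_up(std::int64_t tenths) {
    return (tenths + 5) / 10;
}
```

So 123.4p (1234 tenths) rounds to 123p, while 123.5p rounds to 124p.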
On Sat, 30 Oct 2004 18:21:52 +0100, Wayne Stallwood aluglist@digimatic.plus.com wrote:
I thought standard financial practice was to calculate to 1/10 of a penny, round down if less than .5p, and round up if .5p or higher?
My former mole in Inland Revenue says they always rounded in favour of the "customer". Which is nice.
On 30-Oct-04 Graham wrote:
On Saturday 30 October 2004 13:59, Tim Green wrote:
float l = 45.4567;
float d = floorf(l);
std::cout << "l-d = " << l - d << std::endl;

Basically, I'm getting an odd value printed for l - d (it comes out as 0.456699 and not 0.4567 as expected).

"float" is not that accurate. You might have more luck with "double".
Good luck! Tim.
Neither float nor double can truly represent some numbers. Just as 1/3 or 1/7 cannot be expressed exactly in decimal, some numbers
- e.g. 0.1 - do not have an exact binary representation. Hence
the tendency for the result to come back as a recurring decimal.
Very true!
Your numbers should never have gone to binary in the first place.
Contentious!
[...] Interestingly (to me at least), in the days when we programmed everything in assembler, 8-bit microprocessors had a set of BCD (Binary Coded Decimal) instructions that allowed computations to mirror what we expect in real life. They were put there for use by pocket calculators, which is why these machines give the *right* answers and recur a third rather than a tenth. I expect the instructions are still there, but I doubt a C compiler makes use of them.
Which just goes to show that what you gain on the swings you lose on the roundabouts.
The majority of numerical programs work to a fixed precision in one base or another. Fortran (and C and C++ ... and most numerical software which depends on their in-built arithmetic) work to a finite number of binary digits (depending on the type of the variable). However, you could program a BCD calculator in C, though you would not be directly using the math library routines.
As pointed out, 0.1 does not have an exact binary representation to any number of digits. So things get a bit ragged with several decimal places. On the other hand, any inverse power of 2 (1/2^k) is exactly stored so long as k does not exceed the length of the mantissa (in bits) in the number storage. Because 2 is prime, only divisions by powers of 2 can give exact results.

The BCD representation is exact for decimal fractions 1/10^k so long as k does not exceed the number of 4-bit blocks available in number storage. Because 10 has the prime factors 2 and 5, divisions by products of powers of 2 and 5 can give exact results.
In both cases, divisions by other numbers (1/3, 1/7, 1/9) will always give inexact results.
There are, however, programs which can allow you an arbitrary degree of precision. One such (and it should be on any Linux system) is the program 'bc' (an early version of which was included as a C coding example in Kernighan and Ritchie's book):
"man bc" ->
    bc(1)
    NAME
        bc - An arbitrary precision calculator language
It is quite fully programmable (loops, conditionals and stuff), so if you really want to break out of the limited precision of standard language it is worth considering. However, it has only a very limited repertoire of mathematical functions.
Example (pi to 1000 decimal places; note that the "atan" function a() is the only way to tell bc about pi):
$ bc -l
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
scale=1000
pi=4*a(1)
print pi
3.141592653589793238462643383279502884197169399375105820974944592307\
81640628620899862803482534211706798214808651328230664709384460955058\
22317253594081284811174502841027019385211055596446229489549303819644\
28810975665933446128475648233786783165271201909145648566923460348610\
45432664821339360726024914127372458700660631558817488152092096282925\
40917153643678925903600113305305488204665213841469519415116094330572\
70365759591953092186117381932611793105118548074462379962749567351885\
75272489122793818301194912983367336244065664308602139494639522473719\
07021798609437027705392171762931767523846748184676694051320005681271\
45263560827785771342757789609173637178721468440901224953430146549585\
37105079227968925892354201995611212902196086403441815981362977477130\
99605187072113499999983729780499510597317328160963185950244594553469\
08302642522308253344685035261931188171010003137838752886587533208381\
42061717766914730359825349042875546873115956286388235378759375195778\
18577805321712268066130019278766111959092164201988
quit
$
Ted.
--------------------------------------------------------------------
E-Mail: (Ted Harding) Ted.Harding@nessie.mcc.ac.uk
Fax-to-email: +44 (0)870 094 0861 [NB: New number!]
Date: 30-Oct-04  Time: 15:11:06
------------------------------ XFMail ------------------------------
On 2004-10-30 14:23:01 +0100 Graham gt@pobox.com wrote:
Neither float nor double can truly represent some numbers. Just as 1/3 or 1/7 cannot be expressed exactly in decimal, some numbers - e.g. 0.1 - do not have an exact binary representation. [...]
One reason I find Scheme very useful is that it includes a numeric tower and the concepts of exact and inexact representations. You can read about this in the Revised^5 Report on Scheme at http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_...
I think pretty much the same reasoning works in C-family languages, but you usually have to cast around explicitly a lot more and keep track of exactness yourself.
Graham wrote:
Interestingly (to me at least), in the days when we programmed everything in assembler, 8-bit microprocessors had a set of BCD (Binary Coded Decimal) instructions that allowed computations to mirror what we expect in real life. They were put there for use by pocket calculators, which is why these machines give the *right* answers and recur a third rather than a tenth.
Do you have any reference to this use of BCD? I only know of the use of BCD instructions with integer values, where their purpose is somewhat different (in particular the ease with which BCD can be displayed on 7-segment displays with very little processing capability, compared with plain binary values).
On 31-Oct-04 Mark Rogers wrote:
Graham wrote:
Interestingly (to me at least), in the days when we programmed everything in assembler, 8-bit microprocessors had a set of BCD (Binary Coded Decimal) instructions that allowed computations to mirror what we expect in real life. They were put there for use by pocket calculators, which is why these machines give the *right* answers and recur a third rather than a tenth.
Do you have any reference to this use of BCD? I only know of the use of BCD instructions with integer values, where their purpose is somewhat different (in particular the ease with which BCD can be displayed on 7-segment displays with very little processing capability, compared with plain binary values).
[Digs out old "Z80-CPU Z80A-CPU Technical Manual", September 1978, Copyright© 1977 by Zilog Inc.]
p. 29 (Sn. 5.3): "The flags and decimal adjust instruction (DAA) in the Z80 ... allow arithmetic operations for: multiprecision packed BCD numbers ... "
p. 31 (ditto): "Two BCD digit rotate instructions (RRD and RLD) allow a digit in the accumulator to be rotated with the two digits in a memory location ... These instructions allow for efficient BCD arithmetic."
p. 39 (Sn. 6.0): "There are also two non-testable bits in the flag register. Both of these are used for BCD arithmetic. They are: 1) Half carry (H) -- this is the BCD carry or borrow result from the least significant four bits of operation. When using the DAA (Decimal Adjust Instruction) this flag is used to correct the result of a previous packed decimal add or subtract. 2) Subtract Flag (N) -- Since the algorithm for correcting BCD operations is different for addition or subtraction, this flag is used to specify what type of instruction was executed last so that the DAA operation will be correct for either addition or subtraction."

p. 48 (Table 7.0.5): "DAA (Decimal Adjust Accumulator): Converts accumulator content into packed BCD following add or subtract with packed BCD operands."
The basic idea was that an 8-bit byte would store two decimal digits, one in each half-byte ("nibble") of 4 bits, of course in binary form: "0" = 0000, "1" = 0001, ..., "8" = 1000, "9" = 1001. The only arithmetic available is add or subtract, which operates on full 8-bit or 16-bit binary numbers. However, the DAA instruction corrects the binary result of adding or subtracting bytes that hold packed BCD, turning it back into a valid BCD result. Anything more complex had to be programmed outside the CPU.
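The addition half of that correction is only a few lines when sketched in C++ (this mirrors the Z80's DAA behaviour for adds of valid packed-BCD bytes; subtraction, which DAA also handles via the N flag, is omitted, and any carry out of the top digit is dropped):

```cpp
#include <cstdint>

// Add two packed-BCD bytes (two decimal digits each) and apply a
// DAA-style correction, as the Z80 would after a binary ADD.
// Assumes valid BCD input; addition only.
std::uint8_t bcd_add(std::uint8_t a, std::uint8_t b) {
    unsigned sum = a + b;
    if ((a & 0x0F) + (b & 0x0F) > 9) sum += 0x06;  // half-carry fix-up
    if (sum > 0x99) sum += 0x60;                   // full-carry fix-up
    return static_cast<std::uint8_t>(sum & 0xFF);  // carry out discarded
}
```

So 0x45 + 0x38, which is 0x7D in plain binary, corrects to 0x83 -- i.e. decimal 45 + 38 = 83.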
I used to use (and still have) a Texas Instruments TI-59 calculator, which was excellent for its day (about 1977). This used BCD storage (I discovered an undocumented trick which enabled me to see the 1-byte register contents on the display), both for numerical data (in "floating-point" format) and for program code. There were 1000 bytes available which could be partitioned between program code (per byte) and numbers (8 bytes/number). There were supplementary chips, hard-wired with special-purpose programs, which could be slotted in. One of these was a "matrix pack" for addition, subtraction and multiplication, and with these subroutines on call I was able to extract eigenvalues and eigenvectors of matrices up to 15x15 (provided they were symmetric -- there was just enough number storage to hold the numbers). Those were the days!
Best wishes to all, Ted.
Ted Harding writes:
The basic idea was that an 8-bit byte would store two decimal digits, one in each half-byte ("nibble") of 4 bits, of course in binary form "0" = 0000, "1" = 0001, ... , "8" = 1000, "9" = 1001. The only arithmetic available is add or subtract, which operate on full 8-bit or 16-bit binary numbers. However,
This bit I understand (and, in fact, still have to use today; it's still used a lot in industrial kit, although to be honest "misused" would usually be fairer).
A key advantage in pocket calculators and similar is that the raw data as presented on the data bus can easily be split out four bits (i.e. four "wires") at a time, each group going to a separate 7-seg display, which has internal logic to decode those four bits into the 0-9 digits. Without that a *lot* of extra logic is required.
These days it is probably the single biggest cause of confusion amongst new industrial programmers, and is rarely needed but often encountered.
However, I'm still not clear on the specific connection with fractional numbers. BCD to me is just a different way of representing decimal integers (and at its simplest means internal instructions which allow incrementing 9 to result in 16(dec) = 10(hex)). Maybe I'm looking too hard...
PS: I also have that Zilog manual, although it's not as readily to hand as yours clearly is :-) One of my first major programming applications, written in my teens and as far as I know still in use today in a pretty high-profile location, was a Z80-based protocol converter allowing two radically different fire alarm systems to talk to each other. I have fond memories of that chip!
"Mark Rogers" mark@quarella.co.uk writes:
However, I'm still not clear on the specific connection with fractional numbers. BCD to me is just a different way of representing decimal integers (and at its simplest means internal instructions which allow incrementing 9 to result in 16(dec) = 10(hex)). Maybe I'm looking too hard...
I believe the original point was that you could use decimal floating point instead of binary floating point, and thus avoid surprising answers when converted back to decimal for display. 4-bits-per-digit BCD is one way of doing that but not the only way.
It looks like decimal floating point may be better supported in the future, see e.g. http://www2.hursley.ibm.com/decimal/
On 01-Nov-04 Mark Rogers wrote:
Ted Harding writes:
The basic idea was that an 8-bit byte would store two decimal digits, one in each half-byte ("nibble") of 4 bits, of course in binary form "0" = 0000, "1" = 0001, ... , "8" = 1000, "9" = 1001. The only arithmetic available is add or subtract, which operate on full 8-bit or 16-bit binary numbers. However,
[...] A key advantage in pocket calculators and similar is that the raw data as presented on the databus can easily be split out four bits (ie four "wires") at a time, each going to a seperate 7-seg display, which have internal logic to decode those four bits into the 0-9 digits. Without that a *lot* of extra logic is required.
Agreed! (BTW, does anyone remember the earliest vacuum-tube based calculators, where the display was a rack of tubes with chunks of shaped wire inside which glowed? Similar visual principle to the 7-seg LCD display, and I suppose it might be supported by similar logic; though I don't know about that, and used to wonder ... ).
[...] However, I'm still not clear on the specific connection with fractional numbers. BCD to me is just a different way of representing decimal integers (and at its simplest means internal instructions which allow incrementing 9 to result in 16(dec) = 10(hex)). Maybe I'm looking too hard...
Well, I suppose it's swings and roundabouts again. Nothing stops you programming arbitrary-precision arithmetic in binary throughout -- indeed it's probably simpler that way. You just set up a long string of bytes with as many bits as you are going to need, and off you go. The native instruction set will cope directly with arithmetic on the short "words" in the string, and a bit of footwork with the "carry" bits enables you to glue it all together.
However, when it comes to presenting the decimal output to the user, it gets more complicated. Suppose you have been working to (say) 3 Kbit binary (about 1000 decimal digits). Stripping out the integer part of the decimal digits wouldn't be too hard; but 0.1, 0.01, 0.001, ... is more of a mess: each is theoretically an infinite string of binary bits, so you have to first work out what 0.1 is to 3K binary places, subtract that until it goes negative (giving you the multiple of 0.1), then do the same for 0.01, and so on.
It's much easier in that context to work in BCD throughout. Programming the BCD arithmetic operations is straightforward, and the human-readable answer is then just sitting there. Indeed, the program 'bc' which I described earlier stores its numbers internally in decimal.
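A common alternative to the repeated-subtraction scheme just described is to peel decimal digits off the binary fraction by repeatedly multiplying by ten. A sketch with a 64-bit fixed-point fraction rather than a 3 Kbit one (it uses GCC's __int128 extension, so this is g++-specific):

```cpp
#include <cstdint>
#include <string>

// Return the first n decimal digits of the binary fraction
// frac / 2^64 (a 64-bit fixed-point value in [0, 1)).
std::string decimal_digits(std::uint64_t frac, int n) {
    std::string out;
    for (int i = 0; i < n; ++i) {
        // Multiply the fraction by 10; the bits that overflow past
        // the top of the 64-bit fraction are the next decimal digit.
        unsigned __int128 p = static_cast<unsigned __int128>(frac) * 10;
        out += static_cast<char>('0' + static_cast<int>(p >> 64));
        frac = static_cast<std::uint64_t>(p);
    }
    return out;
}
```

For frac = 2^62, i.e. the binary fraction 0.01 (decimal 0.25), decimal_digits(frac, 4) yields "2500".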
PS: I also have that Zilog manual, although it's not as readily to hand as yours clearly is :-)
Well, that's only the case by virtue of the coincidence that I've lately been "organising" stacks of things that I've been hoarding for years. A few days ago I came across my box of stuff from the Z80-CP/M era, and there it was. A week ago, and I wouldn't have known where mine was either!
One of my first major programming applications, written in my teens and as far as I know still in use today in a pretty high-profile location, was a Z80-based protocol converter allowing two radically different fire alarm systems to talk to each other. I have fond memories of that chip!
So do I! My first major personal computer was a Sharp MZ-80B, very nice, and Z80-A based. And CP/M was remarkable in what it could fit into the top few K of 64K of RAM.
But I'd first learned the chip on the Sinclair ZX-81, actually disassembling the ROM by hand with pencil and paper (and the Z80 ref manual to hand). Incidentally, the ZX-81 stored numbers, and did arithmetic on them, in a very strange format. Not BCD at all, and not straight binary floats either, but binary floats with weird extras. I never worked out why they did it that way.
-- Mark Rogers, More Solutions Ltd :: Tel: 0845 45 89 555
Ted Harding writes:
Well, I suppose it's swings and roundabouts again. Nothing stops you programming arbitrary-precision arithmetic in binary throughout -- indeed it's probably simpler that way. [...] However, when it comes to presenting the decimal output to the user, it gets more complicated.
I guess this is the point I was missing; essentially you work in integers either way (and keep track of where the decimal point should be separately). But when it comes to handling the integers, BCD is best if you have a simple (low-intelligence) display to drive.

I'm guessing that 'bc' uses integers throughout, for example, but not BCD - but whilst I've had occasional cause to use bc I've never looked at its code.
But I'd first learned the chip on the Sinclair ZX-81, actually disassembling the ROM by hand with pencil and paper (and the Z80 ref manual to hand).
I'm probably showing my (lack of) age here, but the ZX81 was my first exposure to computing (£79.99 mail order, as I recall, and we had to wait ages for it). I was probably around 11 at the time. I didn't start playing with Z80 assembly until the Spectrum, by which time I was using it to write code which I transferred via serial cable to a PROM blower to program EPROMs for a bizarre clock-radio-cum-timing-device which I designed and programmed for my O-Level Computing course - great fun, as the teacher hadn't a clue what I was doing. I'm sure I passed the course by default because the examiner hadn't a clue either :-)
I never disassembled the ZX-81/Spectrum ROMs, but I did get the Spectrum ROM disassembly book from the library (the village library only had about 5 computing books to choose from), and I enjoyed reading it.
Funnily enough, I never had many friends at that age.
(Ted Harding) Ted.Harding@nessie.mcc.ac.uk writes:
But I'd first learned the chip on the Sinclair ZX-81, actually disassembling the ROM by hand with pencil and paper (and the Z80 ref manual to hand). Incidentally, the ZX-81 stored numbers, and did arithmetic on them, in a very strange format. Not BCD at all, and not straight binary floats either, but binary floats with weird extras. I never worked out why they did it that way.
I believe it used the same mechanism as the Spectrum, which used binary floating point with a one byte exponent and a 4 byte mantissa, but could also represent integers in [-65535,65535] as straight integers for faster calculation (and got the conversion wrong at one of the edge cases).
On 01-Nov-04 Richard Kettlewell wrote:
(Ted Harding) Ted.Harding@nessie.mcc.ac.uk writes:
But I'd first learned the chip on the Sinclair ZX-81, actually disassembling the ROM by hand with pencil and paper (and the Z80 ref manual to hand). Incidentally, the ZX-81 stored numbers, and did arithmetic on them, in a very strange format. Not BCD at all, and not straight binary floats either, but binary floats with weird extras. I never worked out why they did it that way.
I believe it used the same mechanism as the Spectrum, which used binary floating point with a one byte exponent and a 4 byte mantissa, but could also represent integers in [-65535,65535] as straight integers for faster calculation (and got the conversion wrong at one of the edge cases).
Yes, you're right! The exponent is 1 byte, and its value is 0x80 + the true binary exponent relative to binary 0.1... (so 1/8 has exponent 0x80 + (-2) = 0x7E, 1 has exponent 0x81). Hence the exponent can range from -127 to +127 (a zero value in this byte is reserved for an exact zero, represented as 00 00 00 00 00). The mantissa is 4 bytes, with the highest-order bit equal to 1 -- except that it is also used as a "sign" flag for the number, being 1 only for *negative* numbers and being set to 0 for *positive*.
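That layout can be decoded in a few lines (a sketch based purely on the description above; the small-integer short form mentioned earlier and any rounding subtleties are ignored):

```cpp
#include <cstdint>
#include <cmath>

// Decode a ZX Spectrum 5-byte float: one exponent byte (0x80 + true
// exponent), then a 4-byte mantissa whose top bit doubles as the sign
// (1 = negative); the true top mantissa bit is implicitly 1.
// Five zero bytes represent exactly 0.
double zx_float(const std::uint8_t b[5]) {
    if (b[0] == 0 && b[1] == 0 && b[2] == 0 && b[3] == 0 && b[4] == 0)
        return 0.0;
    bool negative = (b[1] & 0x80) != 0;
    // Restore the implicit leading 1 in place of the sign bit.
    std::uint32_t m = (static_cast<std::uint32_t>(b[1] | 0x80) << 24)
                    | (static_cast<std::uint32_t>(b[2]) << 16)
                    | (static_cast<std::uint32_t>(b[3]) << 8)
                    |  static_cast<std::uint32_t>(b[4]);
    // Value = (m / 2^32) * 2^(exponent - 0x80).
    double value = std::ldexp(static_cast<double>(m), b[0] - 0x80 - 32);
    return negative ? -value : value;
}
```

With this, the bytes 81 00 00 00 00 decode to 1.0 (exponent +1, mantissa 0.5), and 7E 00 00 00 00 decode to 0.125, matching the examples above.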
I've verified this by digging another book out of the "CP/M" box, and now I recall why I remember this as "weird extras" -- the explanation in this book was a spelt-out pencil-and-paper algorithm for working out what the binary representation should be, which managed to avoid describing the basic structure of the representation, and therefore managed to confuse me at the time!
Cheers, Ted.
On Sunday 31 October 2004 17:37, Mark Rogers wrote:
Graham wrote:
Interestingly (to me at least), in the days when we programmed everything in assembler, 8-bit microprocessors had a set of BCD (Binary Coded Decimal) instructions that allowed computations to mirror what we expect in real life. They were put there for use by pocket calculators, which is why these machines give the *right* answers and recur a third rather than a tenth.
Do you have any reference to this use of BCD? I only know of the use of BCD instructions with integer values, where their purpose is somewhat different (in particular the ease with which BCD can be displayed on 7-segment displays with very little processing capability, compared with plain binary values).
All of the processors of the day had BCD instructions. I just hauled out a 68000 manual and that too has the instructions, though whether anyone ever used them on a 16-bit CPU is open to question.
The following is entirely from memory so could be inaccurate on any count:
The first true micro was the 4004, designed in the early 70's for Busicom, a Japanese calculator manufacturer. They'd asked a small outfit called Intel to design a custom chipset for a new machine, but the guys at Intel figured they could do the job better with a stored-program chip, so the 4004 was born. It had a truly awful architecture and 4-bit serial hardware interface, but as far as I know included the all-important BCD instructions (and not much else!). It was followed by the 4040, a parallel device that was a bit easier to use, then in about 1974 the 8080 came along; the first machine to have a half-way decent instruction set and reasonable interfaces. Motorola followed a year or so later with the 6800, the first single-voltage device, then the 6802, which included an on-chip oscillator; this made it a doddle to build a system around in your back room (which I did). Ah, those blissful days!
Neither Intel nor Motorola had the courage to abandon compatibility with the past (can't speak for Itania and PPCs), which is why later chips have a number of little-used legacy instructions. Not that you can run a 4004 calculator program on a Pentium, but I suppose it could be translated pretty well word for word. Rather useful, eh?
-- GT
I'm not a C++ programmer, I'm of the Java variety, but I know that this is expected behaviour.
Float has quite a low precision (4 bytes in Java) so you get results like this:-
https://lists.xcf.berkeley.edu/lists/advanced-java/2002-January/018167.html
Consider using the 'double' type instead (I assume it is the same as in Java) which uses 8 bytes.
It's all to do with the intricacies (inaccuracies) of storing floating point values in binary.
Matt
On Saturday 30 Oct 2004 13:44, Stuart Bailey wrote:
[...] Basically, I'm getting an odd value printed for l - d (it comes out as 0.456699 and not 0.4567 as expected).