What is the meaning of ‘precision’ in floating-point numbers? Then what is meant by ‘single’ precision and ‘double’ precision?
Computers store numbers in binary, using a fixed number of bits. For floating-point numbers, precision refers to how many bits are used to store the value, which in turn determines how many significant digits it can hold. For example, the value of pi is 3.1415926535…, but we often round it to 3.14; storing a number in fewer bits forces the same kind of rounding. In the IEEE 754 standard, a single-precision float takes 32 bits (roughly 7 significant decimal digits), a double-precision float takes 64 bits (roughly 15–16 digits), and there is also half precision, which uses only 16 bits.
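To make this concrete, here is a small Python sketch (Python's built-in `float` is an IEEE 754 double, and the standard `struct` module can round-trip a value through the 32-bit single-precision format):

```python
import struct

pi = 3.141592653589793  # Python floats are 64-bit doubles

# Round-trip pi through single precision (32 bits):
# pack it as a 4-byte float, then unpack it back to a double.
pi_single = struct.unpack('f', struct.pack('f', pi))[0]

print(pi)         # 3.141592653589793   (~15-16 correct decimal digits)
print(pi_single)  # 3.1415927410125732  (only ~7 digits survive)
```

The digits beyond the seventh in the single-precision result are garbage introduced by the rounding to 32 bits, which is exactly the precision loss the answer above describes.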
Thank you for your answer.