# X# Runtime and VO SDK Reference

## Functions.SetDecimal Method

Return and optionally change the setting that determines the number of decimal places used to display numbers.

#### Syntax

#### Return Value

Type: DWord

If nNewSetting is not specified, SetDecimal() returns the current setting.

If nNewSetting is specified, the previous setting is returned.
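The two return behaviors can be sketched as follows; a minimal illustration (the variable name `nOld` is hypothetical):

```x#
nOld := SetDecimal(4)   // changes the setting to 4; returns the previous setting
? SetDecimal()          // no argument: returns the current setting, here 4
```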


#### Remarks

SetDecimal() determines the number of decimal places displayed in the results of numeric functions and calculations.

Its operation depends directly on the SetFixed() setting:

If SetFixed() is FALSE, SetDecimal() establishes the minimum number of decimal digits displayed by Exp(), Log(), SqRt(), and division operations.

If SetFixed() is TRUE, all numeric values are displayed with exactly the number of decimal places specified by SetDecimal().

Note that neither SetDecimal() nor SetFixed() affects the actual numeric precision of calculations; only the display format is affected. All variables containing a FLOAT type carry internal picture information relating to digits and decimals (see FloatFormat()).
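The interaction between the two settings can be sketched as follows; a minimal illustration, assuming a default runtime state:

```x#
// Display width follows SetDecimal(); calculation precision does not
SetFixed(FALSE)
SetDecimal(2)
? 1.0/3.0        // division shows at least 2 decimals, e.g. 0.33
SetFixed(TRUE)
? 1.5            // padded to exactly 2 decimals, e.g. 1.50
```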

For database fields of type FLOAT, such picture information resides in the database header structure and is obtainable with FieldVal(). With the Val() function and with literal floats (that is, hard-coded in source code and thus known at compile time), the number of decimals is derived from the hard-coded decimal portion and the number of digits is taken from SetDigit(). If new floats are generated in expressions or by functions, the number of digits is always taken from SetDigit() and the number of decimals is determined as follows:

| Operator | Number of decimals |
|----------|--------------------|
| + or -   | Maximum of the two operands |
| *        | Sum of the two operands |
| /        | Current setting of SetDecimal() |

For functions, the number of decimals is determined by the current setting of SetDecimal(), or zero for special cases such as the Integer() function.
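The operand rules for expression results can be sketched as follows; the decimal counts in the comments follow the rules above and assume SetDecimal(2):

```x#
SetDecimal(2)
? 1.25 + 2.5      // + : maximum of operand decimals (2 and 1) -> 2 decimals
? 1.25 * 2.5      // * : sum of operand decimals (2 and 1) -> 3 decimals
? 10.0 / 3.0      // / : current SetDecimal() setting -> 2 decimals
```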


#### Examples

These examples show various results of the SetDecimal() function:

```x#
SetDecimal(2)
? 2.0/4.0   // 0.50
? 1.0/3.0   // 0.33
SetDecimal(4)
? 2.0/4.0   // 0.5000
? 1.0/3.0   // 0.3333
```

#### See Also