ArrayFromFP16
Copies an array of type ushort into an array of float or double type with the given format.
bool  ArrayFromFP16(
   float&         dst_array[],       // copy to
   const ushort&  src_array[],       // copy from
   ENUM_FLOAT16_FORMAT  fmt          // format
   );
Overloading for the double type
bool  ArrayFromFP16(
   double&        dst_array[],       // copy to
   const ushort&  src_array[],       // copy from
   ENUM_FLOAT16_FORMAT  fmt          // format
   );
Parameters
dst_array[]
[out] Receiver array of type float or double.
src_array[]
[in] Source array of type ushort.
fmt
[in] Copying format from the ENUM_FLOAT16_FORMAT enumeration.
Return Value
Returns true if successful or false otherwise.
Note
Formats FLOAT16 and BFLOAT16 are defined in the ENUM_FLOAT16_FORMAT enumeration and are used in MQL5 only for operations with ONNX models.
If the output parameters obtained from the OnnxRun function execution are of type FLOAT16 or BFLOAT16, you can use this function to convert the result into float or double arrays.
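For instance, a minimal sketch of such a conversion (the sample values and array sizes below are illustrative assumptions; FLOAT_FP16 is a member of ENUM_FLOAT16_FORMAT):
//--- raw half-precision output, e.g. copied from OnnxRun results as ushort data
ushort raw_fp16[] = {0x3C00, 0x4000, 0x4200};   // 1.0, 2.0, 3.0 in FLOAT16
float  values[];                                // receiver array
//--- convert, assuming the data uses the standard FLOAT16 format
if(!ArrayFromFP16(values, raw_fp16, FLOAT_FP16))
   Print("ArrayFromFP16 failed. error code=", GetLastError());
else
   ArrayPrint(values);                          // expected: 1.0 2.0 3.0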
FLOAT16, also known as half-precision float, uses 16 bits to represent floating-point numbers: 1 sign bit, 5 exponent bits, and 10 mantissa bits. This format provides a balance between accuracy and computational efficiency. FLOAT16 is widely used in deep learning algorithms and neural networks, which require high-performance processing of large datasets. It accelerates computations by reducing the size of the numbers involved, which is especially important when training deep neural networks on GPUs.
BFLOAT16 (or Brain Floating Point 16) also uses 16 bits but differs from FLOAT16 in how they are allocated: 8 bits are used for the exponent and only 7 for the mantissa, so it preserves the dynamic range of a 32-bit float at the cost of precision. The format was developed for deep learning and artificial intelligence applications, particularly in Google's Tensor Processing Unit (TPU). BFLOAT16 demonstrates excellent performance in neural network training and can effectively accelerate computations.
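The difference is visible in how the two formats encode the same number. As a sketch, 1.0 is encoded as 0x3C00 in FLOAT16 but as 0x3F80 in BFLOAT16 (the FLOAT_FP16 and FLOAT_BFP16 enumeration members are assumed here):
ushort half_one[]  = {0x3C00};   // sign 0, exponent 01111 (bias 15), mantissa 0
ushort bhalf_one[] = {0x3F80};   // upper 16 bits of the 32-bit float 1.0f (0x3F800000)
float  out1[], out2[];
ArrayFromFP16(out1, half_one,  FLOAT_FP16);    // out1[0] == 1.0
ArrayFromFP16(out2, bhalf_one, FLOAT_BFP16);   // out2[0] == 1.0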
Example: a function from the article "Working with ONNX models in float16 and float8 formats"
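A minimal sketch of the pattern that function demonstrates: converting input data to float16, running the model, and casting the float16 output back to double. The function name, tensor sizes, and sample data below are illustrative assumptions; ArrayToFP16, OnnxRun, ArrayFromFP16, and the ONNX_NO_CONVERSION flag are the MQL5 API calls involved.
//+------------------------------------------------------------------+
//| Run a model with float16 tensors and cast the result to double   |
//+------------------------------------------------------------------+
bool RunFloat16Model(long model_handle)
  {
//--- prepare input data and convert it to float16
   double input_data[] = {1, 2, 3, 4};
   ushort input_fp16[];
   if(!ArrayToFP16(input_fp16, input_data, FLOAT_FP16))
     {
      Print("ArrayToFP16 error ", GetLastError());
      return(false);
     }
//--- run the model; the float16 output arrives as raw ushort data
   ushort output_fp16[4];
   if(!OnnxRun(model_handle, ONNX_NO_CONVERSION, input_fp16, output_fp16))
     {
      Print("OnnxRun error ", GetLastError());
      return(false);
     }
//--- cast the float16 result back to double
   double result[];
   if(!ArrayFromFP16(result, output_fp16, FLOAT_FP16))
     {
      Print("ArrayFromFP16 error ", GetLastError());
      return(false);
     }
   ArrayPrint(result);
   return(true);
  }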
See also