
Conversation

@Icbears Icbears (Contributor) commented Oct 24, 2025

Introduced RBFSinglePrecision class for memory optimization.
edit_mesh.zip
test.zip

@Icbears Icbears (Contributor, Author) commented Oct 24, 2025

> Introduced RBFSinglePrecision class for memory optimization. edit_mesh.zip test.zip

A Fluent mesh sample is attached, along with the code for the mesh deformation.

@ndem0 ndem0 (Member) commented Dec 2, 2025

Dear @Icbears, thank you! The PR is fine, but looking at the code, I've seen that the new class essentially just introduces a parameter for the precision (float32/64). To avoid code repetition, I'd ask you to add that parameter to the __init__ of the existing RBF class. It's more maintainable and in principle it also allows us to use other types (complex, float128, ...).

Added dtype parameter for precision control and updated related methods to handle different data types (fp16, fp32, fp64, fp96, and fp128), with default dtype fp64. Added warnings for unsupported precision types.
@Icbears Icbears (Contributor, Author) commented Dec 3, 2025

Dear @ndem0, thank you for your advice. I have added a dtype parameter (defaulting to fp64) to the RBF class, allowing users to select from the following precisions: fp16, fp32, fp64, and fp128. I tested these data types on my devices: fp16 consumes even less memory, but it can produce inaccurate mesh outputs. Additionally, my Windows 11 laptop does not support np.float128, although it works on a Linux workstation. By the way, the RBFSinglePrecision class has been removed, since it is now redundant. Please see the changes in the Files changed section, and feel free to let me know your thoughts.

@ndem0 ndem0 merged commit 48180d6 into mathLab:master Dec 4, 2025
3 of 13 checks passed