
Conversation

Contributor

@jade-cho jade-cho commented Dec 3, 2025

Description of the issue

  • Refactor the onednn::layout_to_memory_desc() function to improve readability and maintainability.

Checklist

  • Is it a proper fix (not a workaround)?
  • Did you include a test case for this fix, if necessary?
  • Did you review existing tests that could be extended to cover this scenario? Which tests did you review?

Tickets:

  • 175118

@github-actions github-actions bot added the category: GPU OpenVINO GPU plugin label Dec 3, 2025
@jade-cho jade-cho force-pushed the refactor_layout_to_mem_desc branch from fddcebe to e6fa4ba on December 3, 2025 04:27
@jade-cho jade-cho marked this pull request as ready for review December 3, 2025 04:27
@jade-cho jade-cho requested review from a team as code owners December 3, 2025 04:27
@jade-cho jade-cho force-pushed the refactor_layout_to_mem_desc branch from e6fa4ba to 52ff74f on December 3, 2025 04:29
@isanghao isanghao requested a review from Copilot December 3, 2025 08:31
Contributor

Copilot AI left a comment

Pull request overview

This PR refactors the onednn::layout_to_memory_desc function by decomposing it into specialized variants based on specific usage patterns. The original function used bit flags (mem_flags) to control different behaviors; these have been replaced with dedicated functions for each use case.

Key Changes

  • Replaced flag-based layout_to_memory_desc function with specialized variants: layout_to_memory_desc_flatten, layout_to_memory_desc_blocked, layout_to_memory_desc_grouped, and layout_to_memory_desc_strides
  • Refactored dimension and stride calculation logic into separate helper functions with switch statements instead of if-else chains
  • Updated all call sites across multiple files to use the appropriate specialized function instead of passing flags

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 2 comments.

Summary per file:

  • utils.hpp — Added declarations for the new specialized layout_to_memory_desc variants and updated the signature of the flag-based function
  • utils.cpp — Implemented the new specialized functions with refactored helpers for dimension and stride calculations; commented out the original implementation
  • program_node.cpp — Updated binary post-op creation to use a ternary operator with the specialized functions instead of flag-based calls
  • reorder_onednn.cpp — Replaced flag-based calls with layout_to_memory_desc_blocked
  • reduce_onednn.cpp — Replaced flag-based calls with layout_to_memory_desc_blocked
  • primitive_onednn_base.h — Updated fused-operation handling to use a ternary operator with layout_to_memory_desc_flatten
  • fully_connected_onednn.cpp — Replaced flag-based calls with layout_to_memory_desc_flatten and conditional logic for weights handling
  • deconvolution_onednn.cpp — Replaced flag-based calls with layout_to_memory_desc_flatten for the bias
  • convolution_onednn.cpp — Replaced flag-based calls with layout_to_memory_desc_flatten and added an explicit target_fmt parameter


dilation.insert(dilation.end(), insert_count, 0);
pad_l.insert(pad_l.end(), insert_count, 0);
pad_r.insert(pad_r.end(), insert_count, 0);
}
Contributor

(random spot)
Some thoughts about this PR:

  • The separation of layout_to_memory_desc looks good, especially for the use_strides case.
  • Does it clean up the lambda function that was introduced in https://github.com/openvinotoolkit/openvino/pull/32391/files?
  • layout_to_memory_desc is split into multiple functions, but they are all built on calculate_memory_dims. Does the split bring value?
  • Some call paths seem too deep. Is that unavoidable? For example:
    layout_to_memory_desc_grouped -> calculate_memory_dims -> calculate_default_dims -> calculate_3d_tensor_dims

Contributor Author

Updated

@jade-cho jade-cho force-pushed the refactor_layout_to_mem_desc branch from e312718 to 2c7fdd6 on December 4, 2025 03:36
@jade-cho jade-cho force-pushed the refactor_layout_to_mem_desc branch from 2c7fdd6 to 94e8c02 on December 5, 2025 07:24
