
Conversation


@SS-JIA SS-JIA commented Dec 27, 2025

Stack from ghstack (oldest at bottom):

## Context

With the introduction of block-packed memory layouts for quantized tensors, the metadata stored by `vTensor` to describe the data layout within a texture/buffer was no longer sufficient to form a complete description of the layout. This created an awkward pattern of needing to estimate the `GPUMemoryLayout` that would be required to compute storage descriptors such as image extents.

This diff addresses the problem by introducing the `PackedDimInfo` struct to `vTensor`, which provides a complete description of how a tensor's data may be organized in the GPU buffer/texture used to store it, and allows simplification of the functions used to compute buffer numel or texture extents.

## `PackedDimInfo`

Introduced the `PackedDimInfo` struct, which encapsulates all information about packed dimensions in GPU tensors. This improves code organization and makes the relationship between related metadata fields explicit.

The `PackedDimInfo` struct contains (see the sketch after this list):
- `packed_dim`: which dimension is tightly packed (WHCN index), i.e. contiguous in memory
- `packed_dim_padded`: whether the packed dimension is padded to a multiple of 4; some layouts do this to accommodate vectorized loads/stores
- `outer_packed_dim`: the second-level packed dimension for block-packed layouts (4W4C, 4H4W); for layouts with only a single level of packing, this is equal to `packed_dim`
- `outer_packed_dim_padded`: whether the outer packed dim is padded (tiled layouts only)
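
For illustration, a minimal C++ sketch of what this struct could look like. The field names follow the list above, but the types, the WHCN numbering shown, and modeling `is_block_packed` (introduced in a later revision of this stack) as a method are assumptions rather than the exact declaration in the codebase:

```cpp
#include <cstdint>

// Illustrative sketch only; the actual declaration in the ExecuTorch
// Vulkan backend may differ.
struct PackedDimInfo {
  // WHCN index of the tightly packed (contiguous-in-memory) dimension;
  // assumed numbering: 0 = width, 1 = height, 2 = channels, 3 = batch.
  int32_t packed_dim;
  // True if the packed dim is padded to a multiple of 4, which some
  // layouts use to accommodate vectorized loads/stores.
  bool packed_dim_padded;
  // Second-level packed dim for block-packed layouts (4W4C, 4H4W);
  // equal to packed_dim when there is only one level of packing.
  int32_t outer_packed_dim;
  // True if the outer packed dim is padded (tiled layouts only).
  bool outer_packed_dim_padded;

  // Utility predicate described later in this PR; modeled here as a
  // method for the sake of the sketch.
  bool is_block_packed() const { return outer_packed_dim != packed_dim; }
};
```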

## Changes

- Added the `PackedDimInfo` struct along with the helper function `calculate_packed_dim_info()`
- Replaced the `packed_dim_` member with `packed_dim_info_` in the `vTensor` class
- Updated function signatures to accept a `PackedDimInfo&` instead of `packed_dim_`:
  * `create_hashed_layout`
  * `calculate_dim_order`
  * `calculate_padded_sizes`
  * `calculate_logical_limits`
  * `TextureMetadata` constructor/update
  * `vTensorStorage` constructor
- Added a `packed_dim_info()` accessor to the `vTensor` and `ComputeGraph` classes
- Store an additional `padded_sizes_` member in `vTensor`, which is now used for strides/image extents/GPU buffer numel computation instead of using `sizes_` directly (see the sketch after this list)
- Introduce memory layouts for the `kInt8x4` type that use only a single level of packing
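
As a rough sketch of how the padded sizes could be derived from `PackedDimInfo` (hypothetical helper names, assuming NCHW-ordered sizes and the struct sketched above; this is not the actual `calculate_padded_sizes` implementation):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: round n up to the next multiple of 4.
inline int64_t align_up4(int64_t n) { return (n + 3) & ~int64_t(3); }

// Sketch only: pad the packed dim (and, for block-packed layouts, the
// outer packed dim) to a multiple of 4. Assumes `sizes` is ordered
// outermost-to-innermost (e.g. NCHW), so WHCN index d maps to
// sizes[sizes.size() - 1 - d], and that the tensor has enough dims.
std::vector<int64_t> calc_padded_sizes_sketch(
    std::vector<int64_t> sizes, const PackedDimInfo& info) {
  const auto whcn_to_idx = [&](int32_t d) {
    return sizes.size() - 1 - static_cast<size_t>(d);
  };
  if (info.packed_dim_padded) {
    int64_t& s = sizes[whcn_to_idx(info.packed_dim)];
    s = align_up4(s);
  }
  if (info.is_block_packed() && info.outer_packed_dim_padded) {
    int64_t& s = sizes[whcn_to_idx(info.outer_packed_dim)];
    s = align_up4(s);
  }
  return sizes;
}
```

Strides, image extents, and the GPU buffer numel can then be computed from the padded sizes rather than from `sizes_` directly.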

Differential Revision: [D89832382](https://our.internmc.facebook.com/intern/diff/D89832382/)


pytorch-bot bot commented Dec 27, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16403

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 2 New Failures, 1 Unrelated Failure

As of commit 1ad1bef with merge base 0dfda64:

NEW FAILURES - The following jobs have failed:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

SS-JIA pushed a commit that referenced this pull request Dec 27, 2025
@meta-cla meta-cla bot added the CLA Signed label Dec 27, 2025
@github-actions

This PR needs a `release notes:` label

If your change should be included in the release notes (i.e., would users of this library care about this change?), please use a label starting with `release notes:`. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example:
`@pytorchbot label "release notes: none"`

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

SS-JIA pushed a commit that referenced this pull request Dec 29, 2025
SS-JIA pushed a commit that referenced this pull request Dec 29, 2025

SS-JIA commented Dec 29, 2025

Addressed comments (cc: @mergennachin):

- Introduced better enforcement of rules in `PackedDimInfo`
  - Also introduced the utility field `is_block_packed` to make the code more readable
- Fixed the logic of `virtual_transpose()`: virtual transposition is not allowed if either packed dim is one of the dims being transposed (see the sketch after this list)
- Introduced comprehensive testing for the metadata of constructed `vTensor` instances in `vulkan_compute_api_test`
- The `calculate_*()` functions are no longer public; the only call sites were in testing code anyway
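
A minimal sketch of that transposition rule, reusing the `PackedDimInfo` sketch from the PR description; `virtual_transpose_allowed` is a hypothetical helper, not the actual `virtual_transpose()` logic:

```cpp
#include <cstdint>

// Sketch only: a virtual transpose of WHCN dims dim0 and dim1 is
// rejected if either of them is a packed dim, since packing ties that
// dim to the physical texel layout.
bool virtual_transpose_allowed(
    const PackedDimInfo& info, int32_t dim0, int32_t dim1) {
  const bool touches_packed_dim =
      dim0 == info.packed_dim || dim1 == info.packed_dim ||
      dim0 == info.outer_packed_dim || dim1 == info.outer_packed_dim;
  return !touches_packed_dim;
}
```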

@SS-JIA SS-JIA requested a review from mergennachin December 29, 2025 22:00

@mergennachin mergennachin left a comment


There are a few unused variables.

What are your thoughts on adding a stricter flag (e.g., `-Wunused-parameter`) in `backends/vulkan/CMakeLists.txt`?
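
For illustration, a generic example of what `-Wunused-parameter` catches and the idiomatic ways to silence it when a parameter is intentionally unused (not code from this PR):

```cpp
// With -Wunused-parameter enabled, this warns: 'flags' is never used.
int pack_dim(int dim, int flags) {
  return dim * 4;
}

// Idiomatic fixes: omit the parameter name, or (C++17) mark it
// [[maybe_unused]] when the name should stay for documentation.
int pack_dim_unnamed(int dim, int /*flags*/) {
  return dim * 4;
}
int pack_dim_attr(int dim, [[maybe_unused]] int flags) {
  return dim * 4;
}
```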


SS-JIA commented Dec 29, 2025

@mergennachin yes, let's do that. Is there a list of additional flags you recommend beyond `-Wunused-parameter`? I'll put up a follow-up PR to add these flags plus the relevant fixes.


SS-JIA commented Dec 30, 2025

Changes after accept: a small change to `Concat.cpp` so that it doesn't need to call `calculate_padded_sizes`; this allows all `calculate_*()` utility functions to be made private.

Also resolved the unused parameter issues.

@meta-codesync meta-codesync bot merged commit d1a6c5d into gh/SS-JIA/388/base Dec 30, 2025
162 of 166 checks passed
@meta-codesync meta-codesync bot deleted the gh/SS-JIA/388/head branch December 30, 2025 04:04
@mergennachin

> Is there a list of additional flags you recommend in addition to `-Wunused-parameter`?

@SS-JIA What about just `-Wall` and `-Wextra` (which will include `-Wunused-parameter`)? The `arm`, `samsung`, and `qualcomm` backends have these flags enabled, FWIW:

https://github.com/pytorch/executorch/blob/main/backends/arm/CMakeLists.txt
https://github.com/pytorch/executorch/blob/main/backends/samsung/CMakeLists.txt
https://github.com/pytorch/executorch/blob/main/backends/qualcomm/CMakeLists.txt

Labels: CLA Signed, fb-exported, meta-exported