
Conversation

@pytorchbot (Collaborator)

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #16403 by @SS-JIA
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/388/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/388/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/388/orig
Differential Revision: D89832382
@diff-train-skip-merge

… metadata

Pull Request resolved: #16403

## Context

With the introduction of block-packed memory layouts for quantized tensors, the metadata stored by `vTensor` to describe the data layout within a texture/buffer was no longer sufficient to form a complete description of that layout. This led to an awkward pattern of having to estimate the `GPUMemoryLayout` in order to compute storage descriptors such as image extents.

This diff addresses the problem by introducing the `PackedDimInfo` struct to `vTensor`. It provides a complete description of how tensor data may be organized in the GPU buffer/texture used to store it, and allows the functions that compute buffer numel or texture extents to be simplified.
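To make the problem concrete, below is a minimal sketch of the index math for a hypothetical 4H4W block-packed buffer. The function name, the row-major block arrangement, and the exact padding convention are assumptions for illustration, not taken from the diff:

```cpp
#include <cstdint>

// Hypothetical index math for a 4H4W block-packed buffer: elements are grouped
// into 4x4 blocks spanning the H and W dims. A single "packed dim" field can
// record that W is the innermost dim, but not that H is *also* packed at a
// second level, which is why additional metadata is needed.
inline int64_t linear_idx_4h4w(
    int64_t h,
    int64_t w,
    int64_t W_padded /* W padded up to a multiple of 4 (assumed) */) {
  const int64_t block_row = h / 4;  // block index along H
  const int64_t block_col = w / 4;  // block index along W
  const int64_t blocks_per_row = W_padded / 4;
  const int64_t block_idx = block_row * blocks_per_row + block_col;
  // Within a block, W is the inner (tightly packed) dim, H the outer one.
  return block_idx * 16 + (h % 4) * 4 + (w % 4);
}
```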

## `PackedDimInfo`

Introduced the `PackedDimInfo` struct that encapsulates all information about packed dimensions in GPU tensors. This improves code organization and makes the relationship between related metadata fields explicit.

The `PackedDimInfo` struct contains:
- `packed_dim`: Which dimension is tightly packed (WHCN index), i.e. contiguous in memory
- `packed_dim_padded`: Whether the packed dimension is padded to a multiple of 4; some layouts do this to accommodate vectorized loads/stores
- `outer_packed_dim`: Second-level packing for block-packed layouts (4W4C, 4H4W); for layouts with only a single level of packing, this will be equal to `packed_dim`
- `outer_packed_dim_padded`: Whether the outer packed dim is padded (tiled layouts only)
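
Based on the field list above, a plausible sketch of the struct could look as follows; the field types and ordering are assumptions, not copied from the diff:

```cpp
#include <cstdint>

// Sketch of PackedDimInfo based on the description above; types are assumed.
struct PackedDimInfo {
  // WHCN index of the dim that is tightly packed / contiguous in memory.
  int32_t packed_dim;
  // True if packed_dim is padded to a multiple of 4 to accommodate
  // vectorized loads/stores.
  bool packed_dim_padded;
  // Second-level packed dim for block-packed layouts (e.g. 4W4C, 4H4W);
  // equal to packed_dim for layouts with a single level of packing.
  int32_t outer_packed_dim;
  // True if outer_packed_dim is padded; applies to tiled layouts only.
  bool outer_packed_dim_padded;
};
```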

## Changes

- Added the `PackedDimInfo` struct with the helper function `calculate_packed_dim_info()`
- Replaced the `packed_dim_` member with `packed_dim_info_` in the `vTensor` class
- Updated function signatures to accept `PackedDimInfo&` instead of `packed_dim_`:
  * `create_hashed_layout`
  * `calculate_dim_order`
  * `calculate_padded_sizes`
  * `calculate_logical_limits`
  * `TextureMetadata` constructor/update
  * `vTensorStorage` constructor
- Added a `packed_dim_info()` accessor to the `vTensor` and `ComputeGraph` classes
- Store an additional `padded_sizes_` member in `vTensor`, which is now used for stride/image extent/GPU buffer numel computation instead of using `sizes_` directly (see the sketch after this list)
- Introduce memory layouts for the `kInt8x4` type that use only a single level of packing
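
As an illustration of how `padded_sizes_` could be derived from `PackedDimInfo`, here is a sketch under assumed conventions (reusing the `PackedDimInfo` sketch above); it is not the actual `calculate_padded_sizes` implementation:

```cpp
#include <cstdint>
#include <vector>

// Round n up to the nearest multiple of 4.
inline int64_t align_up_4(int64_t n) {
  return (n + 3) & ~int64_t(3);
}

// Hypothetical padded-size computation: pad the packed dim (and, for
// block-packed layouts, the outer packed dim) to a multiple of 4.
// Indexing `sizes` directly by the struct's dim indices is assumed here.
std::vector<int64_t> calculate_padded_sizes_sketch(
    std::vector<int64_t> sizes,
    const PackedDimInfo& info) {
  if (info.packed_dim_padded) {
    sizes[info.packed_dim] = align_up_4(sizes[info.packed_dim]);
  }
  if (info.outer_packed_dim != info.packed_dim &&
      info.outer_packed_dim_padded) {
    sizes[info.outer_packed_dim] = align_up_4(sizes[info.outer_packed_dim]);
  }
  return sizes;
}
```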
ghstack-source-id: 331381718

Differential Revision: [D89832382](https://our.internmc.facebook.com/intern/diff/D89832382/)
@pytorchbot requested a review from SS-JIA as a code owner December 30, 2025 04:05

pytorch-bot bot commented Dec 30, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16414

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

⏳ No Failures, 133 Pending

As of commit 447d2a4 with merge base 0dfda64:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Dec 30, 2025
@github-actions

This PR needs a `release notes:` label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with `release notes:`. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
`@pytorchbot label "release notes: none"`

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@SS-JIA merged commit e63b441 into main Dec 30, 2025
149 of 153 checks passed
@SS-JIA deleted the gh/SS-JIA/388/orig branch December 30, 2025 04:18