[volume - 5] Apply indexes and caching - Kim Donghwan #202
Hwan0518 wants to merge 9 commits into Loopers-dev-lab:Hwan0518
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your review settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough
In the product domain
Changes
Sequence Diagram(s)
sequenceDiagram
participant Client
participant Cache as ProductCacheManager
participant Lock as CacheLock
participant DB as ProductQuerydslRepository
participant ReadModel as product_read_model
Client->>Cache: getOrLoad(detailKey, loader)
alt Cache Hit
Cache-->>Client: ProductCacheDto
else Cache Miss
Cache->>Lock: executeWithLock(detailKey, loader)
Lock->>DB: findProductCacheDtoById(productId)
DB->>ReadModel: select from product_read_model
ReadModel-->>DB: ProductCacheDto
DB-->>Lock: ProductCacheDto
Lock->>Cache: put(detailKey, ProductCacheDto, TTL+jitter)
Lock-->>Client: ProductCacheDto
end
sequenceDiagram
participant Admin
participant Facade as ProductCommandFacade
participant CmdSvc as ProductCommandService
participant ReadRepo as ProductReadModelRepository
participant Cache as ProductCacheManager
Admin->>Facade: updateBrand(brandId, dto)
Facade->>CmdSvc: updateBrand(...)
CmdSvc->>ReadRepo: syncBrandNameInReadModel(brandId, name)
ReadRepo-->>CmdSvc: OK
CmdSvc->>CmdSvc: findActiveIdsByBrandId(brandId)
CmdSvc->>Cache: refreshProductDetailCache(productId) (for each id)
Cache-->>CmdSvc: OK
CmdSvc-->>Facade: updated
Facade-->>Admin: HTTP 200
sequenceDiagram
participant User
participant Facade as ProductQueryFacade
participant QuerySvc as ProductQueryService
participant Cache as ProductCacheManager
participant Lock as CacheLock
participant DB as ProductQuerydslRepository
User->>Facade: searchProducts(criteria, page)
Facade->>QuerySvc: searchProducts(...)
QuerySvc->>Cache: get(idListKey)
alt ID list hit
Cache-->>QuerySvc: IdListCacheEntry
else miss
QuerySvc->>Lock: executeWithLock(idListKey, loader)
Lock->>DB: searchProductIds(criteria,page)
DB-->>Lock: IdListCacheEntry
Lock->>Cache: put(idListKey, entry)
Lock-->>QuerySvc: IdListCacheEntry
end
QuerySvc->>Cache: mgetProductDetails(ids)
Cache-->>QuerySvc: List<ProductCacheDto>
QuerySvc-->>Facade: List<ProductDetailOutDto>
Facade-->>User: HTTP 200
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Pull request overview
This PR implements index optimization, Read Model separation (removing likeCount from Product domain), and Redis Cache-Aside pattern for the product catalog domain. It targets significant performance improvements for product listing/detail APIs at 100K+ data scale.
Changes:
- Added composite indexes across 8 entities (Product Read Model, Brand, Order, CartItem, CouponTemplate, IssuedCoupon, ProductLike, OrderItem) to eliminate full table scans
- Introduced ProductReadModelEntity with denormalized brandName/likeCount, migrating query paths from JOIN-based Product+Brand queries to single-table Read Model projections
- Implemented 2-layer Redis cache (ID list + detail) with write-through, stampede protection (LocalCacheLock/RedisCacheLock), PER, and TTL jitter
Reviewed changes
Copilot reviewed 67 out of 72 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| Product.java | Removed likeCount field from domain model |
| ProductEntity.java | Removed likeCount column |
| ProductReadModelEntity.java | New read-optimized entity with 12 composite indexes |
| ProductReadModelJpaRepository.java | New JPA repo with atomic counter updates and bulk operations |
| ProductReadModelRepositoryImpl.java | Repository impl with createdAt/likeCount preservation on save |
| ProductReadModelRepository.java | Domain repository interface for Read Model sync |
| ProductCacheManager.java | Redis cache manager with get/put/mget, PER, stampede protection |
| LocalCacheLock.java | JVM-local key-level lock for stampede prevention |
| RedisCacheLock.java | Redis SETNX distributed lock (standby) |
| ProductCacheConstants.java, ProductCacheDto.java, IdListCacheEntry.java, CacheLock.java | Cache infrastructure types |
| ProductQueryService.java | 2-layer cache read path, admin detail via Read Model |
| ProductCommandService.java | Read Model sync + cache write-through |
| ProductQueryFacade.java | Simplified to delegate to cache-aware service |
| ProductCommandFacade.java | Added Read Model sync + cache refresh calls |
| BrandCommandFacade.java | Brand name change triggers Read Model + cache sync |
| ProductQuerydslRepository.java | Migrated to Read Model, added tie-breaker, ID list query |
| ProductQueryPort.java, ProductQueryPortImpl.java | Extended with cache DTO and ID list query methods |
| ProductEntityMapper.java | Removed likeCount mapping |
| Entity files (Brand, Order, OrderItem, CartItem, ProductLike, IssuedCoupon, CouponTemplate) | Added composite indexes |
| ProductOutDto.java, ProductDetailOutDto.java, AdminProductOutDto.java, AdminProductDetailOutDto.java | Removed from(Product) factory methods |
| build.gradle.kts | Added benchmark source set |
| CLAUDE.md, .claude/skills/index-design/SKILL.md | Documentation for index design conventions |
| Test files | Updated for likeCount removal, Read Model, cache behavior |
| round5-docs/*, docs/todo/* | Design docs, measurement results, future TODOs |
```java
private String buildIdListCacheKey(Long brandId, ProductSortType sortType, int page, int size) {
    String brandPart = brandId != null ? brandId.toString() : "all";
    String sortPart = sortType != null ? sortType.name() : "LATEST";
    return ID_LIST_KEY_PREFIX + brandPart + ":" + sortPart + ":" + page + ":" + size;
}
```
The buildIdListCacheKey method is duplicated in both ProductQueryService (line 256) and ProductCommandService (line 246) with identical logic. This duplication means any change to the cache key format must be synchronized across both files, risking inconsistency. Consider extracting this into ProductCacheConstants or a shared utility method.
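The extraction the comment suggests can be sketched as a single shared helper. Note that the class name ProductCacheKeys, the prefix value, and the enum constants here are assumptions for illustration, not the PR's actual code:

```java
// Hypothetical shared key builder: both ProductQueryService and ProductCommandService
// would call this instead of each keeping a private copy of the key format.
final class ProductCacheKeys {
    // Assumed prefix value; the PR defines its own constant in ProductCacheConstants.
    static final String ID_LIST_KEY_PREFIX = "product:idlist:";

    enum ProductSortType { LATEST, PRICE_ASC, PRICE_DESC }

    private ProductCacheKeys() {}

    // Single source of truth for the ID-list key format.
    static String idListKey(Long brandId, ProductSortType sortType, int page, int size) {
        String brandPart = brandId != null ? brandId.toString() : "all";
        String sortPart = sortType != null ? sortType.name() : "LATEST";
        return ID_LIST_KEY_PREFIX + brandPart + ":" + sortPart + ":" + page + ":" + size;
    }
}
```

With this in place, a change to the key format touches one method, and both read and write paths stay consistent by construction.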
```java
CompletableFuture.runAsync(() -> {
    try {
        T fresh = loader.get();
        put(key, fresh, ttl);
    } catch (Exception e) {
        log.warn("PER 비동기 갱신 실패. key={}", key, e);
    }
});
```
The PER async refresh via CompletableFuture.runAsync() uses the default ForkJoinPool. In a high-traffic scenario, this could starve the common pool or lead to unbounded thread usage. Consider using a dedicated/bounded executor for PER refreshes.
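A bounded, dedicated executor for PER refreshes might look like the following sketch. Pool sizes, queue depth, and all names are illustrative assumptions, not the PR's code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import java.util.function.Supplier;

// Illustrative bounded executor for PER refreshes. A saturated pool silently drops
// refreshes: the stale value simply lives until its TTL, instead of refresh work
// piling up on the shared ForkJoinPool.commonPool().
final class PerRefreshExecutor {
    private static final ExecutorService EXECUTOR = new ThreadPoolExecutor(
            2, 4,                                          // small fixed pool for background refreshes
            30, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(100),                 // bounded backlog of pending refreshes
            new ThreadPoolExecutor.DiscardPolicy());       // drop work when saturated

    private PerRefreshExecutor() {}

    static <T> Future<?> refreshAsync(String key, Supplier<T> loader, BiConsumer<String, T> put) {
        return EXECUTOR.submit(() -> {
            try {
                put.accept(key, loader.get());             // refresh the cache entry
            } catch (Exception e) {
                // best-effort: a failed refresh leaves the still-valid cached value in place
            }
        });
    }
}
```

DiscardPolicy is a deliberate choice here: PER refreshes are an optimization, so degrading to "serve the stale value until TTL" under load is safer than unbounded thread or queue growth.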
```java
synchronized (lock) {
    try {
        return loader.get();
    } finally {
        locks.remove(key);
    }
}
```
The locks.remove(key) in the finally block can cause a race condition. If thread A holds the lock and is executing the loader, thread B arrives and gets the same lock object via computeIfAbsent. When thread A finishes and calls locks.remove(key), thread C could then call computeIfAbsent and get a new lock object, allowing C to enter the synchronized block concurrently with B (who is still waiting on the old lock object that was removed from the map).
To fix this, avoid removing the lock in the finally block, or use a reference-counting approach to ensure the lock object is only removed when no threads are waiting on it.
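The reference-counting approach described above can be sketched like this. It is a simplified stand-in, not the PR's LocalCacheLock; the idea is that the map entry is only removed once the last interested thread releases it, so two threads can never end up synchronizing on different lock objects for the same key:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch of a reference-counted per-key lock.
final class RefCountedKeyLock {
    private static final class Entry {
        int refs; // mutated only inside Map.compute(), which is atomic per key
    }

    private final Map<String, Entry> locks = new ConcurrentHashMap<>();

    <T> T executeWithLock(String key, Supplier<T> loader) {
        // Register interest atomically before synchronizing, so the entry cannot
        // be removed while this thread is still waiting on it.
        Entry entry = locks.compute(key, (k, e) -> {
            if (e == null) e = new Entry();
            e.refs++;
            return e;
        });
        synchronized (entry) {
            try {
                return loader.get();
            } finally {
                // Remove the entry only when no other thread still references it.
                locks.compute(key, (k, e) -> (e == null || --e.refs == 0) ? null : e);
            }
        }
    }
}
```

Because both the increment and the decrement happen inside compute(), a late-arriving thread either sees the old entry (and bumps its count) or a fresh one created after everyone released, never a mix.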
```java
} else {
    // 락 획득 실패 → 대기 후 재시도 (캐시 히트 기대)
    Thread.sleep(WAIT_MILLIS);
    return loader.get();
}
```
When the lock is not acquired, the code sleeps for 50ms and then calls loader.get() again, which means it will hit the DB anyway — defeating the purpose of the distributed lock. In a stampede scenario with N concurrent requests, N-1 of them will all call loader.get() after a 50ms delay instead of retrying the cache. The lock-failed path should retry reading from the cache after sleeping, not call the loader directly.
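The suggested lock-failed path (poll the cache a few times, and only fall back to the loader after the retries are exhausted) can be sketched as follows. WAIT_MILLIS mirrors the snippet above; MAX_RETRIES and the cache-read supplier are assumptions for illustration:

```java
import java.util.Optional;
import java.util.function.Supplier;

// Sketch of the lock-failed path: retry the cache, not the DB, so only the lock
// holder (plus a rare timeout case) ever reaches the loader.
final class LockFailedPath {
    static final long WAIT_MILLIS = 50;
    static final int MAX_RETRIES = 3;

    static <T> T awaitCacheOrLoad(Supplier<Optional<T>> cacheRead, Supplier<T> loader)
            throws InterruptedException {
        for (int i = 0; i < MAX_RETRIES; i++) {
            Thread.sleep(WAIT_MILLIS);            // give the lock holder time to populate the cache
            Optional<T> cached = cacheRead.get(); // re-read the cache instead of hitting the DB
            if (cached.isPresent()) {
                return cached.get();
            }
        }
        return loader.get();                      // last-resort fallback after all retries miss
    }
}
```

In a stampede of N requests, this turns N-1 loader calls into N-1 cache reads, which is the point of taking the distributed lock in the first place.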
```java
// DB 조회 + 캐시 저장
T value = loader.get();
put(key, value, ttl);
return value;
});
```
The getOrLoad method stores null values into the cache when loader.get() returns null (line 154-155: T value = loader.get(); put(key, value, ttl);). This will serialize null as the string "null" into Redis, which will then be deserialized back as the literal string "null" on subsequent reads, potentially causing ClassCastException or incorrect behavior. Add a null check before calling put, similar to the null guards in refreshProductDetail and refreshIdList.
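The null guard can be sketched as follows. This is a simplified getOrLoad with a plain map standing in for Redis and without the TTL plumbing; all names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch: skip put() when the loader returns null, so "null" is never serialized
// into the cache and later read back as a bogus value.
final class NullSafeGetOrLoad {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    <T> T getOrLoad(String key, Supplier<T> loader) {
        Object cached = cache.get(key);
        if (cached != null) {
            return (T) cached;
        }
        T value = loader.get();
        if (value != null) {          // null guard: do not cache "not found"
            cache.put(key, value);
        }
        return value;
    }

    boolean contains(String key) {
        return cache.containsKey(key);
    }
}
```

If negative caching is actually desired (to shield the DB from repeated lookups of missing IDs), a dedicated sentinel value with a short TTL is the usual alternative to storing raw null.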
29bd14c to f2cd6b9
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
apps/commerce-api/build.gradle.kts (1)
54-54: ⚠️ Potential issue | 🟡 Minor
Upgrade the archunit dependency to a newer version.
archunit-junit5:1.3.0 is an old version. The latest release is 1.4.1, and a 1.3.2 patch is also available. Using an outdated dependency means missing newer releases that include bug fixes and performance improvements, so consider upgrading to at least 1.3.2, or update to the latest major version 1.4.1 and run compatibility tests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/build.gradle.kts` at line 54, Update the archunit test dependency declaration in apps/commerce-api/build.gradle.kts: replace the version string in the testImplementation("com.tngtech.archunit:archunit-junit5:1.3.0") entry with a newer release (preferably "1.4.1", or at minimum "1.3.2"), then run the project's test suite and any architecture tests to validate compatibility after the upgrade.
apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java (1)
51-72: ⚠️ Potential issue | 🟠 Major
The cache refresh runs inside the transaction, risking data inconsistency.
The cache write-through on lines 64-68 executes inside @Transactional. If a later step or the DB commit throws and the transaction rolls back:
- DB: the brand-name change is rolled back
- Cache: already refreshed with the new brand name (not rolled back)
In production, users would then read the wrong brand name from the cache.
Suggested fix: move the cache refresh to after the transaction commits
```diff
-// 2. 브랜드 수정 (브랜드명 변경 시 상품 상세 캐시 write-through)
-@Transactional
-public AdminBrandDetailOutDto updateBrand(Long id, AdminBrandUpdateInDto inDto) {
+// 2. 브랜드 수정
+public AdminBrandDetailOutDto updateBrand(Long id, AdminBrandUpdateInDto inDto) {
+    AdminBrandDetailOutDto result = updateBrandInternal(id, inDto);
+
+    // 트랜잭션 커밋 후 캐시 갱신 (비동기 또는 @TransactionalEventListener 활용)
+    List<Long> productIds = productQueryService.findActiveIdsByBrandId(id);
+    for (Long productId : productIds) {
+        productCommandService.refreshProductDetailCache(productId);
+    }
+
+    return result;
+}
+
+@Transactional
+private AdminBrandDetailOutDto updateBrandInternal(Long id, AdminBrandUpdateInDto inDto) {
     // ... 기존 DB 변경 로직
-
-    // 상품 상세 캐시 write-through (해당 브랜드의 전체 상품)
-    List<Long> productIds = productQueryService.findActiveIdsByBrandId(id);
-    for (Long productId : productIds) {
-        productCommandService.refreshProductDetailCache(productId);
-    }
```
Alternatively, trigger the cache refresh after commit using @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT); this approach is recommended.
Additional test: add a test case verifying that the cache state is unchanged when the brand-update transaction rolls back.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java` around lines 51 - 72, The updateBrand method performs cache write-through (productCommandService.refreshProductDetailCache) while still inside the `@Transactional` boundary, risking cache/DB divergence on rollback; move the cache refresh logic out of the transaction by publishing a domain event (e.g., BrandUpdatedEvent with brand id and new name) at the end of updateBrand or return value, and handle that event in a separate listener annotated with `@TransactionalEventListener`(phase = TransactionPhase.AFTER_COMMIT) (or an application event listener that runs after commit) which calls productQueryService.findActiveIdsByBrandId and then productCommandService.refreshProductDetailCache for each productId; add/adjust tests to assert cache is not updated when the transaction rolls back.
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/mapper/ProductEntityMapper.java (1)
22-37: ⚠️ Potential issue | 🟠 Major
Preserve the original deletedAt instead of re-stamping the deletion time.
When mapping a deleted domain object, toEntity() calls entity.delete(), which takes no timestamp, so the deletedAt held by the domain cannot be carried over. In production this can skew soft-delete audit logs, archival batches, and the reference time for read-model resynchronization. Change the ProductEntity creation/restore path to inject product.getDeletedAt() directly, and in the deleted Product -> ProductEntity test, add a case asserting the identical timestamp instead of isNotNull().
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/mapper/ProductEntityMapper.java` around lines 22 - 37, The toEntity(Product product) mapping currently calls entity.delete() which sets a fresh deletion timestamp and loses the original product.getDeletedAt(); change the mapping so ProductEntity is created/initialized with the original product.getDeletedAt() (pass that timestamp into ProductEntity.of or an explicit setter/constructor) instead of calling entity.delete(), and update the Product -> ProductEntity test to assert that the entity.deletedAt equals the domain product.getDeletedAt() (exact timestamp equality) rather than just isNotNull().
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductCommandFacadeTest.java (1)
56-67: ⚠️ Potential issue | 🟠 Major
Identical IDs in the test fixtures mask cache-invalidation argument mistakes.
Both createTestProduct() and createTestBrand() use 1L, so the tests pass even if the facade passes the wrong identifier to refreshIdListCacheForAllSorts() or refreshIdListCacheForSort(). In production, the per-brand list cache would not be refreshed and stale responses would linger. Use distinct values for productId and brandId, and add cases verifying that the exact expected identifier is passed for each of create/update/delete.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductCommandFacadeTest.java` around lines 56 - 67, The test fixtures use the same id for product and brand which masks incorrect cache-invalidation calls; update createTestProduct() and createTestBrand() to return distinct ids (e.g., productId != brandId) and then add assertions in ProductCommandFacadeTest that when invoking create/update/delete flows the facade methods refreshIdListCacheForAllSorts() and refreshIdListCacheForSort() receive the exact expected identifiers for each operation (verify productId is passed for product-related cache refreshes and brandId for brand-scoped refreshes), touching the test helpers createTestProduct, createTestBrand and the test cases that exercise the facade methods to explicitly assert the passed ids.
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java (1)
163-200: ⚠️ Potential issue | 🟡 Minor
Like-count increase/decrease tests lack Read Model and cache failure scenarios
The increaseLikeCount and decreaseLikeCount tests only verify the happy path. Tests are needed for these failure scenarios:
- whether an exception propagates when readModelRepository.increaseLikeCount() fails
- whether the Read Model update is retained when productCacheManager.refreshProductDetail() fails
- exception handling when called with a non-existent product ID
In production, a cache outage must not become a like-feature outage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java` around lines 163 - 200, Add unit tests for failure scenarios for both ProductCommandService.increaseLikeCount and .decreaseLikeCount: (1) mock readModelRepository.increaseLikeCount()/decreaseLikeCount() to throw and assert the exception is propagated by productCommandService; (2) mock readModelRepository to succeed but have productCacheManager.refreshProductDetail(...) throw and assert the readModelRepository method was still called (i.e., read model update is retained) while the cache exception is observed/propagated or handled as per implementation; (3) mock readModelRepository to throw a not-found/invalid-id exception for a non-existent product ID and assert productCommandService surfaces that error; use the existing test class names (IncreaseLikeCountTest, DecreaseLikeCountTest) and verify interactions with readModelRepository and productCacheManager via Mockito verify and when(...).thenThrow(...) to cover these scenarios.
apps/commerce-api/src/test/java/com/loopers/catalog/brand/application/facade/BrandCommandFacadeTest.java (1)
97-127: ⚠️ Potential issue | 🟡 Minor
updateBrand tests lack bulk-product scenarios and failure cases
The current tests only verify cache refresh for two products. Consider adding:
- verification that no cache-refresh calls occur when updating a brand with zero products
- whether an exception propagates or is isolated when syncBrandNameInReadModel fails
- whether the remaining products are still processed when some refreshProductDetailCache calls fail
Also consider the performance impact of renaming a brand with thousands of products in production.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/brand/application/facade/BrandCommandFacadeTest.java` around lines 97 - 127, Add three additional unit tests for BrandCommandFacade.updateBrand: (1) "no products" case: mock productQueryService.findActiveIdsByBrandId(1L) to return empty list and verify productCommandService.syncBrandNameInReadModel is still called but no refreshProductDetailCache invocations occur; (2) "syncBrandNameInReadModel fails" case: make productCommandService.syncBrandNameInReadModel throw an exception (or return a failed future if async) and assert whether brandCommandFacade.updateBrand propagates that exception or handles it (mirror production behavior) by expecting the exception or verifying isolation behavior; (3) "partial refresh failures" case: mock productQueryService.findActiveIdsByBrandId to return many ids, have productCommandService.refreshProductDetailCache throw for some ids and succeed for others, then verify the method was invoked for all ids and assert that updateBrand either propagates the combined failure or continues processing (assert expected behavior). Use the same setup pattern as the existing test (Brand.reconstruct, given(...) stubs) and verify interactions with brandQueryService, brandCommandService, productQueryService, and productCommandService for each scenario.
🟡 Minor comments (10)
apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java-59-63 (1)
59-63: ⚠️ Potential issue | 🟡 Minor
In tearDown, one cleanup failure must not skip the other cleanup
If line 61 throws, line 62 never runs: Redis keys are left behind and subsequent E2E tests fail in a cascade that is hard to diagnose. It is safer to guarantee that both the DB and Redis cleanups are attempted (via try/finally or a dedicated cleanup coordinator) and to report the second exception as suppressed alongside the first. Also add a mock-based test verifying that the other cleanup is still invoked when one fails.
Example fix
```diff
 @AfterEach
 void tearDown() {
-    databaseCleanUp.truncateAllTables();
-    redisCleanUp.truncateAll();
+    RuntimeException cleanupFailure = null;
+    try {
+        databaseCleanUp.truncateAllTables();
+    } catch (RuntimeException e) {
+        cleanupFailure = e;
+    } finally {
+        try {
+            redisCleanUp.truncateAll();
+        } catch (RuntimeException e) {
+            if (cleanupFailure != null) {
+                cleanupFailure.addSuppressed(e);
+            } else {
+                cleanupFailure = e;
+            }
+        }
+    }
+    if (cleanupFailure != null) {
+        throw cleanupFailure;
+    }
 }
```
As per coding guidelines, "Integration tests are checked for isolation level, flakiness risk, and test-data setup/cleanup strategy."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java` around lines 59 - 63, The tearDown method currently calls databaseCleanUp.truncateAllTables() then redisCleanUp.truncateAll() so if the first call throws the second is skipped; change tearDown to always attempt both cleanups by wrapping each cleanup call so the other still runs (use try/finally or independent try/catch blocks), capture any exceptions and attach one as suppressed to the other (or aggregate them) before rethrowing or failing the test, and update/add a mock-based test to verify that when databaseCleanUp.truncateAllTables() throws the redisCleanUp.truncateAll() is still invoked; refer to the tearDown method and the methods databaseCleanUp.truncateAllTables() and redisCleanUp.truncateAll() when locating where to apply these changes.
apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java-740-768 (1)
740-768: ⚠️ Potential issue | 🟡 Minor
Verifying only the detail cache can miss list-cache invalidation regressions
In production, if the list cache survives a product update, the old name and price keep being served for the length of the TTL. The current scenario only prewarms the detail lookup, so it cannot catch a missing SCAN-based list-key invalidation or a missing per-parameter key (brandId, sort, page). Add a separate test that calls the list endpoint at least once before the update to populate the cache, then verifies that the same query returns updated results after the update. Including a case where a deleted product disappears from the list makes regression detection more reliable.
Example test additions
```diff
+mockMvc.perform(get("/api/v1/products")
+        .param("brandId", brandId.toString())
+        .param("sort", "PRICE_ASC"))
+    .andExpect(status().isOk())
+    .andExpect(jsonPath("$.content[0].name").value("에어맥스"));
+
 mockMvc.perform(put("/api-admin/v1/products/{productId}", productId)
         .header(ADMIN_LDAP_HEADER, ADMIN_LDAP_VALUE)
         .contentType(MediaType.APPLICATION_JSON)
         .content(objectMapper.writeValueAsString(updateRequest)))
     .andExpect(status().isOk());
 mockMvc.perform(get("/api/v1/products/{productId}", productId))
     .andExpect(status().isOk())
     .andExpect(jsonPath("$.name").value("에어맥스 97"))
     .andExpect(jsonPath("$.price").value(159000))
     .andExpect(jsonPath("$.stock").value(200))
     .andExpect(jsonPath("$.description").value("레트로 러닝화"));
+
+mockMvc.perform(get("/api/v1/products")
+        .param("brandId", brandId.toString())
+        .param("sort", "PRICE_ASC"))
+    .andExpect(status().isOk())
+    .andExpect(jsonPath("$.content[0].name").value("에어맥스 97"))
+    .andExpect(jsonPath("$.content[0].price").value(159000));
```
As per coding guidelines, "These are Java test review criteria. Unit tests are checked for boundary values, failure cases, and exception flows. Integration tests are checked for isolation level, flakiness risk, and test-data setup/cleanup strategy."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java` around lines 740 - 768, Extend the updateProductThenGetReturnsUpdatedData test (or add a new test) to also prewarm and assert the product listing cache: before updating call the listing endpoint (GET /api/v1/products with the same query params e.g., brandId, sort, page) at least once to populate cache; after performing put via AdminProductUpdateRequest to /api-admin/v1/products/{productId} call the same listing GET and assert the updated name/price appear (and that deleted products are removed if you add a deletion step). Ensure you use the same request parameters used by production caching (e.g., brandId, sort, page) when prewarming and asserting, and add a separate assertion flow for delete-to-list removal if covering deletion regressions.
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheConstants.java-29-33 (1)
29-33: ⚠️ Potential issue | 🟡 Minor
The documented cache TTL values and the implementation disagree.
The design doc (05-to-be-cache-visualization.html) specifies "product detail 10 minutes, product list 5 minutes", but ProductCacheConstants.java implements DETAIL_TTL = 2 minutes and ID_LIST_TTL = 3 minutes.
In production this mismatch causes the following problems:
- the cache-consistency window (how long stale data is tolerated) no longer matches the design intent
- cache hit-rate monitoring baselines diverge (alerts tuned to the 5-10 minute values will not fire)
- capacity planning and the invalidation strategy become less predictable
Decide whether the doc or the code should change, verify that the chosen TTL values are consistent with ProductCacheManager's PER (Probabilistic Early Refresh) logic, then apply the fix.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheConstants.java` around lines 29 - 33, The TTL constants in ProductCacheConstants (DETAIL_TTL and ID_LIST_TTL) do not match the design doc; change DETAIL_TTL to Duration.ofMinutes(10) and ID_LIST_TTL to Duration.ofMinutes(5) to match "상품 상세 10분, 상품 목록 5분", then review ProductCacheManager's PER logic (references to DETAIL_TTL and ID_LIST_TTL) to ensure any early-refresh windows or probabilistic calculations use the updated constants and adjust thresholds if they were tuned for the old 2/3 minute values.
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductReadModelJpaRepository.java-27-34 (1)
27-34: ⚠️ Potential issue | 🟡 Minor
Add clearAutomatically = true to the @Modifying queries to keep the persistence context consistent.
The current code never loads an entity before the UPDATE in increaseLikeCount, decreaseLikeCount, or updateBrandNameByBrandId, so there is no immediate cache-inconsistency problem. However, if a future refactor adds code that fetches the same entity before calling these methods, the persistence context's first-level cache will not reflect the DB change and can cause data-inconsistency bugs.
Changing all three methods to @Modifying(clearAutomatically = true) clears the persistence context automatically right after the query runs, securing this against future changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductReadModelJpaRepository.java` around lines 27 - 34, The three repository modifying methods updateBrandNameByBrandId, increaseLikeCount and decreaseLikeCount should include clearAutomatically = true on their `@Modifying` annotations to avoid persistence-context cache staleness; update each method's `@Modifying` to `@Modifying`(clearAutomatically = true) so the EntityManager clears the first-level cache immediately after the bulk UPDATE and prevents future read-after-write inconsistencies if the same entity is later loaded in the same transaction.
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/LocalCacheLockTest.java-34-83 (1)
34-83: ⚠️ Potential issue | 🟡 Minor
The @DisplayName does not match what the test actually does.
The DisplayName on line 35 states "loader runs only once; the rest wait and share the result", but the assertion on line 82 expects loaderCallCount.get() to equal threadCount (100 calls).
This mismatch misrepresents the test's intent and will confuse future maintenance. It should be made explicit that LocalCacheLock only serializes execution and does not cache results.
Suggested fix: change the DisplayName to match the actual behavior
```diff
 @Test
-@DisplayName("[executeWithLock()] 같은 key 100개 동시 요청 -> loader 1회만 실행. 나머지는 대기 후 결과 공유")
+@DisplayName("[executeWithLock()] 같은 key 100개 동시 요청 -> loader 직렬 실행 (100회). 락으로 동시 DB 조회 방지, 결과 캐싱은 CacheManager에서 수행")
 void sameKeyConcurrentRequests_loaderCalledOnce() throws InterruptedException {
```
Additional test: confirm that an integration test exists at the ProductCacheManager level verifying real stampede protection (loader called once, result shared).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/LocalCacheLockTest.java` around lines 34 - 83, Update the test DisplayName to match the actual behavior verified in sameKeyConcurrentRequests_loaderCalledOnce: LocalCacheLock only serializes access (does not cache), so change the `@DisplayName` text from "loader 1회만 실행. 나머지는 대기 후 결과 공유" to something like "같은 key 100개 동시 요청 -> loader는 직렬 실행되어 100회 호출" so the message aligns with the assertion that loaderCallCount equals threadCount; reference the test method sameKeyConcurrentRequests_loaderCalledOnce and the LocalCacheLock behavior when making the change.
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/repository/ProductReadModelRepositoryImpl.java-59-68 (1)
59-68: ⚠️ Potential issue | 🟡 Minor
decreaseLikeCount() cannot distinguish a missing Read Model row from likeCount = 0.
With the WHERE likeCount > 0 condition, a zero-row result in the current implementation is ambiguous between two situations:
- the entity exists and likeCount is already 0 (normal)
- the entity itself does not exist (a data-consistency problem)
In production, a missed Read Model sync or a data mismatch fails silently and goes unnoticed. Incident detection is delayed and users may be shown a wrong likeCount.
Suggested fix: check entity existence first, or verify with a separate query
```diff
 @Override
 public void decreaseLikeCount(Long productId) {
-    // 원자적 감소 (단일 SQL UPDATE — 0 이하로 내려가지 않음)
-    int updatedRows = jpaRepository.decreaseLikeCount(productId);
-
-    // 대상 Read Model 미존재 시 예외 (likeCount가 이미 0인 경우는 정상 — 0행 반환 허용)
-    // Note: decreaseLikeCount WHERE likeCount > 0 조건으로 0행 반환은 이미 0인 경우도 포함
-    // 따라서 여기서는 검증하지 않음 (음수 방지가 목적)
+    // 엔티티 존재 여부 먼저 확인
+    if (!jpaRepository.existsById(productId)) {
+        throw new CoreException(ErrorType.PRODUCT_NOT_FOUND);
+    }
+
+    // 원자적 감소 (단일 SQL UPDATE — 0 이하로 내려가지 않음)
+    jpaRepository.decreaseLikeCount(productId);
 }
```
Additional test: add a test case verifying that decreaseLikeCount() throws when the Read Model row does not exist.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/repository/ProductReadModelRepositoryImpl.java` around lines 59 - 68, The decreaseLikeCount(Long productId) implementation currently can't tell if updatedRows == 0 means the entity is missing or likeCount was already 0; update the method (ProductReadModelRepositoryImpl.decreaseLikeCount) to distinguish these cases by: if jpaRepository.decreaseLikeCount(productId) returns 0, call a separate existence check (e.g., jpaRepository.existsById(productId) or findById) and throw a specific exception when the entity does not exist; otherwise treat 0 as a no-op for already-zero likeCount. Also add a unit/integration test asserting that decreaseLikeCount on a non-existent productId raises the expected exception.
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java-240-307 (1)
240-307: ⚠️ Potential issue | 🟡 Minor
getOrLoad tests lack a Redis failure-isolation case
The cache miss/hit scenarios of getOrLoad are tested, but there is no test verifying that an exception from a failed Redis connection is isolated and the loader still runs normally. This test is essential for production resilience.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java` around lines 240 - 307, Add a test that simulates Redis/unavailable-cache behavior and verifies getOrLoad still isolates the failure and runs the loader: arrange by configuring the underlying cache (used by productCacheManager) to throw on get/put (or use a spy/mock to throw RuntimeException), call productCacheManager.getOrLoad(key, ProductDetailOutDto.class, Duration.ofMinutes(10), loader) where loader increments loaderCallCount and returns a value, and assert the loader was invoked (loaderCallCount == 1), the returned DTO equals the loader result, and that the cache exception did not propagate (no exception thrown); also assert that productCacheManager.get(key, ProductDetailOutDto.class) does not return a cached value if put failed. Use the existing method names productCacheManager.getOrLoad, productCacheManager.get, productCacheManager.put and the loader lambda to locate and implement the test.
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java-76-78 (1)
76-78: ⚠️ Potential issue | 🟡 Minor
updateProduct refreshes only the PRICE_ASC cache, possibly missing a sort type
On a price change, only the PRICE_ASC sort is refreshed. However, a price change also affects the PRICE_DESC sort. Confirm whether this is an intentional design choice.
🔧 Suggestion: refresh PRICE_DESC as well
```diff
 // write-through: 상세 캐시 + PRICE_ASC 정렬 ID 리스트 (가격 변경 영향)
 productCommandService.refreshProductDetailCache(id);
 productCommandService.refreshIdListCacheForSort(updatedProduct.getBrandId(), ProductSortType.PRICE_ASC);
+productCommandService.refreshIdListCacheForSort(updatedProduct.getBrandId(), ProductSortType.PRICE_DESC);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java` around lines 76 - 78, The updateProduct flow in ProductCommandFacade currently refreshes product detail cache and only calls productCommandService.refreshIdListCacheForSort(..., ProductSortType.PRICE_ASC), which misses the PRICE_DESC ordering that is also affected by price changes; modify updateProduct to also call productCommandService.refreshIdListCacheForSort(updatedProduct.getBrandId(), ProductSortType.PRICE_DESC) (i.e., refresh both PRICE_ASC and PRICE_DESC) alongside the existing productCommandService.refreshProductDetailCache(id) call so both sort caches are updated when price changes.
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java-380-431 (1)
380-431: ⚠️ Potential issue | 🟡 Minor
getOrLoadProductDetail tests - Redis failure scenario missing

The cache-hit, cache-miss, and product-not-found cases are tested, but the DB fallback behavior on a Redis failure is not. To verify operational resilience, a test is needed that confirms the service falls back to a direct DB lookup when productCacheManager.getOrLoadWithPer() throws an exception.

💡 Suggested additional test case

```java
@Test
@DisplayName("[getOrLoadProductDetail()] Redis failure -> DB fallback. Returns ProductDetailOutDto without throwing")
void redisFailure_fallbackToDb() {
    // Arrange
    given(productCacheManager.getOrLoadWithPer(any(), any(), any(), any()))
        .willThrow(new RuntimeException("Redis connection failed"));
    ProductCacheDto cacheDto = new ProductCacheDto(...);
    given(productQueryPort.findProductCacheDtoById(1L)).willReturn(cacheDto);

    // Act & Assert
    // Verify that ProductQueryService catches the Redis exception internally and falls back to the DB
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java` around lines 380 - 431, The tests are missing a Redis-failure scenario: add a test for productQueryService.getOrLoadProductDetail that simulates productCacheManager.getOrLoadWithPer throwing (e.g., RuntimeException) and verifies the service falls back to DB by stubbing productQueryPort.findProductCacheDtoById to return a ProductCacheDto and asserting a valid ProductDetailOutDto is returned; also verify productQueryPort.findProductCacheDtoById was invoked and that the exception from getOrLoadWithPer is swallowed (no exception escapes the service).
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java-52-104 (1)
52-104: ⚠️ Potential issue | 🟡 Minor
Incomplete ExecutorService shutdown - resource leak and flaky-test risk

Only executor.shutdown() is called, without awaitTermination(), so the test may end before all tasks complete. The doneLatch.await() makes it work for now, but if an exception prevents the latch from counting down, the test waits forever. It is safer to set a test timeout and add executor.awaitTermination().

🛡️ Improved shutdown handling

```diff
+@Timeout(30)
 @DisplayName("[getOrLoad()] single-key stampede - 100 concurrent requests on a cache miss -> loader calls minimized (ideally 1)")
 void singleKeyStampede_loaderMinimized() throws InterruptedException {
     // ... existing code ...
     readyLatch.await();
     startLatch.countDown();
-    doneLatch.await();
+    boolean completed = doneLatch.await(30, TimeUnit.SECONDS);
+    assertThat(completed).isTrue();
     executor.shutdown();
+    executor.awaitTermination(5, TimeUnit.SECONDS);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java` around lines 52 - 104, The test method singleKeyStampede_loaderMinimized() currently calls executor.shutdown() without waiting for termination and uses doneLatch.await() with no timeout; update the cleanup to (1) wait on doneLatch with a reasonable timeout (e.g., a few seconds) to avoid indefinite blocking if a task fails, (2) call executor.shutdown() then executor.awaitTermination(...) with a timeout, and (3) if awaitTermination times out, call executor.shutdownNow() to ensure tasks are stopped; refer to the local variables executor, doneLatch, readyLatch, and startLatch when adding these timeouts and shutdown fallback logic.
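The shutdown pattern recommended above (bounded latch wait, then `shutdown()` plus `awaitTermination()` with a `shutdownNow()` fallback, and a `finally` block so the latch counts down even on task failure) can be sketched in plain Java. This is a minimal stand-in, not the project's test code; the class and method names are hypothetical.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ShutdownDemo {

    // Runs `tasks` copies of `body` concurrently, waits with a timeout,
    // then shuts the pool down with an awaitTermination/shutdownNow fallback.
    static boolean runAll(int tasks, Runnable body) {
        ExecutorService executor = Executors.newFixedThreadPool(Math.min(tasks, 16));
        CountDownLatch doneLatch = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            executor.submit(() -> {
                try {
                    body.run();
                } finally {
                    doneLatch.countDown(); // counts down even if body throws
                }
            });
        }
        try {
            boolean completed = doneLatch.await(10, TimeUnit.SECONDS); // bounded wait
            executor.shutdown();
            if (!executor.awaitTermination(5, TimeUnit.SECONDS)) {
                executor.shutdownNow(); // fallback if tasks ignore shutdown
            }
            return completed;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            executor.shutdownNow();
            return false;
        }
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        boolean ok = runAll(100, calls::incrementAndGet);
        System.out.println(ok + " " + calls.get()); // prints "true 100"
    }
}
```

Because the `countDown()` sits in a `finally` block, a throwing task can no longer leave the test hanging, which is the failure mode the review calls out.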
🧹 Nitpick comments (16)
apps/commerce-api/src/test/java/com/loopers/ordering/order/infrastructure/acl/catalog/OrderProductReaderImplTest.java (2)
23-23: An unused import is present.

verify is imported but never used in the test code. According to the AI summary, a previously existing verify(productQueryFacade.findActiveByIds(productIds)) assertion was removed.

- Problem: unneeded imports trigger warnings from cleanup tools and static analysis, and hurt readability.
- Fix: remove the unused import, or restore the mock-call verification.

♻️ Suggestion: remove the unused import

```diff
-import static org.mockito.Mockito.verify;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/ordering/order/infrastructure/acl/catalog/OrderProductReaderImplTest.java` at line 23, The test OrderProductReaderImplTest imports Mockito.verify but no longer uses it; either remove the unused import or restore the verification. Locate the import of verify and either delete it, or reintroduce the mock assertion such as verifying productQueryFacade.findActiveByIds(productIds) (or the appropriate call on productQueryFacade with productIds) in the test methods to justify keeping verify.
42-74: Exception-flow and boundary-value tests are missing.

The current tests verify only the happy path (fetching 2 products) and the empty-list case. Tests are needed for the following scenarios that can occur in production:

- null input handling: whether a NullPointerException or appropriate error handling occurs when productIds is passed as null
- Partial results: verifying the returned order and contents when only some of the requested IDs exist (e.g., deleted products)
- Exception propagation: the behavior of OrderProductReaderImpl when productQueryFacade.findActiveByIds() throws

Without these tests, it is hard to predict system behavior under unexpected input or failures in production.

💡 Example additional test cases

```java
@Test
@DisplayName("[readProducts()] Only some IDs exist -> returns only the existing products")
void readProductsPartialMatch() {
    // Arrange
    List<Long> productIds = List.of(1L, 2L, 3L);
    Product p1 = Product.reconstruct(1L, 1L, ProductName.from("상품1"),
            Money.from(new BigDecimal("10000")), Stock.from(5L), null, null);
    given(productQueryFacade.findActiveByIds(productIds)).willReturn(List.of(p1));

    // Act
    List<OrderProductInfo> result = orderProductReaderImpl.readProducts(productIds);

    // Assert
    assertThat(result).hasSize(1);
    assertThat(result.get(0).productId()).isEqualTo(1L);
}

@Test
@DisplayName("[readProducts()] Facade throws -> exception propagates")
void readProductsThrowsException() {
    // Arrange
    List<Long> productIds = List.of(1L);
    given(productQueryFacade.findActiveByIds(productIds))
        .willThrow(new RuntimeException("조회 실패"));

    // Act & Assert
    assertThatThrownBy(() -> orderProductReaderImpl.readProducts(productIds))
        .isInstanceOf(RuntimeException.class);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/ordering/order/infrastructure/acl/catalog/OrderProductReaderImplTest.java` around lines 42 - 74, Add boundary and error-path tests for OrderProductReaderImpl.readProducts: add a test in OrderProductReaderImplTest that verifies null productIds handling (call orderProductReaderImpl.readProducts(null) and assert expected behavior—either assertThrows(NullPointerException.class) or that it returns empty list depending on implementation), a partial-match test where productQueryFacade.findActiveByIds(productIds) returns a subset and assert only those OrderProductInfo entries are returned (use Product.reconstruct to build returned products and assert sizes and ids), and an exception-propagation test that stubs productQueryFacade.findActiveByIds(...) to throw (e.g., new RuntimeException("조회 실패")) and assert that readProducts rethrows the exception using assertThatThrownBy; locate tests in OrderProductReaderImplTest and use the existing productQueryFacade mock and orderProductReaderImpl instance.
apps/commerce-api/src/test/java/com/loopers/engagement/productlike/infrastructure/acl/catalog/ProductLikeCountSyncerImplTest.java (1)
39-52: It is safer to also pin down the exception-propagation cases.

Currently only the happy-path delegation is verified, so if ProductLikeCountSyncerImpl swallows an exception thrown by ProductCommandFacade, or rewraps it as a different exception, the tests still pass. This is the write path for like-count synchronization, so a masked failure would surface late in production as ReadModel/cache inconsistency.

It is recommended to additionally verify, for both increaseLikeCount() and decreaseLikeCount(), that a facade exception propagates unchanged. A combination of willThrow(...) and assertThatThrownBy(...) is sufficient. As additional tests, one exception-propagation case per method, and ideally the expected behavior for a null product ID, should be specified.

Example

```diff
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.mockito.BDDMockito.*;
 import static org.mockito.Mockito.verify;
@@
 class IncreaseLikeCountTest {
@@
     void increaseLikeCountSuccess() {
         // Arrange
         Long productId = 42L;
@@
         assertThat(captor.getValue()).isEqualTo(productId);
     }
+
+    @Test
+    @DisplayName("[increaseLikeCount()] Provider facade throws -> same exception propagates")
+    void increaseLikeCountPropagatesException() {
+        Long productId = 42L;
+        RuntimeException exception = new RuntimeException("boom");
+        willThrow(exception).given(productCommandFacade).increaseLikeCount(productId);
+
+        assertThatThrownBy(() -> productLikeCountSyncerImpl.increaseLikeCount(productId))
+            .isSameAs(exception);
+    }
 }
@@
 class DecreaseLikeCountTest {
@@
     void decreaseLikeCountSuccess() {
         // Arrange
         Long productId = 42L;
@@
         assertThat(captor.getValue()).isEqualTo(productId);
     }
+
+    @Test
+    @DisplayName("[decreaseLikeCount()] Provider facade throws -> same exception propagates")
+    void decreaseLikeCountPropagatesException() {
+        Long productId = 42L;
+        RuntimeException exception = new RuntimeException("boom");
+        willThrow(exception).given(productCommandFacade).decreaseLikeCount(productId);
+
+        assertThatThrownBy(() -> productLikeCountSyncerImpl.decreaseLikeCount(productId))
+            .isSameAs(exception);
+    }
 }
```

As per coding guidelines, **/*Test*.java: check that unit tests cover boundary values, failure cases, and exception flows.

Also applies to: 61-74
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/engagement/productlike/infrastructure/acl/catalog/ProductLikeCountSyncerImplTest.java` around lines 39 - 52, Add tests to ProductLikeCountSyncerImplTest to verify that ProductLikeCountSyncerImpl propagates exceptions from ProductCommandFacade: for increaseLikeCount() and decreaseLikeCount(), stub productCommandFacade to willThrow(new RuntimeException("...")) when called and assertThatThrownBy(() -> productLikeCountSyncerImpl.increaseLikeCount(productId)) (and similarly for decreaseLikeCount) to confirm the same exception is propagated; optionally add a test for null productId to assert expected behavior. Ensure you reference the existing test methods increaseLikeCountSuccess/decreaseLikeCountSuccess and use ArgumentCaptor/verify patterns consistently.
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/IdListCacheEntry.java (1)
13-14: A missing defensive copy in the cache entry may violate immutability.

List<Long> ids exposes the externally supplied reference as-is. Cached data can be mutated unintentionally from outside, which can cause cache-consistency problems.

♻️ Apply a defensive copy to an immutable list

```diff
-public record IdListCacheEntry(List<Long> ids, long totalElements) {
+public record IdListCacheEntry(List<Long> ids, long totalElements) {
+    public IdListCacheEntry {
+        ids = ids != null ? List.copyOf(ids) : List.of();
+    }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/IdListCacheEntry.java` around lines 13 - 14, IdListCacheEntry currently exposes the incoming List<Long> reference via its ids component, risking external mutation; modify the record to perform a defensive copy in its canonical constructor (or factory) by replacing the incoming ids with an unmodifiable/immutable copy (e.g., List.copyOf(ids) or Collections.unmodifiableList(new ArrayList<>(ids))) so the record's ids field is immutable and cannot be changed by callers; ensure null-handling (requireNonNull) if desired and apply the same defensive-copy approach wherever IdListCacheEntry is constructed.
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/entity/ProductReadModelEntity.java (1)
104-106: Calling ZonedDateTime.now() directly in the factory method hurts testability.

Calling ZonedDateTime.now() directly in the of(Product, String) method makes time-related verification difficult in unit tests. Injecting a specific instant is hard, which limits testing of time-based logic.

♻️ Consider a Clock-injection pattern or a time parameter

When time must be controlled in tests, use the overloaded of(Product, String, ZonedDateTime, Long) method, or have the service layer inject a Clock and pass the time through.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/entity/ProductReadModelEntity.java` around lines 104 - 106, The factory method ProductReadModelEntity.of(Product, String) calls ZonedDateTime.now() directly which hinders time-controlled unit tests; change it to stop creating the timestamp internally and instead either (A) add an overload that accepts a Clock and call ZonedDateTime.now(clock) from the convenience method, or (B) remove the direct-now call and require callers to use the existing of(Product, String, ZonedDateTime, Long) so tests can pass a fixed ZonedDateTime; update the convenience method implementation of(Product, String) to delegate to the overload that accepts a Clock or to throw/redirect to the parametrized of(...) so all time creation is testable (refer to methods: of(Product, String) and of(Product, String, ZonedDateTime, Long)).
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductReadModelJpaRepository.java (1)
51-53: Consider that fetching active product IDs per brand may return a large result set.

findActiveIdsByBrandId returns the full ID list without paging. If a brand has many products (tens of thousands or more), this can strain memory. If the maximum product count per brand is bounded in production, the current implementation is sufficient, but it should be kept under monitoring.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductReadModelJpaRepository.java` around lines 51 - 53, The method findActiveIdsByBrandId in ProductReadModelJpaRepository returns all IDs for a brand without paging which can OOM for large brands; change it to a paginated or streaming API (e.g., accept a Pageable and return Page<Long>/Slice<Long>, or return Stream<Long> and use a read-only transactional streaming query with appropriate fetchSize) and update callers to iterate in chunks rather than loading the full list into memory; ensure the `@Query` remains ("SELECT e.id FROM ProductReadModelEntity e WHERE e.brandId = :brandId AND e.deletedAt IS NULL") but adjust the repository method signature to support pagination or streaming and add transactional boundaries where needed.
apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java (1)
65-68: A performance bottleneck can occur when a brand has many products.

All product IDs are fetched, then refreshProductDetailCache() is called for each one. If a brand has 1,000 products, 1,000 cache operations run synchronously inside the transaction.

Operational concerns:

- Long-running transactions increase DB connection hold time
- Timeout risk
- Delayed user responses

Fix: apply batch or asynchronous processing

```java
// Option 1: split off into an async event
@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void handleBrandNameChanged(BrandNameChangedEvent event) {
    List<Long> productIds = productQueryService.findActiveIdsByBrandId(event.brandId());
    productIds.forEach(productCommandService::refreshProductDetailCache);
}

// Option 2: batch cache invalidation (pattern-based deletion instead of per-item refresh)
productCacheManager.evictByBrandId(brandId);
```

Additional test: a response-time measurement test for updating a brand linked to many products (e.g., 500 or more) is recommended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java` around lines 65 - 68, The current loop in BrandCommandFacade that calls productCommandService.refreshProductDetailCache for every ID returned by productQueryService.findActiveIdsByBrandId causes long synchronous transactions for large brands; change this to avoid per-entity synchronous work inside the transaction by (a) moving the refresh logic out of the transactional path and publishing an async event after commit (use a TransactionalEventListener/after-commit handler that reads product IDs via productQueryService and then invokes refreshProductDetailCache asynchronously) or (b) replace per-product refresh with a batch cache invalidation (introduce a productCacheManager.evictByBrandId(brandId) or similar) so the method in BrandCommandFacade no longer iterates and performs heavy work inside the transaction. Ensure you update tests to cover large-brand scenarios and that productCommandService/eviction runs off the commit thread.
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/repository/ProductReadModelRepositoryImpl.java (1)
71-80: Verification of the results of updateStock() and updateBrandName() is missing.

Neither method checks the affected row count, so both fail silently when the target Read Model does not exist. In production, a missed Read Model sync would leave stock or brand-name inconsistencies undetected. Align with increaseLikeCount() for consistency, or if ignoring the result is intentional, at least add logging so it can be monitored.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/repository/ProductReadModelRepositoryImpl.java` around lines 71 - 80, updateStock(Long productId, Long newStock) and updateBrandName(Long brandId, String newBrandName) do not check the return value (affected row count) of the jpaRepository calls, so they fail silently when the target Read Model is missing; as with increaseLikeCount(), inspect the return values (affected rows) of jpaRepository.updateStock(...) and jpaRepository.updateBrandNameByBrandId(...) and, when 0, throw an exception or at least log a warning via processLogger/LOGGER - locate the methods updateStock, updateBrandName and the called jpaRepository.updateStock / jpaRepository.updateBrandNameByBrandId to make the change.
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java (1)
690-712: It is safer to pin the ordering contract of the multi-item detail lookup in a test.

Currently only the set of names is verified, so the test passes even if findProductCacheDtosByIds() loses the input ID order. In production, the sort/page order produced by searchProductIds() could diverge from the actual response order, exposing a wrongly ordered product list. Pass a reversed input such as List.of(p2.getId(), p1.getId()) and verify identical ordering with containsExactly, or if the service reorders the results, pin that contract in a separate test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java` around lines 690 - 712, The test ProductQueryPortImplTest.findProductCacheDtosByIdsSuccess currently only asserts the name set which allows the method productQueryPort.findProductCacheDtosByIds(...) to return results in any order; fix by asserting the expected input-to-output ordering contract: call findProductCacheDtosByIds with a reversed ID list (e.g., List.of(p2.getId(), p1.getId())) and replace the loose containsExactlyInAnyOrder assertion with a strict containsExactly (or otherwise assert the order of ProductCacheDto::name) so the method's ordering behavior is locked into the test; if the service is expected to reorder results (e.g., to match searchProductIds pagination), add a separate test describing that deterministic reordering contract and assert accordingly.
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImpl.java (1)
11-12: It is safer not to expose cache-specific types on the application port.

Returning infrastructure.cache types across the ProductQueryPort boundary means a change to the cache serialization DTOs propagates as an application contract change. In production this makes the blast radius of swapping the cache format, or adding cache-bypass/backfill paths, excessively large. It is safer to move IdListCacheEntry and ProductCacheDto to application-level query projections, or define port-specific DTOs, and add an architecture test preventing application.. from referencing infrastructure.. directly.

Also applies to: 52-69
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImpl.java` around lines 11 - 12, ProductQueryPort (and ProductQueryPortImpl) currently return and expose IdListCacheEntry and ProductCacheDto from infrastructure.cache, letting cache-specific types leak into the application contract; to fix, define cache-agnostic application-level DTOs/projections (e.g., ProductListProjection, ProductCacheProjection) as the return types of ProductQueryPort, map IdListCacheEntry and ProductCacheDto to these new DTOs in ProductQueryPortImpl, keep the existing cache DTOs in infrastructure.cache if needed with the mapping responsibility delegated to ProductQueryPortImpl (or a dedicated mapper class), and add an architecture test that blocks the application package from depending directly on infrastructure.cache.
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java (2)
193-307: searchProducts test coverage - boundary-value tests recommended

The MAX_CACHEABLE_PAGE=2 boundary is exercised only with page=2. Explicitly testing both sides of the boundary, page=1 (cacheable) and page=2 (not cacheable), and verifying that the cache is used for page=0 and page=1, would make the boundary condition clearer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java` around lines 193 - 307, Add boundary tests around MAX_CACHEABLE_PAGE for ProductQueryService.searchProducts: keep the existing page=2 test as the non-cacheable case, and add at least one test asserting cache usage when page=1 (and optionally page=0) to prove cache paths are taken. Specifically, in SearchProductsTest add a test that calls productQueryService.searchProducts(..., page=1, ...) and stubs/verifies productCacheManager.getOrLoad and productCacheManager.mgetProductDetails are invoked and productQueryPort.searchProducts is not called; mirror this for page=0 if desired. Reference ProductQueryService.searchProducts, MAX_CACHEABLE_PAGE, and the test class SearchProductsTest to locate where to add these new tests.
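The boundary the comment above describes can be isolated into a tiny predicate and pinned on both sides of the cut-off. This is a hypothetical sketch, not the project's actual code: the constant name mirrors MAX_CACHEABLE_PAGE from the review, and `isCacheable` is an illustrative helper.

```java
public class CacheablePageDemo {

    // Hypothetical cut-off mirroring the service constant: pages below
    // MAX_CACHEABLE_PAGE are served from cache, later pages go to the DB.
    static final int MAX_CACHEABLE_PAGE = 2;

    static boolean isCacheable(int page) {
        return page >= 0 && page < MAX_CACHEABLE_PAGE;
    }

    public static void main(String[] args) {
        // The boundary values worth pinning in tests: both sides of the cut-off.
        System.out.println(isCacheable(0)); // true  (first cacheable page)
        System.out.println(isCacheable(1)); // true  (last cacheable page)
        System.out.println(isCacheable(2)); // false (first uncached page)
    }
}
```

Testing exactly these three values (plus a negative page) is what makes a later change to the constant fail loudly instead of silently shifting the cache boundary.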
219-247: Full cache hit test - scope of the verifyNoInteractions check

verifyNoInteractions(productQueryPort) verifies the DB is not called on a full Layer 2 MGET hit. However, explicitly verifying that productQueryPort.findProductCacheDtosByIds() is also never called would make the test's intent clearer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java` around lines 219 - 247, The test currently asserts no interactions with productQueryPort via verifyNoInteractions(productQueryPort) but should also explicitly assert that productQueryPort.findProductCacheDtosByIds(...) was not called to clarify the DB-not-used intent; add a specific Mockito verification like verify(productQueryPort, never()).findProductCacheDtosByIds(any()) (or the equivalent never() check) alongside the existing verifyNoInteractions call, referencing the productQueryPort and findProductCacheDtosByIds symbols.
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java (1)
196-217: TTL jitter test - the accepted range is too wide

The base TTL is 10 minutes (600,000 ms), so with +10% jitter the maximum is 660,000 ms. The test, however, accepts anything from minMs - 1000 to maxMs + 1000, a far wider tolerance than the actual 0-10% jitter range. minMs is set to baseMs, which appears to disallow negative jitter; confirm whether that is the intended design.

```diff
-long minMs = baseMs;
-long maxMs = baseMs + (baseMs / 10);
+// if the jitter range is 0-10%
+long minMs = baseMs;
+long maxMs = baseMs + (baseMs / 10);
+// allow only the delay between setting the Redis TTL and calling getExpire
+assertThat(remainMs).isBetween(minMs - 100, maxMs);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java` around lines 196 - 217, The test in ProductCacheManagerTest (putSetsTtlWithJitter) allows too-wide TTL tolerance; tighten the assertion to the actual expected jitter range by setting minMs = baseMs (or baseMs - (baseMs/10) if negative jitter is intended) and maxMs = baseMs + (baseMs/10), then assert redisTemplate.getExpire(key, TimeUnit.MILLISECONDS) isBetween(minMs - smallEpsilon, maxMs + smallEpsilon) using a small epsilon (e.g., 50–100ms) instead of 1000ms; also confirm whether negative jitter is allowed and adjust minMs accordingly so productCacheManager.put and the test agree on jitter direction.
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java (1)
389-420: ID list cache refresh tests - fragility of call-count verification

Only call counts are verified, via times(12) and times(4). If a constant changes (MAX_CACHEABLE_PAGE, an added sort type, etc.) the tests break, and refreshes with the wrong keys would go undetected. Capture the call arguments to verify the correct cache keys are generated, or at least check that the key patterns of interest are present.

💡 Example verification using ArgumentCaptor

```java
@Test
void refreshIdListCacheForAllSortsSuccess() {
    // Act
    productCommandService.refreshIdListCacheForAllSorts(1L);

    // Assert
    ArgumentCaptor<String> keyCaptor = ArgumentCaptor.forClass(String.class);
    verify(productCacheManager, atLeast(1)).refreshIdList(keyCaptor.capture(), any());
    List<String> keys = keyCaptor.getAllValues();
    assertThat(keys).anyMatch(k -> k.contains("1:LATEST"));
    assertThat(keys).anyMatch(k -> k.contains("all:LATEST"));
    assertThat(keys).anyMatch(k -> k.contains("PRICE_ASC"));
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java` around lines 389 - 420, The tests currently only assert call counts (verify(... times(12/4))) which is brittle; update RefreshIdListCacheForAllSortsTest.refreshIdListCacheForAllSortSuccess and RefreshIdListCacheForSortTest.refreshIdListCacheForSortSuccess to capture the actual keys passed to productCacheManager.refreshIdList (use ArgumentCaptor for the key argument) and assert expected key patterns (e.g., contain "1:" for brandId, "all:" for global lists, and sort identifiers like "PRICE_ASC" or "LATEST") and that keys are produced for pages up to MAX_CACHEABLE_PAGE/expected pages instead of relying solely on times(). Ensure assertions reference refreshIdList cache key contents rather than only call counts.
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java (1)
85-106: Review the cache-evict ordering in the deleteProduct flow

Current order: delete product -> evict cache -> refresh ID lists -> clean up likes/cart.

The cache evict runs before the transaction commits, so on rollback the cache has already been cleared. Consider @TransactionalEventListener(phase = AFTER_COMMIT), or at minimum guarantee via a test that the DB fallback works on a cache miss.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java` around lines 85 - 106, The current deleteProduct flow in ProductCommandFacade calls productCommandService.deleteProductDetailCache, refreshIdListCacheForAllSorts, deleteAllProductLikes and deleteAllCartItems inside the transactional method, which can cause cache eviction and cross-BC side-effects to occur before the DB commit; change the implementation so cache evict and external side-effect calls are executed AFTER the transaction commits (e.g., publish an ApplicationEvent from deleteProduct and handle it in a separate listener annotated with `@TransactionalEventListener`(phase = AFTER_COMMIT), or move the calls into methods annotated with `@TransactionalEventListener`(phase = AFTER_COMMIT) that invoke productCommandService.deleteProductDetailCache, productCommandService.refreshIdListCacheForAllSorts(product.getBrandId()), productCommandService.deleteAllProductLikes(product.getId()), and productCommandService.deleteAllCartItems(product.getId())); ensure the deleteProduct method only performs the DB delete (productCommandService.deleteProduct) and event publication so cache/state changes happen post-commit.
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java (1)
102-103: Stampede test tolerance - justify the allowed race condition

assertThat(loaderCallCount.get()).isLessThanOrEqualTo(2) allows up to 2 loader calls. If the lock implementation is correct it should be 1, so either state the rationale for allowing 2 in a comment, or if the design expects waiters to see a cache hit after a lock-acquisition failure, document that design.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java` around lines 102 - 103, The test in CacheStampedeTest (around the assertion on loaderCallCount) currently allows up to 2 loader calls but has no explanation; update the test to either enforce the strict expectation (change assertion to == 1 on loaderCallCount in the test method) if the lock implementation guarantees a single load, or add a clear comment immediately above the assertion explaining why a second call is tolerated (e.g., permitted race window or intentional backoff/timeout behavior) and reference the related locking behavior (the test method and any helper like loaderCallCount or the cache/lock mock used) so reviewers understand the design choice.
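Why a correct per-key guard should yield exactly one loader call can be shown with an in-process stand-in. The sketch below uses `ConcurrentHashMap.computeIfAbsent` in place of the distributed CacheLock (an assumption, not the project's Redis-based implementation; all names are hypothetical): concurrent misses on the same key run the loader once, and everyone else receives the computed value.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class StampedeDemo {
    static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger loaderCalls = new AtomicInteger();

    static String getOrLoad(String key) {
        // computeIfAbsent locks per key: concurrent misses run the loader once.
        return cache.computeIfAbsent(key, k -> {
            loaderCalls.incrementAndGet();
            return "value-for-" + k;
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(32);
        List<Future<String>> futures = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            futures.add(pool.submit(() -> getOrLoad("product:detail:1")));
        }
        for (Future<String> f : futures) f.get(); // all 100 see the same value
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(loaderCalls.get()); // prints "1"
    }
}
```

A distributed lock cannot give this exactly-once guarantee across processes (lock expiry and retry windows exist), which is one defensible reason for the `<= 2` tolerance in the test; the point of the review is that the chosen bound should be stated, not implicit.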
...src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductApiPerformanceTest.java
```java
String[][] queries = {
    {"UC1: brandId=X, LATEST", baseSelect + "WHERE p.deleted_at IS NULL ORDER BY p.created_at DESC LIMIT 20"},
    {"UC2: brandId=X, PRICE_ASC", baseSelect + "WHERE p.deleted_at IS NULL ORDER BY p.price ASC LIMIT 20"},
    {"UC3: brandId=X, LIKES_DESC", baseSelect + "WHERE p.deleted_at IS NULL ORDER BY p.price DESC LIMIT 20"},
    {"UC4: brandId=1, LATEST", baseSelect + "WHERE p.deleted_at IS NULL AND p.brand_id = 1 ORDER BY p.created_at DESC LIMIT 20"},
    {"UC5: brandId=1, PRICE_ASC", baseSelect + "WHERE p.deleted_at IS NULL AND p.brand_id = 1 ORDER BY p.price ASC LIMIT 20"},
    {"UC6: brandId=1, LIKES_DESC", baseSelect + "WHERE p.deleted_at IS NULL AND p.brand_id = 1 ORDER BY p.price DESC LIMIT 20"},
};
```
The LIKES_DESC scenarios actually measure price descending

The AS-IS UC3/UC6 queries use ORDER BY p.price DESC, so the comparison target differs from the TO-BE like_count DESC. In this state, documenting the index improvements and the effect of introducing the Read Model cannot support operational decisions, so it is safer to either switch to the actual like-count sort SQL or, at minimum, rename the scenarios to PRICE_DESC and exclude them from the comparison table. Adding a small test or static check that each query label matches its ORDER BY column would help. As per coding guidelines, **/*Test*.java: check integration tests for isolation level, flakiness risk, and test-data setup/teardown strategy.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductIndexPerformanceTest.java`
around lines 173 - 180, In ProductIndexPerformanceTest (the queries[][]
definition), UC3 and UC6 are mislabeled as LIKES_DESC while their SQL uses ORDER
BY p.price DESC; update those entries to either use ORDER BY p.like_count DESC
to match the LIKES_DESC label or rename the labels to PRICE_DESC so the scenario
name and ORDER BY column match; additionally add a small assertion or static
check inside the test (e.g., near the queries array initialization) that
verifies each query label contains the column used in its ORDER BY (reference
the queries variable, UC3/UC6, and baseSelect) to prevent future divergence.
```java
// 4. Increase like count (Read Model atomic counter + detail cache write-through)
@Transactional
public void increaseLikeCount(Long productId) {
    productCommandRepository.increaseLikeCount(productId);

    // Increase the Read Model like count (the likes table is the SoT; the Read Model is the only projection)
    readModelRepository.increaseLikeCount(productId);

    // Detail cache write-through (ID lists expire naturally via TTL: high-frequency trigger optimization)
    productCacheManager.refreshProductDetail(productId, () -> productQueryPort.findProductCacheDtoById(productId));
}

// 5. Decrease like count (Read Model atomic counter + detail cache write-through)
@Transactional
public void decreaseLikeCount(Long productId) {
    productCommandRepository.decreaseLikeCount(productId);

    // Decrease the Read Model like count (the likes table is the SoT; the Read Model is the only projection)
    readModelRepository.decreaseLikeCount(productId);

    // Detail cache write-through (ID lists expire naturally via TTL: high-frequency trigger optimization)
    productCacheManager.refreshProductDetail(productId, () -> productQueryPort.findProductCacheDtoById(productId));
}
```
The LIKES_DESC ID list cache is not refreshed on like changes

Like count is a sort key, so refreshing only the detail cache means the popularity-sorted list and the detail likeCount can appear in different states for the duration of the TTL in production. It is safer for increaseLikeCount() and decreaseLikeCount() to look up the product's brandId and then invalidate or regenerate the brandId/all cache pages for LIKES_DESC only. An integration test that flips the like counts of two products and verifies the ordering of /api/v1/products?sort=LIKES_DESC&page=0&size=20 changes immediately would be worthwhile. As per coding guidelines, **/*Service*.java: check that responsibilities are organized per use case and that side effects are explicit.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java`
around lines 127 - 147, increaseLikeCount and decreaseLikeCount only refresh the
product detail cache, so the LIKES_DESC list cache remains stale; modify both
methods to fetch the product's brandId (via
productQueryPort.findProductCacheDtoById or a dedicated query) and, if sort ==
LIKES_DESC, invalidate or refresh the list cache pages for that brand and for
"all" (use the same cache manager that handles list pages), ensuring you call
this after updating the read model and before/after refreshProductDetail as
appropriate; add an integration test that likes/unlikes two products to assert
the /api/v1/products?sort=LIKES_DESC&page=0&size=20 ordering updates
immediately.
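The eviction this comment asks for can be sketched without Spring or Redis. The key format and the evictLikesDescPages helper below are illustrative assumptions, not code from the PR; the point is only that a like-count change removes the LIKES_DESC pages for the product's brand and for the "all" scope while leaving other sorts untouched:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class Main {
    // In-memory stand-in for the Redis list cache; keys mimic "product:list:{sort}:{scope}:{page}"
    static final Map<String, String> listCache = new ConcurrentHashMap<>();

    // Hypothetical helper: evict every LIKES_DESC page for the given brand and for the global scope
    static void evictLikesDescPages(long brandId, int maxPages) {
        for (int page = 0; page < maxPages; page++) {
            listCache.remove("product:list:LIKES_DESC:brand:" + brandId + ":" + page);
            listCache.remove("product:list:LIKES_DESC:all:" + page);
        }
    }

    public static void main(String[] args) {
        listCache.put("product:list:LIKES_DESC:brand:7:0", "[1,2,3]");
        listCache.put("product:list:LIKES_DESC:all:0", "[1,2,3]");
        listCache.put("product:list:PRICE_ASC:all:0", "[3,2,1]"); // unrelated sort must survive

        // Called from increase/decreaseLikeCount after the read-model update
        evictLikesDescPages(7L, 3);

        if (listCache.containsKey("product:list:LIKES_DESC:brand:7:0")) throw new AssertionError();
        if (listCache.containsKey("product:list:LIKES_DESC:all:0")) throw new AssertionError();
        if (!listCache.containsKey("product:list:PRICE_ASC:all:0")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Scoping the eviction to LIKES_DESC keeps the high-frequency like path from churning the cache pages of sorts that do not depend on the like count.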
```java
    productCacheManager.refreshProductDetail(productId, () -> productQueryPort.findProductCacheDtoById(productId));
}

// 5. Decrease like count (atomic counter on the Read Model + detail-cache write-through)
@Transactional
public void decreaseLikeCount(Long productId) {
    productCommandRepository.decreaseLikeCount(productId);

    // Decrease the Read Model like count (the likes table is the SoT; the Read Model is the only projection)
    readModelRepository.decreaseLikeCount(productId);

    // Write-through to the detail cache (ID lists expire naturally via TTL: an optimization for high-frequency triggers)
    productCacheManager.refreshProductDetail(productId, () -> productQueryPort.findProductCacheDtoById(productId));
}
```
Redis updates are safer deferred until after commit

Right now the like/stock changes and the Facade-facing cache-refresh methods update Redis first, inside the transaction, so a subsequent rollback or commit failure leaves a ghost state that exists only in the cache. It is safer to move refreshProductDetail, deleteProductDetail, and refreshIdList to after-commit execution via TransactionSynchronizationManager.registerSynchronization(...) or @TransactionalEventListener(phase = AFTER_COMMIT). It is also worth adding an integration test asserting that Redis values do not change on rollback. As per coding guidelines, **/*Service*.java: check @Transactional boundary placement, propagation, readOnly, and rollback conditions.
Also applies to: 168-169, 201-241
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java`
around lines 135 - 147, The cache refresh calls (e.g.,
productCacheManager.refreshProductDetail(...) inside decreaseLikeCount and other
Service methods) are executed inside the `@Transactional` boundary and must be
deferred to after successful commit; change these to run in an after-commit hook
by either registering a TransactionSynchronization
(TransactionSynchronizationManager.registerSynchronization(...)) or publishing a
transactional event and handling it with `@TransactionalEventListener`(phase =
AFTER_COMMIT) so refreshProductDetail, deleteProductDetail and refreshIdList
execute only after commit; update decreaseLikeCount and the other affected
Service methods to remove direct Redis updates and instead trigger the
after-commit action, and add an integration test that performs a rollback path
and asserts Redis is unchanged.
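The commit/rollback semantics this comment relies on can be illustrated with a minimal stand-in for Spring's synchronization mechanism. The registerAfterCommit queue below is a hypothetical simulation, not Spring's API; in the real fix the same role is played by TransactionSynchronizationManager.registerSynchronization(...) or @TransactionalEventListener(phase = AFTER_COMMIT):

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Work queued during the transaction; it runs only if the transaction commits.
    static final List<Runnable> afterCommit = new ArrayList<>();
    static final List<String> cacheOps = new ArrayList<>();

    static void registerAfterCommit(Runnable r) {
        afterCommit.add(r);
    }

    static void runTransaction(boolean commit) {
        afterCommit.clear();
        // Inside the transaction: DB writes happen here; the cache refresh is only registered.
        registerAfterCommit(() -> cacheOps.add("refreshProductDetail"));
        if (commit) {
            afterCommit.forEach(Runnable::run); // AFTER_COMMIT phase
        }
        // On rollback the queued work is discarded, so the cache never sees the ghost state.
    }

    public static void main(String[] args) {
        runTransaction(false);
        if (!cacheOps.isEmpty()) throw new AssertionError("rollback must not touch the cache");
        runTransaction(true);
        if (cacheOps.size() != 1) throw new AssertionError("commit must run the refresh exactly once");
        System.out.println("ok");
    }
}
```

The design consequence is that cache refreshes become best-effort side effects of a committed transaction instead of participants in it, which is exactly what the review asks for.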
...-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java
```java
// PER: check the remaining TTL; when expiry is imminent, refresh asynchronously with some probability
if (shouldEarlyRefresh(key, ttl)) {
    CompletableFuture.runAsync(() -> {
        try {
            T fresh = loader.get();
            put(key, fresh, ttl);
        } catch (Exception e) {
            log.warn("PER async refresh failed. key={}", key, e);
        }
    });
}
```
The PER path lacks per-key concurrency control

Because every hit in the TTL tail calls CompletableFuture.runAsync() and invokes the loader immediately, duplicate refreshes for a hot key pile up in the common pool all at once, and the stampede protection is defeated again. It is safer to run the async refresh only once per key via cacheLock, or to hand it to a dedicated bounded executor that caps duplicate loads. It is also worth adding a concurrency test asserting that 100 simultaneous requests for the same key result in exactly one loader call.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java`
around lines 168 - 177, The PER async-refresh path launches unbounded
CompletableFuture.runAsync calls on each near-expiry hit, allowing duplicate
loader.get() invocations for hot keys; change the async refresh to first acquire
the existing cacheLock for the specific key (reuse the same lock used by
put/get) or submit the refresh task to a bounded single-threaded executor per
key so only one loader.get() runs per key at a time, and ensure exceptions still
log as before; update shouldEarlyRefresh usage so that when it returns true the
async branch attempts to obtain the key-level lock (or enqueue to the bounded
executor) before calling loader.get() and put(key, fresh, ttl); add a
concurrency test that fires ~100 simultaneous requests for the same key and
asserts loader was invoked once to validate the fix.
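The per-key dedup the review suggests can be sketched with an in-flight set: a near-expiry hit schedules a refresh only when it is the first to mark the key in flight. The names here (earlyRefresh, the loader-call counter) are illustrative assumptions, not code from the PR:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class Main {
    static final AtomicInteger loaderCalls = new AtomicInteger();      // stands in for loader.get()
    static final Set<String> inFlight = ConcurrentHashMap.newKeySet(); // keys with a refresh already running
    static final ExecutorService refreshPool = Executors.newFixedThreadPool(2);
    static final CountDownLatch release = new CountDownLatch(1);       // holds the refresh open for the demo

    // Called on every near-expiry cache hit; only the first caller per key schedules a refresh
    static void earlyRefresh(String key) {
        if (!inFlight.add(key)) {
            return; // a refresh for this key is already in flight
        }
        refreshPool.submit(() -> {
            try {
                loaderCalls.incrementAndGet(); // real code would run loader.get() and put(key, fresh, ttl)
                release.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                inFlight.remove(key);
            }
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService hits = Executors.newFixedThreadPool(16);
        for (int i = 0; i < 100; i++) {
            hits.submit(() -> earlyRefresh("product:detail:1"));
        }
        hits.shutdown();
        if (!hits.awaitTermination(5, TimeUnit.SECONDS)) throw new AssertionError("hits did not finish");
        release.countDown();
        refreshPool.shutdown();
        if (!refreshPool.awaitTermination(5, TimeUnit.SECONDS)) throw new AssertionError("refresh did not finish");
        if (loaderCalls.get() != 1) throw new AssertionError("expected 1 refresh, got " + loaderCalls.get());
        System.out.println("ok");
    }
}
```

The same guard could equally be the existing cacheLock; the essential property is that the check-and-mark step is atomic per key.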
```java
public <T> T executeWithLock(String key, Supplier<T> loader) {
    // Build the lock key
    String lockKey = key + ":lock";

    // Try to acquire the lock with SETNX (TTL 5s)
    Boolean acquired = redisTemplate.opsForValue()
            .setIfAbsent(lockKey, "1", LOCK_TTL);

    try {
        if (Boolean.TRUE.equals(acquired)) {
            // Lock acquired: run the loader
            return loader.get();
        } else {
            // Lock acquisition failed: wait, then retry (hoping for a cache hit)
            Thread.sleep(WAIT_MILLIS);
            return loader.get();
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return loader.get();
    } finally {
        // Release only if this call acquired the lock
        if (Boolean.TRUE.equals(acquired)) {
            redisTemplate.delete(lockKey);
        }
    }
}
```
Distributed-lock implementation: missing ownership check and insufficient wait

There are the following problems:

- Missing lock-ownership check: deleting lockKey in finally can remove a lock acquired by another thread after the 5-second TTL expired. Store a unique identifier (UUID) as the value and confirm the lock is your own before deleting.
- Insufficient wait: the code waits WAIT_MILLIS = 50ms and then re-runs the loader, so if the loader takes longer than 50ms you get a duplicate loader execution instead of a cache hit.
- No retries: on acquisition failure the code waits once and runs the loader immediately, so the stampede protection is limited.

🔒 Adding lock-ownership verification
```diff
+private static final String LOCK_VALUE_PREFIX = UUID.randomUUID().toString();

 @Override
 public <T> T executeWithLock(String key, Supplier<T> loader) {
     String lockKey = key + ":lock";
+    String lockValue = LOCK_VALUE_PREFIX + ":" + Thread.currentThread().getId();
     Boolean acquired = redisTemplate.opsForValue()
-            .setIfAbsent(lockKey, "1", LOCK_TTL);
+            .setIfAbsent(lockKey, lockValue, LOCK_TTL);
     try {
         // ... existing code ...
     } finally {
         if (Boolean.TRUE.equals(acquired)) {
-            redisTemplate.delete(lockKey);
+            // Delete only when this thread still owns the lock
+            String currentValue = redisTemplate.opsForValue().get(lockKey);
+            if (lockValue.equals(currentValue)) {
+                redisTemplate.delete(lockKey);
+            }
         }
     }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/RedisCacheLock.java`
around lines 42 - 69, In executeWithLock, stop using a plain "1" value and a
single short wait: generate a unique owner id (UUID) per invocation, use
redisTemplate.opsForValue().setIfAbsent(lockKey, ownerId, LOCK_TTL) to acquire,
and in finally release only if the stored value equals ownerId by performing an
atomic compare-and-delete (use a Redis Lua script or equivalent via
redisTemplate.execute to check value == ownerId then DEL). Replace the single
Thread.sleep(WAIT_MILLIS) + immediate loader call with a controlled retry loop
(e.g., exponential backoff or fixed retries) that polls until a max wait or
attempts are reached before calling loader to avoid duplicate work; ensure
InterruptedException handling preserves interrupt status and falls back
appropriately. Also reference LOCK_TTL, WAIT_MILLIS, lockKey, redisTemplate, and
executeWithLock when making these changes.
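The ownership rule can be simulated in plain Java with a map standing in for Redis: acquire stores a per-call token (SETNX-style), and release deletes only when the stored token still matches. In the real RedisCacheLock this compare-and-delete must be a single atomic Lua script executed via RedisTemplate, because a separate GET followed by DELETE reopens the race; the map here is only a stand-in for those semantics:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class Main {
    // Stand-in for the Redis key space; in production the compare-and-delete below
    // is one Lua script: if GET(key) == token then DEL(key).
    static final ConcurrentHashMap<String, String> redis = new ConcurrentHashMap<>();

    // SETNX-style acquire: returns the owner token on success, null on failure
    static String tryAcquire(String lockKey) {
        String token = UUID.randomUUID().toString();
        return redis.putIfAbsent(lockKey, token) == null ? token : null;
    }

    // Atomic compare-and-delete: only the current owner may remove the lock
    static boolean release(String lockKey, String token) {
        return redis.remove(lockKey, token);
    }

    public static void main(String[] args) {
        String key = "product:detail:1:lock";
        String t1 = tryAcquire(key);
        if (t1 == null) throw new AssertionError("first acquire must succeed");

        // Simulate TTL expiry followed by re-acquisition by another worker
        redis.remove(key);
        String t2 = tryAcquire(key);
        if (t2 == null) throw new AssertionError("re-acquire after expiry must succeed");

        if (release(key, t1)) throw new AssertionError("stale owner must not delete the new lock");
        if (!release(key, t2)) throw new AssertionError("current owner must be able to release");
        System.out.println("ok");
    }
}
```

The stale-owner case in main is exactly the TTL-boundary scenario the review warns about: the original "1" value makes every caller look like the owner, while a token makes the stale delete a no-op.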
```java
} else {
    // Lock acquisition failed: wait, then retry (hoping for a cache hit)
    Thread.sleep(WAIT_MILLIS);
    return loader.get();
```
Re-running the loader on lock-acquisition failure contradicts the stampede-protection intent

When lock acquisition fails, the code waits 50ms and then calls loader.get() again. That does not match the stampede-protection goal: after the wait it should re-check the cache, and only run the loader if it is still a miss.

In the current implementation every thread may execute the loader regardless of whether it acquired the lock, so the stampede protection is limited.

🔧 Improved wait-then-recheck logic

```diff
 } else {
     // Lock acquisition failed: wait, then retry (hoping for a cache hit)
     Thread.sleep(WAIT_MILLIS);
-    return loader.get();
+    // The post-wait cache re-check is handled by the upper layer (ProductCacheManager);
+    // return null here to induce a cache re-check
+    return null;
 }
```

Alternatively, clarify the CacheLock interface contract so that the behavior on lock-acquisition failure is defined.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/RedisCacheLock.java`
around lines 55 - 58, In RedisCacheLock (the method that currently sleeps then
calls loader.get() on lock miss), change the behavior so after
Thread.sleep(WAIT_MILLIS) you re-check the cache for the value and only call
loader.get() if the cache is still a miss; do not unconditionally run
loader.get() after the sleep. Concretely, in the method inside class
RedisCacheLock (the block handling "락 획득 실패"), replace the direct return
loader.get() with logic that reads the cached value again (using the same cache
lookup used before acquiring the lock) and only invokes loader.get() when that
re-check returns null/miss; optionally wrap this in a retry loop or respect your
CacheLock contract to avoid multiple threads invoking loader concurrently.
Ensure the change references the existing loader.get() call and the cache lookup
used earlier so behavior and locking semantics remain consistent.
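The wait-then-recheck contract described above can be sketched as a small polling loop: after each wait the cache is consulted again, and the loader runs only when the entry is still missing once the retries are exhausted. The awaitCacheOrLoad helper and the map-backed cache are illustrative assumptions, not code from the PR:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class Main {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger loaderCalls = new AtomicInteger();

    // Lock-miss path: poll the cache a few times before falling back to the loader
    static String awaitCacheOrLoad(String key, Supplier<String> loader, int retries, long waitMillis)
            throws InterruptedException {
        for (int i = 0; i < retries; i++) {
            Thread.sleep(waitMillis);
            String cached = cache.get(key);
            if (cached != null) {
                return cached; // the lock holder populated the cache while we waited
            }
        }
        return loader.get(); // still a miss after the max wait: load as a last resort
    }

    public static void main(String[] args) throws Exception {
        Supplier<String> loader = () -> {
            loaderCalls.incrementAndGet();
            return "fromDb";
        };

        // Case 1: the lock holder writes the cache before we poll; the loader must not run
        Thread holder = new Thread(() -> cache.put("k1", "fromHolder"));
        holder.start();
        holder.join();
        String v1 = awaitCacheOrLoad("k1", loader, 3, 10);
        if (!"fromHolder".equals(v1) || loaderCalls.get() != 0) throw new AssertionError();

        // Case 2: nobody fills the cache; fall back to the loader exactly once
        String v2 = awaitCacheOrLoad("k2", loader, 2, 10);
        if (!"fromDb".equals(v2) || loaderCalls.get() != 1) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Whether the re-check lives in RedisCacheLock or in ProductCacheManager is the contract question the review raises; the loop above only fixes the ordering of "wait, re-check, then load".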
```java
@Table(name = "order_items", indexes = {
    // Order-item lookups: WHERE order_id = ? / WHERE order_id IN (?)
    @Index(name = "idx_order_items_order", columnList = "order_id")
})
```
A single order_id index does not adequately serve the soft-delete condition

In production, order-item lookups mostly run as deleted_at IS NULL AND order_id ..., so indexing only order_id means the post-scan filtering cost grows as soft-deleted rows accumulate. Since this PR's goal is to shrink scan ranges with deleted_at-leading composite indexes, it is safer to align this entity with the same strategy.

Suggested fix

```diff
 @Table(name = "order_items", indexes = {
     // Order-item lookups: WHERE order_id = ? / WHERE order_id IN (?)
-    @Index(name = "idx_order_items_order", columnList = "order_id")
+    @Index(name = "idx_order_items_deleted_order", columnList = "deleted_at, order_id")
 })
```

As an additional test, add a regression performance test that, on a dataset mixed with soft-deleted rows, verifies that EXPLAIN for the representative query selects the new index and that the scanned rows actually decrease.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```java
@Table(name = "order_items", indexes = {
    // Order-item lookups: WHERE order_id = ? / WHERE order_id IN (?)
    @Index(name = "idx_order_items_deleted_order", columnList = "deleted_at, order_id")
})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/ordering/order/infrastructure/entity/OrderItemEntity.java`
around lines 22 - 25, The current `@Table` index on OrderItemEntity only indexes
order_id which doesn't serve the common query pattern filtering by deleted_at;
update the `@Table`(indexes=...) for OrderItemEntity to add a leading composite
index on deleted_at and order_id (e.g., replace or add an `@Index` named like
idx_order_items_deleted_at_order with columnList = "deleted_at, order_id") so
queries with "deleted_at IS NULL AND order_id = ?" can use the index; also add a
regression performance test that seeds rows including soft-deleted items and
asserts the representative SELECT's EXPLAIN chooses the new index and reports
fewer scanned rows.
Actionable comments posted: 8
♻️ Duplicate comments (5)
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/LocalCacheLock.java (2)
33-42: ⚠️ Potential issue | 🔴 Critical

The current locks.remove(key) lets the loader for the same key run in parallel again. Right after T1 removes the entry in finally, T3 can create a new lock before the waiting T2 inherits the existing one, so the loader for the same key runs two or more times concurrently. In production this breaks the cache-stampede protection and spikes duplicate DB reads and unnecessary cache writes, so manage each key's lock entry with a reference count and remove it only after the last waiter has left. Add a concurrency regression test for "T1 runs -> T2 waits -> T3 enters right after T1 finishes" asserting the loader call count stays at 1.

Suggested fix
```diff
-import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicInteger;
 ...
-    private final ConcurrentHashMap<String, Object> locks = new ConcurrentHashMap<>();
+    private final ConcurrentHashMap<String, LockEntry> locks = new ConcurrentHashMap<>();
+
+    private static final class LockEntry {
+        private final Object monitor = new Object();
+        private final AtomicInteger refCount = new AtomicInteger(1);
+    }
 ...
-        Object lock = locks.computeIfAbsent(key, k -> new Object());
+        LockEntry entry = locks.compute(key, (k, existing) -> {
+            if (existing == null) {
+                return new LockEntry();
+            }
+            existing.refCount.incrementAndGet();
+            return existing;
+        });
-        synchronized (lock) {
+        synchronized (entry.monitor) {
             try {
                 return loader.get();
             } finally {
-                locks.remove(key);
+                locks.computeIfPresent(key, (k, existing) -> {
+                    if (existing != entry) {
+                        return existing;
+                    }
+                    return entry.refCount.decrementAndGet() == 0 ? null : entry;
+                });
             }
         }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/LocalCacheLock.java` around lines 33 - 42, because of the current locks.remove(key) call in LocalCacheLock, right after T1 finishes, T3 can create a new lock before the waiting T2 inherits the existing one, so the loader can run more than once; replace the map value (Object) with a lock entry carrying a reference count (e.g. class LockEntry { Object monitor; int count; }), increment the reference count (+1) whenever the entry is created or fetched in computeIfAbsent, and after the synchronized(lockEntry.monitor) section completes, decrement the count (-1) and remove the entry from locks only when the count reaches 0; identifiers involved: LocalCacheLock, locks, computeIfAbsent, loader, locks.remove. Also add a concurrency regression test verifying the loader is called exactly once in the "T1 runs -> T2 waits -> T3 enters right after T1 finishes" scenario.
17-18: ⚠️ Potential issue | 🟠 Major

Prevent LocalCacheLock from being selected as the production default bean. As it stands, the JVM-local lock is injected by default even in a multi-instance environment, giving zero cross-node cache-stampede protection. Restrict LocalCacheLock to the local/test profiles and reconfigure the conditional bean or @Primary so that RedisCacheLock is the default in shared environments. As an additional test, add an integration test verifying that LocalCacheLock is injected under the local profile and RedisCacheLock under the dev/qa/prod profiles.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/LocalCacheLock.java` around lines 17 - 18, the problem is that LocalCacheLock is currently registered as the default bean (@Primary), so the JVM-local lock is injected by default in multi-instance environments; fix the removed @Primary/@Component configuration, restrict the LocalCacheLock class to local/test use only (@Profile({"local","test"}) or an equivalent condition), then make RedisCacheLock the default bean selected in production/shared environments (e.g. apply @Primary to RedisCacheLock, or @Profile("!local & !test"), or a @ConditionalOnMissingBean strategy); also add an integration test verifying that LocalCacheLock is injected under the local profile and RedisCacheLock under dev/qa/prod (or non-local/test).
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java (2)
109-117:⚠️ Potential issue | 🟠 Major
put() must not store a null payload as-is. In production, when a lookup for a non-existent product leaves an ambiguous entry in the cache, the read path cannot consume it as a normal hit, and every request loops back to the DB/ReadModel.
It is safer for put() either to skip storing null, or to introduce an explicit negative-cache sentinel and make get() interpret that sentinel consistently. Additionally, a test is needed verifying that repeated lookups of the same missing product limit loader calls to one.

```bash
#!/bin/bash
set -euo pipefail
file='apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java'
echo '--- put / getOrLoad / getOrLoadWithPer ---'
sed -n '108,196p' "$file"
echo
echo '--- null handling around cache writes ---'
rg -n 'writeValueAsString|Optional\.of\(|put\(key, value|loader\.get\(\)' "$file"
echo
echo '--- tests related to cache miss / non-existent product handling ---'
rg -n 'getOrLoad|negative|not.?found|null|miss' apps/commerce-api/src/test/java
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java` around lines 109 - 117, put() currently serializes and writes null payloads which allows ambiguous cache entries; change ProductCacheManager.put(String key, Object value, Duration ttl) to skip storing null values (or instead store a defined negative-cache sentinel constant and document it) and ensure get()/getOrLoad()/getOrLoadWithPer() consistently interpret that sentinel as a cache miss; update logic around objectMapper.writeValueAsString(value) and writeTemplate.opsForValue().set(...) so nulls are not written (or the sentinel is written) and reuse applyJitter as-is; add a unit/integration test that loads a missing product and asserts the loader is invoked only once on repeated lookups to validate negative-cache behavior.
168-177: ⚠️ Potential issue | 🟠 Major

The PER async refresh also needs per-key concurrency control. Currently every near-expiry hit has CompletableFuture.runAsync() invoke loader.get() immediately, so refresh tasks for hot keys pile up in the common pool, and the stampede protection installed on the normal miss path is defeated again in the near-expiry window. It is safer to reuse the existing cacheLock on this path too, or to serialize via a per-key bounded executor, so that only one refresh runs per key. Additionally, a concurrency test is needed verifying that 100 simultaneous hits on the same key limit loader calls to one.

```bash
#!/bin/bash
set -euo pipefail
file='apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java'
echo '--- PER refresh branch ---'
sed -n '161,177p' "$file"
echo
echo '--- async refresh and locking references ---'
rg -n 'runAsync|cacheLock|shouldEarlyRefresh|getOrLoadWithPer' \
  "$file" \
  apps/commerce-api/src/test/java
echo
echo '--- concurrency-oriented cache tests ---'
rg -n 'stampede|concurrent|parallel|PER|early refresh' apps/commerce-api/src/test/java
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java` around lines 168 - 177, The PER early-refresh branch calls CompletableFuture.runAsync(...) which can queue many concurrent loader.get() tasks for the same hot key; change this to serialize per-key refreshes by reusing the existing cacheLock (or a key-specific bounded executor) so that shouldEarlyRefresh(...) triggers at most one concurrent refresh for a given key: wrap the async refresh body so it first acquires the same lock used by getOrLoadWithPer/cacheLock before invoking loader.get() and put(...), or submit to a per-key single-threaded executor; also add a concurrency test that fires ~100 concurrent hits at shouldEarlyRefresh and verifies loader.get() is invoked only once.apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/RedisCacheLock.java (1)
48-66: ⚠️ Potential issue | 🟠 Major

Fix both the ownerless lock release and the direct loader execution after waiting. In production the two problems hit together. First, calling delete(lockKey) after TTL expiry can remove a lock that another worker has since re-acquired. Second, the lock-miss path waits 50ms and then runs loader.get() directly, duplicating DB/ReadModel load on hot keys. It is safer to store a per-invocation owner token as the lock value, release it atomically with a Lua compare-and-delete, and have the lock-miss path execute the loader only after re-checking the cache inside a retry loop. Additional tests are needed verifying that concurrent requests on the same key limit loader calls to one, and that at the TTL boundary a previous owner cannot release a new owner's lock.

```bash
#!/bin/bash
set -euo pipefail
file='apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/RedisCacheLock.java'
echo '--- executeWithLock body ---'
sed -n '42,69p' "$file"
echo
echo '--- lock operations in current implementation ---'
rg -n 'setIfAbsent|Thread\.sleep|loader\.get\(|delete\(lockKey\)' "$file"
echo
echo '--- related lock ownership / retry handling in codebase ---'
rg -n 'LuaScript|compare.?delete|ownerId|executeWithLock|RedisCacheLock|CacheStampede' \
  apps/commerce-api/src/main/java \
  apps/commerce-api/src/test/java
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/RedisCacheLock.java` around lines 48 - 66, The executeWithLock implementation in RedisCacheLock currently uses redisTemplate.opsForValue().setIfAbsent(lockKey, "1", LOCK_TTL) and unconditionally calls redisTemplate.delete(lockKey) and immediately runs loader.get() after a fixed Thread.sleep(WAIT_MILLIS), which can delete another owner’s lock and causes duplicate loader invocations under contention; change the lock value to a unique owner token per call (e.g. UUID), store that token in setIfAbsent, on release replace redisTemplate.delete(lockKey) with an atomic compare-and-delete via a Lua script that deletes only if the stored token matches, and replace the single sleep+loader path with a retry loop that rechecks the cache (and only calls loader.get() when cache still misses and after acquiring the lock) to ensure loader is invoked at most once; update RedisCacheLock/executeWithLock and add tests to assert single loader invocation under concurrent requests and that a TTL-boundary owner cannot release a newly acquired lock.
🧹 Nitpick comments (15)
apps/commerce-api/src/test/java/com/loopers/engagement/productlike/infrastructure/acl/catalog/ProductLikeCountSyncerImplTest.java (1)
39-52: Missing boundary-value and failure-case tests

The test only verifies simple delegation; boundary-value, failure-case, and exception-flow tests are missing.

Why this matters operationally:

- If a null productId is passed, either an NPE or a downstream error can occur depending on whether the syncer has guard logic.
- When the facade throws, the syncer's exception-propagation behavior is unverified.

Recommended additional tests:

- Behavior when a null productId is passed
- Exception propagation when productCommandFacade throws

Also, using ArgumentCaptor is overkill for verifying simple delegation. It can be simplified as follows:

verify(productCommandFacade).increaseLikeCount(productId);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/engagement/productlike/infrastructure/acl/catalog/ProductLikeCountSyncerImplTest.java` around lines 39 - 52, the current increaseLikeCountSuccess() test only verifies happy-path delegation, so add null-input and exception-propagation boundary cases: add new tests to ProductLikeCountSyncerImplTest that (1) verify the expected behavior when productLikeCountSyncerImpl.increaseLikeCount(null) is called (e.g. NPE thrown, or ignored/validated), and (2) verify that when productCommandFacade.increaseLikeCount(...) throws, productLikeCountSyncerImpl correctly propagates or wraps the exception; also simplify the existing success-case verification from ArgumentCaptor to verify(productCommandFacade).increaseLikeCount(productId) to make the delegated-argument check explicit.
77-90: Guarantee a truly simultaneous start and cleanup on failure together

The current structure leaves execution to task-submission order, so the test can pass even when some calls finish before others start, missing race regressions that only appear in production. Also, if future.get(...) throws or times out, the thread pool is never cleaned up, which can destabilize later integration tests. It is safer to force a simultaneous start with a CountDownLatch or CyclicBarrier and to guarantee shutdownNow() and awaitTermination() in try/finally. As an additional test, run the start-gated version repeatedly with a higher thread count to confirm the race is reproducible.

Suggested fix

```diff
+        CountDownLatch ready = new CountDownLatch(threadCount);
+        CountDownLatch start = new CountDownLatch(1);
         ExecutorService executorService = Executors.newFixedThreadPool(threadCount);
         List<Future<?>> futures = new ArrayList<>();
-        for (int i = 0; i < threadCount; i++) {
-            futures.add(executorService.submit(() ->
-                productCommandService.increaseLikeCount(productId)));
-        }
-        for (Future<?> future : futures) {
-            future.get(10, TimeUnit.SECONDS);
-        }
-        executorService.shutdown();
+        try {
+            for (int i = 0; i < threadCount; i++) {
+                futures.add(executorService.submit(() -> {
+                    ready.countDown();
+                    start.await();
+                    productCommandService.increaseLikeCount(productId);
+                    return null;
+                }));
+            }
+            ready.await(5, TimeUnit.SECONDS);
+            start.countDown();
+            for (Future<?> future : futures) {
+                future.get(10, TimeUnit.SECONDS);
+            }
+        } finally {
+            executorService.shutdownNow();
+            executorService.awaitTermination(5, TimeUnit.SECONDS);
+        }
```

Also applies to: 108-120, 139-151
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductLikeCountConcurrencyTest.java` around lines 77 - 90, The current concurrent-submit pattern in ProductLikeCountConcurrencyTest can let tasks start at different times and may leak threads on failures; change the test to use a start gate (CountDownLatch or CyclicBarrier) so all worker threads call productCommandService.increaseLikeCount(productId) simultaneously, collect Futures as before, and wrap execution/await in try/finally where you call executorService.shutdownNow() and then awaitTermination(...) to guarantee cleanup even on exceptions/timeouts; also ensure each future.get(...) is invoked and failures rethrown so the test fails fast. Apply the same start-gate + try/finally shutdown pattern to the other similar blocks that call productCommandService.increaseLikeCount (the other concurrency sections in this test) and consider adding a repeated-loop test that increases threadCount to improve race reproducibility.
159-171: The Read Model fixture diverges from real synchronization output

saveReadModel() stores a Product built with Product.reconstruct(..., null, ...), dropping the description and fabricating a fresh createdAt, so the prepared Read Model differs from what the production sync path produces. Since ProductReadModelEntity.of(...) faithfully copies the input Product's description and the passed createdAt, such a fixture can let the concurrency test pass even when a real synchronization regression exists. Prefer seeding through the actual Read Model synchronization path, or at least preserve every field of the source ProductEntity when saving. As an additional test, verify before the like increment/decrement that the seeded Read Model's core fields match the source product/brand data.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductLikeCountConcurrencyTest.java` around lines 159 - 171, The saveReadModel helper creates a Product with a null description and a fresh ZonedDateTime, causing the seeded Read Model to diverge from real synchronization; update saveReadModel so it preserves original fields from ProductEntity (pass the actual description and createdAt from productEntity into Product.reconstruct and into ProductReadModelEntity.of) or else seed via the real Read Model synchronization path; also add a test that asserts the seeded ProductReadModelEntity (via productReadModelJpaRepository.findById) has matching core fields (description, createdAt, name, price, stock, brandName) to the source ProductEntity/brand before performing like-count concurrency operations.apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java (1)
59-63: The cleanup routine must run both steps regardless of either one failing

If Line 61 throws, Line 62 never runs and Redis keys can leak into the next test. In an E2E suite sharing Testcontainers, such leakage intermittently pollutes cache-related tests, so @AfterEach must guarantee each cleanup independently. Splitting them with try/finally, or collecting suppressed exceptions, is safer.

Suggested fix

```java
@AfterEach
void tearDown() {
    RuntimeException failure = null;
    try {
        databaseCleanUp.truncateAllTables();
    } catch (RuntimeException e) {
        failure = e;
    } finally {
        try {
            redisCleanUp.truncateAll();
        } catch (RuntimeException e) {
            if (failure != null) {
                failure.addSuppressed(e);
            } else {
                failure = e;
            }
        }
    }

    if (failure != null) {
        throw failure;
    }
}
```

As a follow-up, a test-double-based unit test confirming that Redis cleanup still runs when DatabaseCleanUp fails makes flaky regressions easier to prevent. As per coding guidelines, "**/*Test*.java: Java test review criteria. Check that unit tests include boundary values, failure cases, and exception flows. For integration tests, check the isolation level, flakiness risk, and test-data setup/teardown strategy."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java` around lines 59 - 63, The tearDown method currently calls databaseCleanUp.truncateAllTables() then redisCleanUp.truncateAll(), but if the database cleanup throws the Redis cleanup is skipped; update tearDown to guarantee both cleanups run regardless of exceptions (use try/finally or run each in its own try and collect/suppress exceptions) so databaseCleanUp.truncateAllTables() and redisCleanUp.truncateAll() are always attempted and any exceptions are rethrown or aggregated after both run; also add a unit test that simulates DatabaseCleanUp throwing to assert RedisCleanUp.truncateAll() still executes.apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductCommandFacadeTest.java (1)
74-105: Pin down the exception flows, not just the happy path

This change added Read Model synchronization and cache refresh to ProductCommandFacade's contract, but the current tests verify only the success path. In production, a failure in any one of syncReadModel, refreshProductDetailCache, or getAdminProductDetail changes whether subsequent calls run and how the exception propagates, so it is safer to add tests verifying that later calls stop when each flow throws. As additional tests, confirm for each of create/update/delete that when a collaborator throws, the exception propagates unchanged and no subsequent cache/query calls occur. As per coding guidelines, **/*Test*.java: check that unit tests include boundary values, failure cases, and exception flows.
Also applies to: 115-150, 160-180
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductCommandFacadeTest.java` around lines 74 - 105, Add unit tests in ProductCommandFacadeTest to cover failure/exception flows for create/update/delete: for each operation (createProduct, updateProduct, deleteProduct) write tests where each collaborator method (e.g., productCommandService.syncReadModel, productCommandService.refreshProductDetailCache, productCommandService.refreshIdListCacheForAllSorts, productQueryService.getAdminProductDetail) is mocked to throw and assert that the exception propagates from ProductCommandFacade and that subsequent collaborator calls are not invoked (use verifyNoMoreInteractions/verify(..., never())). Reference ProductCommandFacade#createProduct, `#updateProduct`, `#deleteProduct` and the collaborator methods named above when adding these tests so each failure point is explicitly exercised and verified.apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java (1)
147-155: Make the return order of the bulk cache-DTO query part of the contract

This method has only an IN condition and no orderBy, so results in a different order than the input productIds would be unsurprising. In production, if a bulk-miss path merges results in ID order or writes them back to the cache per key, another product's detail can land in the wrong position or under the wrong key. Either guarantee input order here with CASE/FIELD-based ordering, or change the return type to Map<Long, ProductCacheDto> so callers re-sort by ID. As an additional test, cover [id2, id1] input and a bulk partial-miss scenario, asserting that both the response order and the cache-key mapping are preserved.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java` around lines 147 - 155, findProductCacheDtosByIds(List<Long> productIds) currently has only an IN clause in its where, so the return order can differ from the input productIds order; fix the ordering for caller safety: option A) keep the method and add an ORDER BY CASE/FIELD expression in QueryDSL that preserves the input productIds order (a CASE over readModel.id, or a DB-specific FIELD/ARRAY_POSITION template); or option B) change the signature from List to Map<Long, ProductCacheDto> (e.g. findProductCacheDtoMapByIds) so callers re-sort by ID key; update findProductCacheDtosByIds (or the new method name) and all call sites, and add unit/integration tests with [id2, id1] input and a partial-miss scenario verifying order/mapping are preserved.
53-79: Missing failure-case test for getProduct

There is no test for the scenario where getOrLoadProductDetail throws because the product does not exist or has been deleted. Adding a test verifying that the Facade propagates the exception as-is is recommended.

📝 Suggested test case

```java
@Test
@DisplayName("[getProduct()] 존재하지 않는 상품 -> PRODUCT_NOT_FOUND 예외 전파")
void getProductNotFound() {
    // Arrange
    given(productQueryService.getOrLoadProductDetail(999L))
            .willThrow(new CoreException(ErrorType.PRODUCT_NOT_FOUND));

    // Act & Assert
    CoreException exception = assertThrows(CoreException.class,
            () -> productQueryFacade.getProduct(999L));
    assertThat(exception.getErrorType()).isEqualTo(ErrorType.PRODUCT_NOT_FOUND);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java` around lines 53 - 79, Add a new test in ProductQueryFacadeTest to verify failure propagation from productQueryService: mock productQueryService.getOrLoadProductDetail(999L) to throw new CoreException(ErrorType.PRODUCT_NOT_FOUND) and assert that productQueryFacade.getProduct(999L) throws CoreException; capture the exception with assertThrows and assert its getErrorType() equals ErrorType.PRODUCT_NOT_FOUND (use the same testing style as getProductSuccess and reference productQueryService and productQueryFacade).
44-50: `createTestProduct` helper method is barely used

The `createTestProduct` method is used only by the `findActiveByIdsSuccess` test. Tests such as `getProductSuccess` build their DTOs directly, so the helper sees little use. Keep this in mind when cleaning up the test code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java` around lines 44 - 50, the createTestProduct helper is used only in findActiveByIdsSuccess, so refactor the other tests (getProductSuccess etc.) that build DTOs directly to reuse the helper: reuse ProductQueryFacadeTest's createTestProduct method (adding parameters such as id, price, stock if needed) and replace Product.reconstruct(...) with createTestProduct(...) calls in getProductSuccess, findActiveByIdsSuccess, and the other tests to remove duplication and centralize test data.

apps/commerce-api/src/test/java/com/loopers/catalog/brand/application/facade/BrandCommandFacadeTest.java (1)
97-127: Missing test for brand update with no associated products

The `updateBrandSuccess` test does not cover the case where `findActiveIdsByBrandId` returns an empty list. Updating a brand with no products should still trigger Read Model synchronization, but `refreshProductDetailCache` must not be called. Adding this boundary-condition test is recommended.

📝 Suggested test case

```java
@Test
@DisplayName("[BrandCommandFacade.updateBrand()] 연관 상품 없음 -> Read Model 동기화 수행, 캐시 갱신 미수행")
void updateBrandSuccessWithNoProducts() {
    // Arrange
    Brand brand = Brand.reconstruct(1L, BrandName.from("나이키"),
        BrandDescription.from("스포츠 브랜드"), VisibleStatus.HIDDEN, null);
    AdminBrandUpdateInDto inDto = new AdminBrandUpdateInDto("아디다스", "독일 스포츠 브랜드", null);
    Brand updatedBrand = Brand.reconstruct(1L, BrandName.from("아디다스"),
        BrandDescription.from("독일 스포츠 브랜드"), VisibleStatus.HIDDEN, null);

    given(brandQueryService.getBrandById(1L)).willReturn(brand);
    given(brandCommandService.updateBrand(brand, inDto)).willReturn(updatedBrand);
    given(productQueryService.findActiveIdsByBrandId(1L)).willReturn(List.of());

    // Act
    AdminBrandDetailOutDto result = brandCommandFacade.updateBrand(1L, inDto);

    // Assert
    verify(productCommandService).syncBrandNameInReadModel(1L, "아디다스");
    verify(productCommandService, never()).refreshProductDetailCache(any());
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/brand/application/facade/BrandCommandFacadeTest.java` around lines 97 - 127, The current test updateBrandSuccess doesn't cover the case where no products are associated; add a new unit test (e.g., updateBrandSuccessWithNoProducts) that arranges brand, inDto, and updatedBrand, mocks brandQueryService.getBrandById(...) and brandCommandService.updateBrand(...), and specifically mocks productQueryService.findActiveIdsByBrandId(...) to return an empty List; call brandCommandFacade.updateBrand(...), then verify productCommandService.syncBrandNameInReadModel(1L, "아디다스") was invoked and verify productCommandService.refreshProductDetailCache(...) was never invoked (use never() / any() matchers) to assert correct behavior when there are no related products.

apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java (2)
96-104: Incomplete `ExecutorService` shutdown can leak test resources

After `executor.shutdown()` is called, `awaitTermination()` is never invoked, so the test can finish before the thread pool has fully terminated. Under repeated CI runs, this thread leakage can produce flaky tests.

🔧 Suggested fix

```diff
 doneLatch.await();
 executor.shutdown();
+executor.awaitTermination(10, TimeUnit.SECONDS);

 // Assert — loader 호출 최소화 (이상: 1회, 레이스 컨디션 허용: <= 2)
 assertThat(loaderCallCount.get()).isLessThanOrEqualTo(2);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java` around lines 96 - 104, The test in CacheStampedeTest calls executor.shutdown() but doesn't wait for termination which can leak threads; update the test to call executor.shutdown() then await termination with a reasonable timeout (e.g., a few seconds) and if awaitTermination returns false, call executor.shutdownNow() and await again to ensure the thread pool is fully stopped before the test exits, referencing the executor variable and the test method surrounding readyLatch/startLatch/doneLatch logic.
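The shutdown-then-await-then-force pattern the prompt describes can be captured in a small helper. This is a sketch only; `shutdownGracefully` and the timeout values are illustrative names and choices, not project API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorShutdown {

    // Shut down, wait for in-flight tasks, then force-stop if the timeout elapses.
    static boolean shutdownGracefully(ExecutorService executor, long timeoutSeconds)
            throws InterruptedException {
        executor.shutdown(); // stop accepting new tasks
        if (executor.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)) {
            return true; // all tasks finished in time
        }
        executor.shutdownNow(); // interrupt stragglers
        return executor.awaitTermination(timeoutSeconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            executor.submit(() -> { /* fast no-op task */ });
        }
        System.out.println(shutdownGracefully(executor, 10));
    }
}
```

Calling this in a `@AfterEach` (or inline after `doneLatch.await()`) guarantees the pool is gone before the next test starts.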
150-158: Same `ExecutorService` shutdown handling needs improvement

The `cacheHitStampede_loaderNotCalled` test is likewise missing the `awaitTermination()` call.

🔧 Suggested fix

```diff
 doneLatch.await();
 executor.shutdown();
+executor.awaitTermination(10, TimeUnit.SECONDS);

 // Assert — 캐시 히트이므로 loader 0회 호출
 assertThat(loaderCallCount.get()).isEqualTo(0);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java` around lines 150 - 158, The test method cacheHitStampede_loaderNotCalled currently calls executor.shutdown() but does not wait for termination; update this test to call executor.awaitTermination(...) after executor.shutdown() (use a reasonable timeout, e.g., a few seconds) and handle InterruptedException appropriately so the executor is properly terminated before assertions; locate the executor variable and the cacheHitStampede_loaderNotCalled test to add the awaitTermination step (and optional assertion that the executor terminated) to mirror the other tests' shutdown handling.

apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java (1)
4-4: Wildcard import in use

The wildcard import `com.loopers.catalog.product.application.dto.out.*` is being used. Project convention may allow this, but explicit imports are better for readability and dependency tracing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java` at line 4, ProductQueryFacade currently uses a wildcard import (com.loopers.catalog.product.application.dto.out.*); replace it with explicit imports for each DTO class actually referenced by ProductQueryFacade (e.g., ProductDto, ProductDetailDto, ProductListDto — whatever specific classes are used) to improve readability and dependency tracing; you can let your IDE "organize imports" or inspect usages in ProductQueryFacade to enumerate the exact DTO class names and add them as individual import statements.

apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/CacheLock.java (1)
12-15: Interface contract should specify exception handling and the null-return policy

The behavior when the loader returns `null` or throws an exception is not defined at the interface level. Implementations may diverge, which can lead to unexpected behavior in production.

📝 Suggested Javadoc

```diff
 /**
  * 캐시 스탬피드 방지용 key-level 락
  * - 같은 key에 대한 동시 DB 조회를 1회로 제한
  * - 구현체: LocalCacheLock (@Primary), RedisCacheLock (분산 환경 전환용)
+ *
+ * @param key 락을 획득할 캐시 키
+ * @param loader 캐시 미스 시 호출되는 데이터 로더 (null 반환 허용, 예외 발생 시 그대로 전파)
+ * @return loader의 반환값 (null 가능)
+ * @throws RuntimeException loader에서 발생한 예외는 그대로 전파
  */
 public interface CacheLock {
     <T> T executeWithLock(String key, Supplier<T> loader);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/CacheLock.java` around lines 12 - 15, The CacheLock.executeWithLock contract is underspecified for cases where the provided Supplier<T> loader returns null or throws an exception; update the interface by adding Javadoc to CacheLock and the executeWithLock method that clearly states (1) whether null return values from loader are allowed or should be treated as absent (and how callers should handle them), (2) that exceptions thrown by loader will be propagated to the caller (or alternatively wrapped in a specific runtime exception) and not swallowed, and (3) what happens on lock acquisition failures (e.g., throw a LockAcquisitionException or propagate InterruptedException); reference the CacheLock interface and the executeWithLock(String key, Supplier<T> loader) method in the doc and include `@param`, `@return`, and `@throws` tags (and an `@implSpec` note for implementors) so all implementations behave consistently.

apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java (1)
1-539: Missing graceful-degradation tests for Redis connection failure

`ProductCacheManager` is designed to catch exceptions with try-catch and only log them when Redis fails (see the PR description). However, there is no test verifying that the service keeps working when the Redis connection is down. To avoid unexpected behavior during a production Redis outage, the following tests are recommended:

- On Redis connection failure, `get()` → returns `Optional.empty()`
- On Redis connection failure, `getOrLoad()` → invokes the loader and returns its value
- On Redis connection failure, `put()` → completes normally without throwing

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java` around lines 1 - 539, Add tests to ProductCacheManagerTest that simulate Redis connection failure and verify graceful degradation: create a nested test class (e.g., RedisFailureTest) that simulates Redis being down (stop the Redis testcontainer from RedisTestContainersConfig or temporarily replace/autowire RedisTemplate to throw a RedisConnectionFailureException), then assert ProductCacheManager.get(...) returns Optional.empty(), ProductCacheManager.getOrLoad(..., loader) invokes the loader and returns its value, and ProductCacheManager.put(...) completes without throwing; reference ProductCacheManager.get, ProductCacheManager.getOrLoad, ProductCacheManager.put and use redisTemplate or the Redis testcontainer control to simulate the failure.

apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java (1)
394-417: Pinning only the call counts lets wrong cache-key combinations pass

`times(12)` and `times(4)` alone cannot verify that the exact brand/all, sort, and page combinations were generated. In production, the listing becomes inconsistent if even one combination's ID-list cache is missed, yet the current test can pass even when the same key is refreshed multiple times. It is safer to collect the cacheKeys passed to `refreshIdList()` with an `ArgumentCaptor` and compare them against the full set of expected combinations, or at least derive the expected count from `ProductSortType.values().length` and the cacheable-page constant. Additionally, a failure-case test confirming that both the per-brand keys and the `all` key are refreshed is worth adding.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java` around lines 394 - 417, The tests only assert total call counts (verify(productCacheManager, times(...)).refreshIdList(...)) which can pass even if wrong cache keys are used; update the tests for refreshIdListCacheForAllSorts() and refreshIdListCacheForSort() to capture and assert the exact cacheKey combinations: use an ArgumentCaptor for the cacheKey argument of productCacheManager.refreshIdList(...) in the ProductCommandServiceTest, collect all captured keys when invoking productCommandService.refreshIdListCacheForAllSort(1L) and productCommandService.refreshIdListCacheForSort(1L, ProductSortType.PRICE_ASC), then assert the set equals the expected Cartesian product of (brand + "all") keys × ProductSortType.values() (or the single sort for the sort-specific test) × all cacheable page numbers (use the existing cacheable page constant), and optionally add a negative test that simulates a missing key scenario to ensure failure when a brand or "all" key is not refreshed.
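The "expected combination set" the prompt asks for can be computed independently of the mocks and compared to the captured keys. A minimal sketch — the key format `product:ids:{scope}:{sort}:p{page}`, the sort names, and the page count are assumptions standing in for the real constants in the cache manager:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ExpectedCacheKeys {

    // Builds the Cartesian product of (brand scope + "all") x sorts x cacheable pages.
    // Key layout is hypothetical; the real format lives in ProductCacheManager.
    static Set<String> expectedIdListKeys(long brandId, List<String> sorts, int cacheablePages) {
        Set<String> keys = new HashSet<>();
        for (String scope : List.of("brand:" + brandId, "all")) {
            for (String sort : sorts) {
                for (int page = 0; page < cacheablePages; page++) {
                    keys.add("product:ids:" + scope + ":" + sort + ":p" + page);
                }
            }
        }
        return keys;
    }

    public static void main(String[] args) {
        // 2 scopes x 3 sorts x 2 pages = 12 distinct keys, matching times(12)
        Set<String> keys = expectedIdListKeys(1L, List.of("LATEST", "PRICE_ASC", "LIKES_DESC"), 2);
        System.out.println(keys.size());
    }
}
```

In the test, asserting `capturedKeys` (from the `ArgumentCaptor`) equals this set catches both a wrong key and a duplicated one, which raw `times(n)` cannot.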
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java`:
- Around line 61-68: The cache refresh loop currently runs inside the
BrandCommandFacade transactional flow (calls to
productQueryService.findActiveIdsByBrandId and
productCommandService.refreshProductDetailCache), which risks long transactions
and cache writes before DB commit; move the product detail cache refresh to run
after successful commit by registering a TransactionSynchronization
(TransactionSynchronizationManager.registerSynchronization) or publish a domain
event handled by a `@TransactionalEventListener`(phase = AFTER_COMMIT) that
receives the brand id and calls productQueryService.findActiveIdsByBrandId and
productCommandService.refreshProductDetailCache for each productId; keep
productCommandService.syncBrandNameInReadModel inside the transaction if it must
be transactional, and add tests that simulate an exception after brand update to
assert no cache refresh on rollback and that after-commit refresh runs for large
product sets.
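The ordering guarantee this prompt is after — cache refresh runs only after the DB commit, and never on rollback — can be modeled with a toy callback queue. This is not Spring's `TransactionSynchronizationManager`, only an assumed stand-in illustrating why deferring the refresh matters:

```java
import java.util.ArrayList;
import java.util.List;

public class AfterCommitSketch {

    // Toy stand-in for a transaction synchronization registry:
    // work registered here runs after commit, and is discarded on rollback.
    static class TxContext {
        private final List<Runnable> afterCommit = new ArrayList<>();

        void registerAfterCommit(Runnable callback) { afterCommit.add(callback); }

        void commit() { afterCommit.forEach(Runnable::run); afterCommit.clear(); }

        void rollback() { afterCommit.clear(); } // queued refreshes never run
    }

    public static void main(String[] args) {
        List<String> refreshed = new ArrayList<>();

        TxContext tx = new TxContext();
        // inside the "transaction": sync the read model, defer the cache refresh
        tx.registerAfterCommit(() -> refreshed.add("product:42"));
        // nothing refreshed yet — the DB has not committed
        tx.commit();
        System.out.println(refreshed);

        TxContext rolledBack = new TxContext();
        rolledBack.registerAfterCommit(() -> refreshed.add("product:43"));
        rolledBack.rollback();
        // rollback drops the refresh, so a stale cache write cannot outrun the DB
        System.out.println(refreshed.contains("product:43"));
    }
}
```

With the real framework, the same shape is achieved by registering a `TransactionSynchronization` or handling a domain event in an after-commit listener, as the prompt describes.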
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java`:
- Around line 201-210: The refreshProductDetailCache method performs a DB read
via productQueryPort.findProductCacheDtoById and is currently invoked inside a
transactional facade (ProductCommandFacade), causing cache refresh to run before
DB commit; change the flow so cache writes occur after transaction commit—either
remove the direct call from the transactional facade and invoke
ProductCommandService.refreshProductDetailCache from a
`@TransactionalEventListener` or register a
TransactionSynchronizationManager.afterCommit callback that calls
productCacheManager.refreshProductDetail and the productQueryPort lookup; ensure
failures in refreshProductDetailCache are logged and surfaced to
monitoring/alerting (not just swallowed) and add tests for exceptions from
productQueryPort.findProductCacheDtoById and for cache write failures to
validate eventual consistency.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java`:
- Around line 103-115: When some missedIds cannot be recovered (i.e.
fromDb.size() < missedIds.size()) don’t silently drop nulls and return the old
total; instead detect that partial recovery happened after calling
loadAndCacheDetails(missedIds) and either (a) invalidate the ID-list cache and
re-fetch the id list (e.g. call the service/method that loads idList again) and
re-run the mergeInOrder/caching flow, or (b) recompute the page content and
totalElements consistently (adjust totalElements by the number of permanently
missing IDs or recalc from a fresh id query) before constructing
ProductPageOutDto; update the logic around extractMissedIds,
loadAndCacheDetails, mergeInOrder and the ProductPageOutDto construction to
perform this branch and add a unit test covering "ID list cached but one detail
deleted" to assert page size and totalElements remain consistent.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java`:
- Around line 740-768: The test updateProductThenGetReturnsUpdatedData currently
only asserts responses and may miss cache behavior; modify it to inspect Redis
directly using the project's Redis client (e.g., StringRedisTemplate or
RedisTemplate) to (1) after the first mockMvc GET assert the presence of the
cache key "product:v1:{productId}", (2) after the PUT assert that the same key
was removed (invalidated), and (3) after the final GET assert the key was
recreated and its serialized value contains the updated fields
(name/price/stock/description); use existing symbols in the test (mockMvc,
objectMapper, ADMIN_LDAP_HEADER/ADMIN_LDAP_VALUE, productId) and add a
test-scoped injection of the Redis template to perform these key existence and
value assertions.
In
`@apps/commerce-api/src/test/java/com/loopers/ordering/order/infrastructure/acl/catalog/OrderProductReaderImplTest.java`:
- Around line 62-72: The test currently only asserts successful field mapping;
ensure it also verifies interaction counts by adding
verify(productQueryFacade).findActiveByIds(productIds) and
verifyNoMoreInteractions(productQueryFacade) after the assertAll to lock the
batch-query to one call, and add a separate unit test in
OrderProductReaderImplTest that stubs productQueryFacade.findActiveByIds(...) to
throw and asserts that readProducts(...) (or the
OrderProductReaderImpl.readProducts method) propagates or handles the exception
per policy; reference productQueryFacade, findActiveByIds, and readProducts when
locating where to add the interaction verifications and the new failure-case
test.
In `@round5-docs/05-to-be-cache-visualization.html`:
- Around line 307-389: The doc currently presents the 3-layer stampede
protection as if LocalCacheLock alone ensures single-access globally; update the
diagram, tables, and explanatory text to clearly separate "single-node/local
path" (showing LocalCacheLock behavior) from "distributed/multi-node path"
(introduce RedisCacheLock and its acquire/success/fail fallback flows), add
explicit branches for Redis lock acquisition success → DB+SET and lock
acquisition failure → wait/retry/serve stale, and annotate where PER and TTL
jitter interact with each path; finally, note to verify the changes against the
actual CacheLock implementations (LocalCacheLock and RedisCacheLock) and
existing benchmark topology to ensure the documentation matches the real
behavior.
- Around line 399-406: The summary KPI numbers (e.g., the strings "Cache Hit
3.87~4.97ms" and "Cache Miss 5.13~10.75ms" in the list under the H3 heading) are
inconsistent with the 10M-listing chart dataset; update the text to derive its
numbers from the same chart dataset constant (or correct the hard-coded values
to match the chart) and add an automated validation that compares the computed
KPI text values against the chart dataset (fail build/test if they differ) so
the "1000만건 목록" chart and the corresponding conclusion strings always stay in
sync.
- Line 7: The HTML currently loads Chart.js only from an external CDN
(https://cdn.jsdelivr.net/...), so rendering can fail depending on network or
service availability. Self-host Chart.js as a pinned asset in the repository
(e.g. copy it to the project's assets/chart.chart.umd.min.js), or at minimum
add integrity and crossorigin attributes to the <script> tag for SRI and
implement a fallback that shows a notice or loads a local copy when the CDN
fails; the target is the <script src="...chart.umd.min.js"> tag, and after the
change open the page in the browser's offline mode to confirm the chart either
renders or the fallback notice is shown.
---
Duplicate comments:
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/LocalCacheLock.java`:
- Around line 33-42: Because LocalCacheLock calls locks.remove(key), a T3 can
create a new lock right after T1 finishes but before a waiting T2 takes over
the existing lock, so the loader can run twice. Replace the plain Object value
in locks with a reference-counted lock entry (e.g. class LockEntry { Object
monitor; int count; }): in computeIfAbsent, create or fetch the entry and
always increment the reference count (+1); after the
synchronized(lockEntry.monitor) block finishes, decrement the count (-1) and
remove the entry from locks only when count reaches 0. Target identifiers:
LocalCacheLock, locks, computeIfAbsent, loader, locks.remove. Also add a
concurrency regression test verifying the loader is invoked exactly once in
the scenario "T1 runs -> T2 waits -> T3 enters right after T1 finishes".
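One way to realize the reference-counted entry described above is to do both the increment and the conditional removal through the map's atomic `compute` operations, so the entry can never be removed while a waiter still holds a reference. A sketch under those assumptions, not the project's LocalCacheLock itself:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class RefCountedLocalLock {

    // Lock entry with a reference count; the map entry is removed only when
    // the last interested thread has left the synchronized block.
    static final class LockEntry {
        final Object monitor = new Object();
        int count; // mutated only inside the map's atomic compute* callbacks
    }

    private final ConcurrentHashMap<String, LockEntry> locks = new ConcurrentHashMap<>();

    public <T> T executeWithLock(String key, Supplier<T> loader) {
        LockEntry entry = locks.compute(key, (k, e) -> {
            if (e == null) e = new LockEntry();
            e.count++; // register interest before touching the monitor
            return e;
        });
        try {
            synchronized (entry.monitor) {
                return loader.get();
            }
        } finally {
            // remove only when no thread (including late arrivals) still references it
            locks.computeIfPresent(key, (k, e) -> (--e.count == 0) ? null : e);
        }
    }

    int activeEntries() { return locks.size(); }
}
```

Because T2 increments the count before T1's finally block runs, T1's decrement leaves the count above zero and the entry survives, closing the T3 race.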
- Around line 17-18: LocalCacheLock is currently registered as the default bean
(@Primary), so the JVM-local lock gets injected by default in a multi-instance
environment; fix the @Primary/@Component configuration, restrict the
LocalCacheLock class to local/test use (@Profile({"local","test"}) or an
equivalent condition), and make RedisCacheLock the default bean selected in
production/shared environments (e.g. apply @Primary to RedisCacheLock, or
@Profile("!local & !test"), or a @ConditionalOnMissingBean strategy); also add
integration tests verifying that LocalCacheLock is injected under the local
profile and RedisCacheLock under dev/qa/prod (or non-local/test) profiles.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java`:
- Around line 109-117: put() currently serializes and writes null payloads which
allows ambiguous cache entries; change ProductCacheManager.put(String key,
Object value, Duration ttl) to skip storing null values (or instead store a
defined negative-cache sentinel constant and document it) and ensure
get()/getOrLoad()/getOrLoadWithPer() consistently interpret that sentinel as a
cache miss; update logic around objectMapper.writeValueAsString(value) and
writeTemplate.opsForValue().set(...) so nulls are not written (or the sentinel
is written) and reuse applyJitter as-is; add a unit/integration test that loads
a missing product and asserts the loader is invoked only once on repeated
lookups to validate negative-cache behavior.
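The negative-cache sentinel alternative mentioned above can be sketched with an in-process map standing in for Redis. The sentinel constant and `getOrLoad` shape are illustrative assumptions, not ProductCacheManager's actual API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Supplier;

public class NegativeCacheSketch {

    // Sentinel written instead of null so "known missing" is distinguishable
    // from "never looked up". The constant value is illustrative.
    static final String NULL_SENTINEL = "\u0000NULL\u0000";

    private final Map<String, String> store = new HashMap<>(); // stand-in for Redis

    public Optional<String> getOrLoad(String key, Supplier<String> loader) {
        String cached = store.get(key);
        if (NULL_SENTINEL.equals(cached)) return Optional.empty(); // negative hit
        if (cached != null) return Optional.of(cached);
        String loaded = loader.get();
        store.put(key, loaded == null ? NULL_SENTINEL : loaded); // never store raw null
        return Optional.ofNullable(loaded);
    }

    public static void main(String[] args) {
        NegativeCacheSketch cache = new NegativeCacheSketch();
        // first lookup of a missing product hits the loader; the second does not
        System.out.println(cache.getOrLoad("p:404", () -> null));
        System.out.println(cache.getOrLoad("p:404", () -> null));
    }
}
```

A short TTL on sentinel entries (omitted here) keeps a later-created product from being hidden for long.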
- Around line 168-177: The PER early-refresh branch calls
CompletableFuture.runAsync(...) which can queue many concurrent loader.get()
tasks for the same hot key; change this to serialize per-key refreshes by
reusing the existing cacheLock (or a key-specific bounded executor) so that
shouldEarlyRefresh(...) triggers at most one concurrent refresh for a given key:
wrap the async refresh body so it first acquires the same lock used by
getOrLoadWithPer/cacheLock before invoking loader.get() and put(...), or submit
to a per-key single-threaded executor; also add a concurrency test that fires
~100 concurrent hits at shouldEarlyRefresh and verifies loader.get() is invoked
only once.
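For context on the `shouldEarlyRefresh` decision the prompt references: the standard probabilistic-early-refresh (XFetch) rule refreshes with rising probability as expiry approaches. A minimal sketch of that formula alone, with parameter names and values assumed for illustration — the per-key serialization of the refresh itself (via cacheLock, as the prompt asks) is a separate concern:

```java
public class PerRefreshSketch {

    // XFetch-style check: refresh early when now + delta * beta * -ln(random)
    // reaches the expiry time. delta is the observed recompute cost; beta = 1.0
    // is the usual default; random is a uniform draw in (0, 1].
    static boolean shouldEarlyRefresh(long nowMillis, long expiryMillis,
                                      long deltaMillis, double beta, double random) {
        double advance = deltaMillis * beta * -Math.log(random);
        return nowMillis + advance >= expiryMillis;
    }

    public static void main(String[] args) {
        long expiry = 10_000;
        // far from expiry with a typical draw: no early refresh
        System.out.println(shouldEarlyRefresh(1_000, expiry, 200, 1.0, 0.5));
        // 100 ms before expiry with a 200 ms recompute cost: refresh kicks in
        System.out.println(shouldEarlyRefresh(9_900, expiry, 200, 1.0, 0.5));
    }
}
```

Injecting `random` (rather than calling `Math.random()` inside) is what makes the concurrency test the prompt asks for deterministic.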
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/RedisCacheLock.java`:
- Around line 48-66: The executeWithLock implementation in RedisCacheLock
currently uses redisTemplate.opsForValue().setIfAbsent(lockKey, "1", LOCK_TTL)
and unconditionally calls redisTemplate.delete(lockKey) and immediately runs
loader.get() after a fixed Thread.sleep(WAIT_MILLIS), which can delete another
owner’s lock and causes duplicate loader invocations under contention; change
the lock value to a unique owner token per call (e.g. UUID), store that token in
setIfAbsent, on release replace redisTemplate.delete(lockKey) with an atomic
compare-and-delete via a Lua script that deletes only if the stored token
matches, and replace the single sleep+loader path with a retry loop that
rechecks the cache (and only calls loader.get() when cache still misses and
after acquiring the lock) to ensure loader is invoked at most once; update
RedisCacheLock/executeWithLock and add tests to assert single loader invocation
under concurrent requests and that a TTL-boundary owner cannot release a newly
acquired lock.
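The owner-token and compare-and-delete semantics described above can be sketched without Redis. Here a `ConcurrentHashMap` stands in for the Redis instance — `remove(key, value)` gives in-process the same "delete only if the stored token matches" guarantee that the Lua script provides server-side. Class and method names are illustrative:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class OwnerTokenLockSketch {

    // In Redis, release must be atomic, e.g. via this Lua script:
    //   if redis.call('get', KEYS[1]) == ARGV[1]
    //   then return redis.call('del', KEYS[1]) else return 0 end
    private final ConcurrentHashMap<String, String> redis = new ConcurrentHashMap<>();

    // SET key token NX: returns the owner token on success, null when contended.
    public String tryAcquire(String lockKey) {
        String token = UUID.randomUUID().toString(); // unique owner token per call
        return redis.putIfAbsent(lockKey, token) == null ? token : null;
    }

    // Compare-and-delete: a TTL-expired former owner holding a stale token
    // cannot release a lock that another caller has since acquired.
    public boolean release(String lockKey, String token) {
        return redis.remove(lockKey, token);
    }

    public static void main(String[] args) {
        OwnerTokenLockSketch lock = new OwnerTokenLockSketch();
        String token = lock.tryAcquire("lock:product:1");
        System.out.println(lock.tryAcquire("lock:product:1") == null); // contended
        System.out.println(lock.release("lock:product:1", "stale"));   // wrong token
        System.out.println(lock.release("lock:product:1", token));     // owner releases
    }
}
```

On top of this primitive, the waiter loop should recheck the cache before calling the loader, as the prompt specifies, so at most one caller recomputes.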
---
Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java`:
- Line 4: ProductQueryFacade currently uses a wildcard import
(com.loopers.catalog.product.application.dto.out.*); replace it with explicit
imports for each DTO class actually referenced by ProductQueryFacade (e.g.,
ProductDto, ProductDetailDto, ProductListDto — whatever specific classes are
used) to improve readability and dependency tracing; you can let your IDE
"organize imports" or inspect usages in ProductQueryFacade to enumerate the
exact DTO class names and add them as individual import statements.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/CacheLock.java`:
- Around line 12-15: The CacheLock.executeWithLock contract is underspecified
for cases where the provided Supplier<T> loader returns null or throws an
exception; update the interface by adding Javadoc to CacheLock and the
executeWithLock method that clearly states (1) whether null return values from
loader are allowed or should be treated as absent (and how callers should handle
them), (2) that exceptions thrown by loader will be propagated to the caller (or
alternatively wrapped in a specific runtime exception) and not swallowed, and
(3) what happens on lock acquisition failures (e.g., throw a
LockAcquisitionException or propagate InterruptedException); reference the
CacheLock interface and the executeWithLock(String key, Supplier<T> loader)
method in the doc and include `@param`, `@return`, and `@throws` tags (and an
`@implSpec` note for implementors) so all implementations behave consistently.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java`:
- Around line 147-155: findProductCacheDtosByIds(List<Long> productIds)
currently has only an IN clause in its where condition, so the returned order
can differ from the input productIds order; fix the ordering for caller
safety. Option A) keep the existing method and add an ORDER BY CASE/FIELD
expression in QueryDSL that preserves the input productIds order (a CASE
expression on readModel.id, or a DB-specific FIELD/ARRAY_POSITION template) so
results come back in productIds order; or Option B) change the signature from
List to Map<Long, ProductCacheDto> (e.g. findProductCacheDtoMapByIds) so
callers re-order by ID key. Update findProductCacheDtosByIds (or the new
method name) and all call sites, and add unit/integration tests with an
[id2, id1] input and a partial-miss scenario to verify order/mapping is
preserved.
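Option B's caller-side re-ordering amounts to indexing rows by ID and walking the input list, which also handles the partial-miss scenario. A generic sketch — `Row` and `inInputOrder` are illustrative names; `ProductCacheDto` would plug in as the element type:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class ReorderByInputIds {

    // Index DB rows by id, then walk the input id list: output order always
    // matches input order, and missing ids are simply skipped.
    static <T> List<T> inInputOrder(List<Long> ids, List<T> rows, Function<T, Long> idOf) {
        Map<Long, T> byId = new LinkedHashMap<>();
        for (T row : rows) byId.put(idOf.apply(row), row);
        List<T> ordered = new ArrayList<>();
        for (Long id : ids) {
            T row = byId.get(id);
            if (row != null) ordered.add(row); // partial miss: drop, don't insert null
        }
        return ordered;
    }

    record Row(long id, String name) {}

    public static void main(String[] args) {
        // DB returned [1, 2] but the caller asked for [2, 1, 3] (3 is missing)
        List<Row> rows = List.of(new Row(1, "a"), new Row(2, "b"));
        System.out.println(inInputOrder(List.of(2L, 1L, 3L), rows, Row::id));
    }
}
```

This keeps the repository's IN query simple while making the ordering contract explicit at the call site.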
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/brand/application/facade/BrandCommandFacadeTest.java`:
- Around line 97-127: The current test updateBrandSuccess doesn't cover the case
where no products are associated; add a new unit test (e.g.,
updateBrandSuccessWithNoProducts) that arranges brand, inDto, and updatedBrand,
mocks brandQueryService.getBrandById(...) and
brandCommandService.updateBrand(...), and specifically mocks
productQueryService.findActiveIdsByBrandId(...) to return an empty List; call
brandCommandFacade.updateBrand(...), then verify
productCommandService.syncBrandNameInReadModel(1L, "아디다스") was invoked and
verify productCommandService.refreshProductDetailCache(...) was never invoked
(use never() / any() matchers) to assert correct behavior when there are no
related products.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductCommandFacadeTest.java`:
- Around line 74-105: Add unit tests in ProductCommandFacadeTest to cover
failure/exception flows for create/update/delete: for each operation
(createProduct, updateProduct, deleteProduct) write tests where each
collaborator method (e.g., productCommandService.syncReadModel,
productCommandService.refreshProductDetailCache,
productCommandService.refreshIdListCacheForAllSorts,
productQueryService.getAdminProductDetail) is mocked to throw and assert that
the exception propagates from ProductCommandFacade and that subsequent
collaborator calls are not invoked (use verifyNoMoreInteractions/verify(...,
never())). Reference ProductCommandFacade#createProduct, `#updateProduct`,
`#deleteProduct` and the collaborator methods named above when adding these tests
so each failure point is explicitly exercised and verified.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java`:
- Around line 53-79: Add a new test in ProductQueryFacadeTest to verify failure
propagation from productQueryService: mock
productQueryService.getOrLoadProductDetail(999L) to throw new
CoreException(ErrorType.PRODUCT_NOT_FOUND) and assert that
productQueryFacade.getProduct(999L) throws CoreException; capture the exception
with assertThrows and assert its getErrorType() equals
ErrorType.PRODUCT_NOT_FOUND (use the same testing style as getProductSuccess and
reference productQueryService and productQueryFacade).
- Around line 44-50: The createTestProduct helper is used only in
findActiveByIdsSuccess, so refactor the other tests (getProductSuccess etc.)
that build DTOs directly to reuse the helper: reuse ProductQueryFacadeTest's
createTestProduct method (adding parameters such as id, price, stock if
needed) and replace Product.reconstruct(...) with createTestProduct(...) calls
in getProductSuccess, findActiveByIdsSuccess, and the other tests to remove
duplication and centralize test data.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java`:
- Around line 394-417: The tests only assert total call counts
(verify(productCacheManager, times(...)).refreshIdList(...)) which can pass even
if wrong cache keys are used; update the tests for
refreshIdListCacheForAllSorts() and refreshIdListCacheForSort() to capture and
assert the exact cacheKey combinations: use an ArgumentCaptor for the cacheKey
argument of productCacheManager.refreshIdList(...) in the
ProductCommandServiceTest, collect all captured keys when invoking
productCommandService.refreshIdListCacheForAllSort(1L) and
productCommandService.refreshIdListCacheForSort(1L, ProductSortType.PRICE_ASC),
then assert the set equals the expected Cartesian product of (brand + "all")
keys × ProductSortType.values() (or the single sort for the sort-specific test)
× all cacheable page numbers (use the existing cacheable page constant), and
optionally add a negative test that simulates a missing key scenario to ensure
failure when a brand or "all" key is not refreshed.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductLikeCountConcurrencyTest.java`:
- Around line 77-90: The current concurrent-submit pattern in
ProductLikeCountConcurrencyTest can let tasks start at different times and may
leak threads on failures; change the test to use a start gate (CountDownLatch or
CyclicBarrier) so all worker threads call
productCommandService.increaseLikeCount(productId) simultaneously, collect
Futures as before, and wrap execution/await in try/finally where you call
executorService.shutdownNow() and then awaitTermination(...) to guarantee
cleanup even on exceptions/timeouts; also ensure each future.get(...) is invoked
and failures rethrown so the test fails fast. Apply the same start-gate +
try/finally shutdown pattern to the other similar blocks that call
productCommandService.increaseLikeCount (the other concurrency sections in this
test) and consider adding a repeated-loop test that increases threadCount to
improve race reproducibility.
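The start-gate plus try/finally shutdown pattern described above can be extracted into a reusable test helper. A sketch with illustrative names (`runConcurrently`, `StartGateSketch`), not project code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class StartGateSketch {

    // All workers block on the start gate and release at the same instant,
    // maximizing contention; the finally block kills the pool even on failure.
    static int runConcurrently(int threadCount, Runnable task) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(threadCount);
        CountDownLatch startGate = new CountDownLatch(1);
        CountDownLatch doneGate = new CountDownLatch(threadCount);
        AtomicInteger completed = new AtomicInteger();
        try {
            for (int i = 0; i < threadCount; i++) {
                executor.submit(() -> {
                    try {
                        startGate.await();  // line up...
                        task.run();         // ...go, all at once
                        completed.incrementAndGet();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        doneGate.countDown();
                    }
                });
            }
            startGate.countDown();
            doneGate.await(10, TimeUnit.SECONDS);
        } finally {
            executor.shutdownNow();
            executor.awaitTermination(10, TimeUnit.SECONDS);
        }
        return completed.get();
    }
}
```

In the like-count test, `task` would be `() -> productCommandService.increaseLikeCount(productId)`, and the returned completion count should equal `threadCount`.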
- Around line 159-171: The saveReadModel helper creates a Product with a null
description and a fresh ZonedDateTime, causing the seeded Read Model to diverge
from real synchronization; update saveReadModel so it preserves original fields
from ProductEntity (pass the actual description and createdAt from productEntity
into Product.reconstruct and into ProductReadModelEntity.of) or else seed via
the real Read Model synchronization path; also add a test that asserts the
seeded ProductReadModelEntity (via productReadModelJpaRepository.findById) has
matching core fields (description, createdAt, name, price, stock, brandName) to
the source ProductEntity/brand before performing like-count concurrency
operations.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java`:
- Around line 96-104: The test in CacheStampedeTest calls executor.shutdown()
but doesn't wait for termination which can leak threads; update the test to call
executor.shutdown() then await termination with a reasonable timeout (e.g., a
few seconds) and if awaitTermination returns false, call executor.shutdownNow()
and await again to ensure the thread pool is fully stopped before the test
exits, referencing the executor variable and the test method surrounding
readyLatch/startLatch/doneLatch logic.
- Around line 150-158: The test method cacheHitStampede_loaderNotCalled
currently calls executor.shutdown() but does not wait for termination; update
this test to call executor.awaitTermination(...) after executor.shutdown() (use
a reasonable timeout, e.g., a few seconds) and handle InterruptedException
appropriately so the executor is properly terminated before assertions; locate
the executor variable and the cacheHitStampede_loaderNotCalled test to add the
awaitTermination step (and optional assertion that the executor terminated) to
mirror the other tests' shutdown handling.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java`:
- Around line 1-539: Add tests to ProductCacheManagerTest that simulate Redis
connection failure and verify graceful degradation: create a nested test class
(e.g., RedisFailureTest) that simulates Redis being down (stop the Redis
testcontainer from RedisTestContainersConfig or temporarily replace/autowire
RedisTemplate to throw a RedisConnectionFailureException), then assert
ProductCacheManager.get(...) returns Optional.empty(),
ProductCacheManager.getOrLoad(..., loader) invokes the loader and returns its
value, and ProductCacheManager.put(...) completes without throwing; reference
ProductCacheManager.get, ProductCacheManager.getOrLoad, ProductCacheManager.put
and use redisTemplate or the Redis testcontainer control to simulate the
failure.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java`:
- Around line 59-63: The tearDown method currently calls
databaseCleanUp.truncateAllTables() then redisCleanUp.truncateAll(), but if the
database cleanup throws the Redis cleanup is skipped; update tearDown to
guarantee both cleanups run regardless of exceptions (use try/finally or run
each in its own try and collect/suppress exceptions) so
databaseCleanUp.truncateAllTables() and redisCleanUp.truncateAll() are always
attempted and any exceptions are rethrown or aggregated after both run; also add
a unit test that simulates DatabaseCleanUp throwing to assert
RedisCleanUp.truncateAll() still executes.
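The "run both cleanups, aggregate failures" shape can be sketched as follows; `CleanUp` is a hypothetical stand-in for `DatabaseCleanUp`/`RedisCleanUp`:

```java
public class CleanupSketch {
    // Hypothetical stand-in for DatabaseCleanUp / RedisCleanUp.
    interface CleanUp { void truncate(); }

    // Runs both cleanups even if the first throws; rethrows the first
    // failure with any later failure attached as suppressed.
    static void cleanAll(CleanUp db, CleanUp redis) {
        RuntimeException failure = null;
        try {
            db.truncate();
        } catch (RuntimeException e) {
            failure = e;
        }
        try {
            redis.truncate();
        } catch (RuntimeException e) {
            if (failure == null) failure = e; else failure.addSuppressed(e);
        }
        if (failure != null) throw failure;
    }

    public static void main(String[] args) {
        boolean[] redisRan = {false};
        try {
            cleanAll(
                () -> { throw new RuntimeException("db down"); },
                () -> redisRan[0] = true);
        } catch (RuntimeException e) {
            // the db failure is still surfaced after the redis cleanup ran
        }
        if (!redisRan[0]) throw new AssertionError("redis cleanup skipped");
        System.out.println("redisRan=" + redisRan[0]);
    }
}
```

A plain try/finally achieves the same guarantee for two steps; the suppressed-exception variant scales to more cleanups without hiding the original failure.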
In
`@apps/commerce-api/src/test/java/com/loopers/engagement/productlike/infrastructure/acl/catalog/ProductLikeCountSyncerImplTest.java`:
- Around line 39-52: The current increaseLikeCountSuccess() test only verifies delegation on the happy path, so boundary cases for null input and exception propagation should be added; add new tests to ProductLikeCountSyncerImplTest that (1) verify the expected behavior when productLikeCountSyncerImpl.increaseLikeCount(null) is called (e.g., an NPE, or the input being ignored/validated), and (2) verify that when productCommandFacade.increaseLikeCount(...) throws, productLikeCountSyncerImpl correctly propagates or wraps the exception; also simplify the existing success-case verification by replacing the ArgumentCaptor with verify(productCommandFacade).increaseLikeCount(productId) to make the delegated-argument comparison explicit.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a4f31f55-e8e7-404e-8036-84bafc682096
⛔ Files ignored due to path filters (10)
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/dto/out/AdminProductDetailOutDto.java is excluded by `!**/out/**` and included by `**`
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/dto/out/AdminProductOutDto.java is excluded by `!**/out/**` and included by `**`
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/dto/out/ProductDetailOutDto.java is excluded by `!**/out/**` and included by `**`
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/dto/out/ProductOutDto.java is excluded by `!**/out/**` and included by `**`
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/port/out/query/ProductQueryPort.java is excluded by `!**/out/**` and included by `**`
- docs/todo/cache-event-driven-refresh.md is excluded by `!**/*.md` and included by `**`
- round5-docs/05-to-be-cache-measurement.md is excluded by `!**/*.md` and included by `**`
- round5-docs/06-2layer-cache-implementation-design.md is excluded by `!**/*.md` and included by `**`
- round5-docs/07-cache-eviction-analysis.md is excluded by `!**/*.md` and included by `**`
- round5-docs/08-cross-domain-index-and-cache-analysis.md is excluded by `!**/*.md` and included by `**`
📒 Files selected for processing (29)
- apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/CacheLock.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/IdListCacheEntry.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/LocalCacheLock.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheConstants.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheDto.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/RedisCacheLock.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImpl.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java
- apps/commerce-api/src/test/java/com/loopers/catalog/brand/application/facade/BrandCommandFacadeTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductCommandFacadeTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductLikeCountConcurrencyTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductStockConcurrencyTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/LocalCacheLockTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java
- apps/commerce-api/src/test/java/com/loopers/engagement/productlike/infrastructure/acl/catalog/ProductLikeCountSyncerImplTest.java
- apps/commerce-api/src/test/java/com/loopers/ordering/order/infrastructure/acl/catalog/OrderProductReaderImplTest.java
- round5-docs/05-to-be-cache-visualization.html
🚧 Files skipped from review as they are similar to previous changes (4)
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheConstants.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/IdListCacheEntry.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductStockConcurrencyTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/LocalCacheLockTest.java
// 상품 Read Model의 brand_name 일괄 동기화
productCommandService.syncBrandNameInReadModel(id, updatedBrand.getName().value());

// 상품 상세 캐시 write-through (해당 브랜드의 전체 상품)
List<Long> productIds = productQueryService.findActiveIdsByBrandId(id);
for (Long productId : productIds) {
    productCommandService.refreshProductDetailCache(productId);
}
🧩 Analysis chain
🏁 Script executed:
find . -type f -name "BrandCommandFacade.java" | head -20
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 187
🏁 Script executed:
cat -n ./apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 3903
🏁 Script executed:
find . -type f -name "ProductCommandService.java" | head -5
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 193
🏁 Script executed:
cat -n ./apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java | head -200
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 8231
🏁 Script executed:
cat -n ./apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java | sed -n '200,300p'
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2349
🏁 Script executed:
find . -type f -name "ProductCacheManager.java" | head -5
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 192
🏁 Script executed:
cat -n ./apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 10710
🏁 Script executed:
rg -n "refreshProductDetailCache" ./apps/commerce-api/src/main/java/com/loopers/catalog/brand/ -A 2 -B 2
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 794
Move the cache refresh to after the transaction commits.
Reading the full product ID list at lines 65-68 and performing a Redis write-through per product inside the same transaction causes two problems. First, the more products the brand has, the longer the DB transaction runs, increasing lock contention with other writes and response latency. Second, and more importantly, the cache is written before the DB commit, so on rollback or commit failure the cache runs ahead of the DB state, violating consistency.
Split the refresh out with a TransactionSynchronization so it runs right after commit, or handle bulk product refreshes in a separate batch job. As additional tests, verify the rollback case where an exception after the brand update leaves the cache untouched, and the path where the after-commit refresh runs correctly for a large product set.
Suggested fix
+import org.springframework.transaction.support.TransactionSynchronization;
+import org.springframework.transaction.support.TransactionSynchronizationManager;
...
// 상품 Read Model의 brand_name 일괄 동기화
productCommandService.syncBrandNameInReadModel(id, updatedBrand.getName().value());
- // 상품 상세 캐시 write-through (해당 브랜드의 전체 상품)
- List<Long> productIds = productQueryService.findActiveIdsByBrandId(id);
- for (Long productId : productIds) {
- productCommandService.refreshProductDetailCache(productId);
- }
+ // 캐시 갱신은 커밋 이후에 수행
+ List<Long> productIds = productQueryService.findActiveIdsByBrandId(id);
+ TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
+ @Override
+ public void afterCommit() {
+ productIds.forEach(productCommandService::refreshProductDetailCache);
+ }
+ });

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// 상품 Read Model의 brand_name 일괄 동기화
productCommandService.syncBrandNameInReadModel(id, updatedBrand.getName().value());
// 상품 상세 캐시 write-through (해당 브랜드의 전체 상품)
List<Long> productIds = productQueryService.findActiveIdsByBrandId(id);
for (Long productId : productIds) {
    productCommandService.refreshProductDetailCache(productId);
}

// 상품 Read Model의 brand_name 일괄 동기화
productCommandService.syncBrandNameInReadModel(id, updatedBrand.getName().value());
// 캐시 갱신은 커밋 이후에 수행
List<Long> productIds = productQueryService.findActiveIdsByBrandId(id);
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
    @Override
    public void afterCommit() {
        productIds.forEach(productCommandService::refreshProductDetailCache);
    }
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java`
around lines 61 - 68, The cache refresh loop currently runs inside the
BrandCommandFacade transactional flow (calls to
productQueryService.findActiveIdsByBrandId and
productCommandService.refreshProductDetailCache), which risks long transactions
and cache writes before DB commit; move the product detail cache refresh to run
after successful commit by registering a TransactionSynchronization
(TransactionSynchronizationManager.registerSynchronization) or publish a domain
event handled by a `@TransactionalEventListener`(phase = AFTER_COMMIT) that
receives the brand id and calls productQueryService.findActiveIdsByBrandId and
productCommandService.refreshProductDetailCache for each productId; keep
productCommandService.syncBrandNameInReadModel inside the transaction if it must
be transactional, and add tests that simulate an exception after brand update to
assert no cache refresh on rollback and that after-commit refresh runs for large
product sets.
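The after-commit split described above can be illustrated with a plain-Java stand-in for Spring's TransactionSynchronizationManager — `inTransaction` and the `afterCommit` list below are hypothetical scaffolding, not Spring types:

```java
import java.util.ArrayList;
import java.util.List;

public class AfterCommitSketch {
    // Minimal stand-in for transaction synchronization: callbacks registered
    // during the "transaction" fire only after a successful commit.
    static final List<Runnable> afterCommit = new ArrayList<>();

    static void inTransaction(Runnable body) {
        afterCommit.clear();
        try {
            body.run();           // DB work (brand update, read-model sync)
        } catch (RuntimeException e) {
            afterCommit.clear();  // rollback: registered callbacks are dropped
            throw e;
        }
        // commit succeeded: now it is safe to touch the cache
        afterCommit.forEach(Runnable::run);
    }

    public static void main(String[] args) {
        List<Long> refreshed = new ArrayList<>();

        // happy path: cache refresh runs only after commit
        inTransaction(() -> afterCommit.add(() -> refreshed.add(42L)));
        if (!refreshed.equals(List.of(42L))) throw new AssertionError();

        // rollback path: the cache is never written ahead of the DB
        refreshed.clear();
        try {
            inTransaction(() -> {
                afterCommit.add(() -> refreshed.add(7L));
                throw new RuntimeException("update failed");
            });
        } catch (RuntimeException expected) { }
        if (!refreshed.isEmpty()) throw new AssertionError("cache ran on rollback");
        System.out.println("ok");
    }
}
```

The rollback branch is the point of the review: with the write-through inside the transaction, the second scenario would have left a cache entry for data that was never committed.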
// 11. 상품 상세 캐시 write-through (Facade에서 호출 — Read Model projection 기반)
public void refreshProductDetailCache(Long productId) {
    productCacheManager.refreshProductDetail(productId, () -> productQueryPort.findProductCacheDtoById(productId));
}

// 12. 상품 상세 캐시 삭제 (상품 삭제 시 Facade에서 호출)
public void deleteProductDetailCache(Long productId) {
    productCacheManager.deleteProductDetail(productId);
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# ProductCommandFacade에서 캐시 메서드 호출 시점 확인
ast-grep --pattern $'@Transactional
$$$
public $_ $_($$$) {
$$$
productCommandService.refreshProductDetailCache($$$);
$$$
}'
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 67
🏁 Script executed:
# ProductCommandFacade 파일 위치 확인
fd -t f "ProductCommandFacade" --extension java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 298
🏁 Script executed:
# 캐시 메서드 호출 위치 검색
rg "refreshProductDetailCache|deleteProductDetailCache" -t java -B 5 -A 2
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 15849
🏁 Script executed:
# ProductCommandFacade의 주요 메서드 확인
head -100 apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 3336
🏁 Script executed:
# createProduct, updateProduct, deleteProduct 메서드의 `@Transactional` 설정 확인
rg "@Transactional|public.*Product.*\(" apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java -B 2 -A 15 | head -150
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2363
🏁 Script executed:
# ProductCommandService의 refreshProductDetailCache 메서드와 주변 코드 상세 확인
sed -n '195,215p' apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 812
🏁 Script executed:
# productCacheManager의 refreshProductDetail 메서드 구현 확인
rg "refreshProductDetail|deleteProductDetail" -t java --type-list java -A 10 | head -80
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 474
🏁 Script executed:
# productCacheManager 찾기
fd -t f "CacheManager" --extension java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 300
🏁 Script executed:
# ProductCacheManager 구현 확인
find apps/commerce-api -name "*CacheManager*" -type f
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 300
🏁 Script executed:
# ProductCacheManager의 refreshProductDetail, deleteProductDetail 메서드 확인
cat apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 8435
refreshProductDetailCache performs a DB read, so its transaction boundary needs review
refreshProductDetailCache accesses the DB via productQueryPort.findProductCacheDtoById(productId). The current code is invoked inside a @Transactional method of ProductCommandFacade, so the cache refresh happens before the DB transaction commits. If the cache operation fails in that window, the DB does not roll back, risking data inconsistency.
Suggested improvements:
- Explicitly separate the cache refresh from the facade method so it runs after the database commit, or use @TransactionalEventListener or a transaction callback (TransactionSynchronizationManager) to guarantee the refresh runs after commit.
- Since business logic proceeds even when a refreshProductDetailCache call fails, add monitoring alerts on top of logging (especially to verify cache-DB synchronization in production).
Additional tests:
- Verify the scenario where findProductCacheDtoById throws during a refreshProductDetailCache call.
- Verify consistency when the Read Model is re-queried after a cache write failure.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java`
around lines 201 - 210, The refreshProductDetailCache method performs a DB read
via productQueryPort.findProductCacheDtoById and is currently invoked inside a
transactional facade (ProductCommandFacade), causing cache refresh to run before
DB commit; change the flow so cache writes occur after transaction commit—either
remove the direct call from the transactional facade and invoke
ProductCommandService.refreshProductDetailCache from a
`@TransactionalEventListener` or register a
TransactionSynchronizationManager.afterCommit callback that calls
productCacheManager.refreshProductDetail and the productQueryPort lookup; ensure
failures in refreshProductDetailCache are logged and surfaced to
monitoring/alerting (not just swallowed) and add tests for exceptions from
productQueryPort.findProductCacheDtoById and for cache write failures to
validate eventual consistency.
List<Long> missedIds = extractMissedIds(idList.ids(), cached);
if (!missedIds.isEmpty()) {
    List<ProductCacheDto> fromDb = loadAndCacheDetails(missedIds);
    cached = mergeInOrder(idList.ids(), cached, fromDb);
}

// dangling ID 방어 (삭제되었으나 ID 리스트에 남은 경우 null skip)
List<ProductOutDto> content = cached.stream()
    .filter(Objects::nonNull)
    .map(ProductCacheDto::toProductOutDto)
    .toList();

return new ProductPageOutDto(content, page, size, idList.totalElements());
Filtering out only nulls on a partial miss leaves the page size and totalElements inconsistent.
If some detail lookups cannot be recovered, the current implementation silently drops those entries and returns the existing totalElements as-is. In production, if a stale ID list lingers right after a deletion or brand change, users repeatedly receive short pages with an incorrect total count for the duration of the cache TTL. When not all missedIds can be recovered from the DB, it is safer to immediately invalidate the ID-list cache and re-query once, or at minimum recompute the result count and total count together. Also add a test for the case where the ID-list cache hits but one detail row has already been deleted, verifying that page size and total count stay consistent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java`
around lines 103 - 115, When some missedIds cannot be recovered (i.e.
fromDb.size() < missedIds.size()) don’t silently drop nulls and return the old
total; instead detect that partial recovery happened after calling
loadAndCacheDetails(missedIds) and either (a) invalidate the ID-list cache and
re-fetch the id list (e.g. call the service/method that loads idList again) and
re-run the mergeInOrder/caching flow, or (b) recompute the page content and
totalElements consistently (adjust totalElements by the number of permanently
missing IDs or recalc from a fresh id query) before constructing
ProductPageOutDto; update the logic around extractMissedIds,
loadAndCacheDetails, mergeInOrder and the ProductPageOutDto construction to
perform this branch and add a unit test covering "ID list cached but one detail
deleted" to assert page size and totalElements remain consistent.
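Option (b) from the prompt — recomputing totalElements from the count of permanently missing IDs — can be sketched as follows; `assemble`, `Page`, and the map parameters are hypothetical simplifications of the service's merge step:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PartialMissSketch {
    record Page(List<String> content, long totalElements) {}

    // Merges cached and DB-loaded details in id order; if an id is gone from
    // the DB too, it is dropped AND totalElements shrinks accordingly,
    // instead of returning a short page with a stale total.
    static Page assemble(List<Long> ids, Map<Long, String> cached,
                         Map<Long, String> fromDb, long cachedTotal) {
        List<String> content = new ArrayList<>();
        int permanentlyMissing = 0;
        for (Long id : ids) {
            String detail = cached.getOrDefault(id, fromDb.get(id));
            if (detail == null) {
                permanentlyMissing++; // dangling id: deleted but still in the list cache
            } else {
                content.add(detail);
            }
        }
        return new Page(content, cachedTotal - permanentlyMissing);
    }

    public static void main(String[] args) {
        List<Long> ids = List.of(1L, 2L, 3L);
        Map<Long, String> cached = Map.of(1L, "p1");
        Map<Long, String> fromDb = Map.of(2L, "p2"); // id 3 was deleted
        Page page = assemble(ids, cached, fromDb, 3L);
        if (page.content().size() != 2) throw new AssertionError();
        if (page.totalElements() != 2L) throw new AssertionError();
        System.out.println(page.content().size() + "/" + page.totalElements());
    }
}
```

Option (a) — invalidating the ID-list cache and re-fetching — is stronger because it also repairs the list for subsequent requests; the adjustment above only keeps a single response self-consistent.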
@Test
@DisplayName("[PUT + GET /api/v1/products/{productId}] 상품 수정 후 상세 조회 -> 캐시 무효화되어 수정된 데이터 반환")
void updateProductThenGetReturnsUpdatedData() throws Exception {
    // Arrange
    Long brandId = createBrandAndGetId("나이키", "스포츠 브랜드");
    Long productId = createProductAndGetId(brandId, "에어맥스", new BigDecimal("129000"), 100L, "러닝화");

    // 상세 조회 (캐시에 저장됨)
    mockMvc.perform(get("/api/v1/products/{productId}", productId))
        .andExpect(status().isOk())
        .andExpect(jsonPath("$.name").value("에어맥스"));

    // 상품 수정 (캐시 무효화 발생)
    AdminProductUpdateRequest updateRequest = new AdminProductUpdateRequest(
        "에어맥스 97", new BigDecimal("159000"), 200L, "레트로 러닝화");
    mockMvc.perform(put("/api-admin/v1/products/{productId}", productId)
        .header(ADMIN_LDAP_HEADER, ADMIN_LDAP_VALUE)
        .contentType(MediaType.APPLICATION_JSON)
        .content(objectMapper.writeValueAsString(updateRequest)))
        .andExpect(status().isOk());

    // Act & Assert — 수정된 데이터가 반환되어야 함 (캐시 무효화 검증)
    mockMvc.perform(get("/api/v1/products/{productId}", productId))
        .andExpect(status().isOk())
        .andExpect(jsonPath("$.name").value("에어맥스 97"))
        .andExpect(jsonPath("$.price").value(159000))
        .andExpect(jsonPath("$.stock").value(200))
        .andExpect(jsonPath("$.description").value("레트로 러닝화"));
}
The cache invalidation test never inspects actual Redis state, so false positives are possible
The current assertions only check that the second GET returns the latest data. They would pass even if the first GET failed to write to Redis, or if the read path bypassed the cache entirely, so a regression in detail-cache (product:v1:{productId}) invalidation could slip into production. To pin down the invalidation path, directly verify that the key exists after the first GET, is deleted after the PUT, and is recreated after the second GET.
Suggested fix
+import static org.assertj.core.api.Assertions.assertThat;
+
+import com.loopers.catalog.product.infrastructure.cache.ProductCacheConstants;
import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.data.redis.core.StringRedisTemplate;
...
@Autowired
private RedisCleanUp redisCleanUp;
+
+ @Autowired
+ private StringRedisTemplate stringRedisTemplate;
...
void updateProductThenGetReturnsUpdatedData() throws Exception {
// Arrange
Long brandId = createBrandAndGetId("나이키", "스포츠 브랜드");
Long productId = createProductAndGetId(brandId, "에어맥스", new BigDecimal("129000"), 100L, "러닝화");
+ String cacheKey = ProductCacheConstants.DETAIL_KEY_PREFIX + productId;
// 상세 조회 (캐시에 저장됨)
mockMvc.perform(get("/api/v1/products/{productId}", productId))
.andExpect(status().isOk())
.andExpect(jsonPath("$.name").value("에어맥스"));
+ assertThat(stringRedisTemplate.hasKey(cacheKey)).isTrue();
// 상품 수정 (캐시 무효화 발생)
AdminProductUpdateRequest updateRequest = new AdminProductUpdateRequest(
"에어맥스 97", new BigDecimal("159000"), 200L, "레트로 러닝화");
mockMvc.perform(put("/api-admin/v1/products/{productId}", productId)
.header(ADMIN_LDAP_HEADER, ADMIN_LDAP_VALUE)
.contentType(MediaType.APPLICATION_JSON)
.content(objectMapper.writeValueAsString(updateRequest)))
.andExpect(status().isOk());
+ assertThat(stringRedisTemplate.hasKey(cacheKey)).isFalse();
// Act & Assert — 수정된 데이터가 반환되어야 함 (캐시 무효화 검증)
mockMvc.perform(get("/api/v1/products/{productId}", productId))
.andExpect(status().isOk())
.andExpect(jsonPath("$.name").value("에어맥스 97"))
.andExpect(jsonPath("$.price").value(159000))
.andExpect(jsonPath("$.stock").value(200))
.andExpect(jsonPath("$.description").value("레트로 러닝화"));
+ assertThat(stringRedisTemplate.hasKey(cacheKey)).isTrue();
}
As an additional test, also checking that the value stored in Redis after the second GET is serialized with the updated fields would guard the re-cache path against regressions as well. As per coding guidelines, "**/*Test*.java: Java test review criteria. Check that unit tests cover boundary values, failure cases, and exception flows. Check integration tests for isolation level, flakiness risk, and test-data setup/cleanup strategy."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java`
around lines 740 - 768, The test updateProductThenGetReturnsUpdatedData
currently only asserts responses and may miss cache behavior; modify it to
inspect Redis directly using the project's Redis client (e.g.,
StringRedisTemplate or RedisTemplate) to (1) after the first mockMvc GET assert
the presence of the cache key "product:v1:{productId}", (2) after the PUT assert
that the same key was removed (invalidated), and (3) after the final GET assert
the key was recreated and its serialized value contains the updated fields
(name/price/stock/description); use existing symbols in the test (mockMvc,
objectMapper, ADMIN_LDAP_HEADER/ADMIN_LDAP_VALUE, productId) and add a
test-scoped injection of the Redis template to perform these key existence and
value assertions.
// Assert — Product → OrderProductInfo 변환 전체 필드 검증
assertAll(
    () -> assertThat(result).hasSize(2),
    () -> assertThat(result.get(0).productId()).isEqualTo(1L),
    () -> assertThat(result.get(0).name()).isEqualTo("나이키 에어맥스"),
    () -> assertThat(result.get(0).price()).isEqualByComparingTo(new BigDecimal("100000")),
    () -> assertThat(result.get(0).stock()).isEqualTo(10L),
    () -> assertThat(result.get(1).productId()).isEqualTo(2L),
    () -> verify(productQueryFacade).findActiveByIds(productIds)
    () -> assertThat(result.get(1).name()).isEqualTo("아디다스 울트라부스트"),
    () -> assertThat(result.get(1).price()).isEqualByComparingTo(new BigDecimal("200000")),
    () -> assertThat(result.get(1).stock()).isEqualTo(5L)
Verifying only happy-path values does not sufficiently pin this ACL against regressions.
This change strengthens field-mapping verification, but it no longer guarantees that findActiveByIds() is called exactly once, nor that the exception flow on lookup failure is preserved. A duplicate call at this boundary immediately increases cache/catalog load, and a change in failure handling can turn into an order-creation outage. It is safer to add verify(productQueryFacade).findActiveByIds(productIds) and verifyNoMoreInteractions(productQueryFacade) after the assertAll to pin the batch lookup to a single call, and to verify the propagation/translation policy of readProducts() in a separate test that injects an exception from findActiveByIds().
Suggested diff
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.verifyNoMoreInteractions;
@@
assertAll(
() -> assertThat(result).hasSize(2),
() -> assertThat(result.get(0).productId()).isEqualTo(1L),
() -> assertThat(result.get(0).name()).isEqualTo("나이키 에어맥스"),
() -> assertThat(result.get(0).price()).isEqualByComparingTo(new BigDecimal("100000")),
() -> assertThat(result.get(0).stock()).isEqualTo(10L),
() -> assertThat(result.get(1).productId()).isEqualTo(2L),
() -> assertThat(result.get(1).name()).isEqualTo("아디다스 울트라부스트"),
() -> assertThat(result.get(1).price()).isEqualByComparingTo(new BigDecimal("200000")),
() -> assertThat(result.get(1).stock()).isEqualTo(5L)
);
+ verify(productQueryFacade).findActiveByIds(productIds);
+ verifyNoMoreInteractions(productQueryFacade);

As per coding guidelines, "Check that unit tests cover boundary values, failure cases, and exception flows."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/test/java/com/loopers/ordering/order/infrastructure/acl/catalog/OrderProductReaderImplTest.java`
around lines 62 - 72, The test currently only asserts successful field mapping;
ensure it also verifies interaction counts by adding
verify(productQueryFacade).findActiveByIds(productIds) and
verifyNoMoreInteractions(productQueryFacade) after the assertAll to lock the
batch-query to one call, and add a separate unit test in
OrderProductReaderImplTest that stubs productQueryFacade.findActiveByIds(...) to
throw and asserts that readProducts(...) (or the
OrderProductReaderImpl.readProducts method) propagates or handles the exception
per policy; reference productQueryFacade, findActiveByIds, and readProducts when
locating where to add the interaction verifications and the new failure-case
test.
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>TO-BE 캐시 성능 측정 결과 시각화</title>
<script src="https://cdn.jsdelivr.net/npm/chart.js@4.4.7/dist/chart.umd.min.js"></script>
🧩 Analysis chain
🏁 Script executed:
head -50 round5-docs/05-to-be-cache-visualization.html
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2319
🏁 Script executed:
wc -l round5-docs/05-to-be-cache-visualization.html
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 134
🏁 Script executed:
# Chart.js 로딩 및 에러 처리 관련 코드 검색
rg -n "chart|Chart|script|error|catch|fallback" round5-docs/05-to-be-cache-visualization.html -A 2 -B 2
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 6700
Relying solely on an external CDN breaks the benchmark document's reproducibility.
Operationally, if this measurement artifact fails to render due to network conditions or CDN policy, it can no longer serve as evidence for later regression analysis or audits. It is safer to self-host Chart.js as a fixed asset inside the repository alongside this file, or at minimum add integrity and crossorigin attributes plus a load-failure fallback. As an additional test, open this HTML in the browser's offline mode and confirm the chart still renders or a fallback notice is shown.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@round5-docs/05-to-be-cache-visualization.html` at line 7, the HTML currently loads Chart.js
only from an external CDN (https://cdn.jsdelivr.net/...), so rendering can fail depending on
network/service availability; either self-host Chart.js as a fixed asset in the repository (e.g.,
copy it to the project's assets/chart.chart.umd.min.js) or at minimum add integrity and crossorigin
attributes for SRI to the <script> tag and implement a fallback to a local copy or an alternative
notice on load failure; the target of the change is the <script src="...chart.umd.min.js"> tag, and
after the change, open the page in the browser's offline mode to confirm the chart renders
correctly or a fallback notice is displayed.
<h3>캐시 적용 효과 요약 (1000만건 실측 기준)</h3>
<ul>
<li><strong>Cache Hit 응답시간 3.5~5ms</strong>: 데이터 규모와 완전 무관. 10만건이든 1000만건이든 동일 (전체 실측).</li>
<li><strong>1000만건 목록 API</strong>: Cache Hit <strong>3.87~4.97ms</strong>, Cache Miss <strong>5.13~10.75ms</strong> (실측)</li>
<li><strong>1000만건 버스트 Hit</strong>: avg 24~50ms, p95 38~77ms (100 concurrent, 실측)</li>
<li><strong>1000만건 지속 부하</strong>: avg 10~12ms, 에러율 0%, QPS 20.0 달성 (20 RPS x 10초, 실측)</li>
<li><strong>DB 부하 95% 감소</strong>: 캐시 적중률 95% 달성 시, DB는 전체 트래픽의 5%만 처리</li>
<li><strong>Redis 장애 시에도 서비스 유지</strong>: try-catch fallback + 인덱스 최적화 DB 조회</li>
The summary figures in the conclusion do not match the chart source data.
Operationally, conflicting numbers within a performance document undermine regression comparison and the evidence base for decisions. For example, the Cache Hit 3.87~4.97ms and Cache Miss 5.13~10.75ms figures at line 402 do not match this file's 10M-row listing chart data, so the conclusion text should either be derived from the same dataset constant or corrected to the actual values. Add an automated check that cross-references the KPI/conclusion text against the chart dataset to prevent document drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@round5-docs/05-to-be-cache-visualization.html` around lines 399 - 406, The
summary KPI numbers (e.g., the strings "Cache Hit 3.87~4.97ms" and "Cache Miss
5.13~10.75ms" in the list under the H3 heading) are inconsistent with the
10M-listing chart dataset; update the text to derive its numbers from the same
chart dataset constant (or correct the hard-coded values to match the chart) and
add an automated validation that compares the computed KPI text values against
the chart dataset (fail build/test if they differ) so the "1000만건 목록" chart and
the corresponding conclusion strings always stay in sync.
Force-pushed from f2cd6b9 to 4178526
Actionable comments posted: 6
♻️ Duplicate comments (5)
round5-docs/05-to-be-cache-visualization.html (3)
307-389: ⚠️ Potential issue | 🟠 Major
The stampede-protection description does not adequately convey distributed-environment behavior.
The explanation and table at lines 307-389 read as centered on LocalCacheLock, inviting the misconception that a globally single DB access is always guaranteed across multiple instances. Operationally, a misunderstood lock path leads to traffic amplification during incidents and failed reproductions. Separate the single-node path (LocalCacheLock) from the distributed path (RedisCacheLock), make the lock-acquisition success/failure → wait/retry/stale-return branches explicit, and mark where PER and TTL jitter apply in each branch. As an additional check, validate the document's diagram/table against the actual CacheLock implementations and the benchmark topology.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@round5-docs/05-to-be-cache-visualization.html` around lines 307 - 389, The documentation currently implies LocalCacheLock provides a global single-reader guarantee; update the Cache-Aside diagram and the "스탬피드 보호 3계층" table to explicitly separate LocalCacheLock (per-instance) and RedisCacheLock (distributed), and add branching for lock acquisition outcomes (lock_acquired → DB load + Redis SET; lock_failed → wait/retry with backoff OR return stale), marking where PER and TTL Jitter apply in each branch; also annotate the diagram/table to show which techniques are per-node vs global and add a note to validate the doc against the actual CacheLock implementation (CacheLock, LocalCacheLock, RedisCacheLock) and benchmark/topology tests to confirm behavior under failure and high concurrency.
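The per-node vs. global distinction matters because a local single-flight guard only dedupes within one JVM; across instances a distributed lease (e.g., Redis SET NX) is still required. A minimal sketch of such a per-instance guard, with hypothetical names unrelated to the repository's CacheLock:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class SingleFlightSketch {
    // Concurrent misses for the same key share one loader call within this JVM.
    static final Map<String, CompletableFuture<String>> inFlight = new ConcurrentHashMap<>();

    static String load(String key, Supplier<String> loader) {
        CompletableFuture<String> mine = new CompletableFuture<>();
        CompletableFuture<String> winner = inFlight.putIfAbsent(key, mine);
        if (winner != null) {
            return winner.join();          // another caller is already loading
        }
        try {
            String value = loader.get();   // exactly one DB hit per in-flight key
            mine.complete(value);          // (error completion elided in this sketch)
            return value;
        } finally {
            inFlight.remove(key);          // allow future reloads after expiry
        }
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger dbCalls = new AtomicInteger();
        CountDownLatch release = new CountDownLatch(1);
        Supplier<String> loader = () -> {
            dbCalls.incrementAndGet();
            try { release.await(); } catch (InterruptedException ignored) { }
            return "detail";
        };
        Thread a = new Thread(() -> load("product:v1:1", loader));
        a.start();
        // wait until the winner has registered its in-flight future
        while (!inFlight.containsKey("product:v1:1")) Thread.onSpinWait();
        Thread b = new Thread(() -> load("product:v1:1", loader));
        b.start();
        // wait until the second caller is parked on the shared future
        while (b.getState() != Thread.State.WAITING) Thread.onSpinWait();
        release.countDown();
        a.join();
        b.join();
        if (dbCalls.get() != 1) throw new AssertionError("loader ran " + dbCalls.get() + " times");
        System.out.println("dbCalls=" + dbCalls.get());
    }
}
```

With N application instances, N such guards still allow up to N concurrent DB loads per key, which is exactly why the document should present LocalCacheLock and RedisCacheLock as separate layers.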
7-7: ⚠️ Potential issue | 🟠 Major
Sole dependence on an external CDN degrades the document's reproducibility and availability.
Line 7 means a CDN outage or block fails to render every chart, breaking the document's credibility as regression-analysis evidence. Prefer a local (self-hosted) bundle with the CDN as backup, or at minimum add SRI (integrity) and crossorigin attributes plus a load-failure notice. As additional tests, verify that the chart or a fallback notice appears in browser offline mode and in a CDN-blocked environment.
Suggested fix
-<script src="https://cdn.jsdelivr.net/npm/chart.js@4.4.7/dist/chart.umd.min.js"></script>
+<script src="./assets/chart.umd.min.js"></script>
+<script>
+  if (!window.Chart) {
+    const s = document.createElement('script');
+    s.src = "https://cdn.jsdelivr.net/npm/chart.js@4.4.7/dist/chart.umd.min.js";
+    s.crossOrigin = "anonymous";
+    // s.integrity = "검증된 SRI 해시값";
+    s.onerror = () => {
+      document.body.insertAdjacentHTML(
+        "afterbegin",
+        '<div class="note-banner"><strong>차트 로드 실패</strong><br>네트워크 또는 CDN 상태를 확인해야 한다.</div>'
+      );
+    };
+    document.head.appendChild(s);
+  }
+</script>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@round5-docs/05-to-be-cache-visualization.html` at line 7, The HTML currently loads Chart.js solely from the external CDN via the script tag referencing "chart.umd.min.js", which risks rendering failure if the CDN is blocked; change this to prefer a local self-hosted bundle (serve a local copy of chart.umd.min.js), keep the CDN URL as a fallback, and add SRI integrity and crossorigin attributes for the CDN script; also attach a load/error handler (or small inline script after the tag) that falls back to the local file when the CDN fails and displays a user-facing message if both fail; verify by testing in offline/browser CDN-blocked scenarios to ensure charts or a clear alternative message appear.
399-406: ⚠️ Potential issue | 🟡 Minor: The conclusion KPIs are decoupled from the chart data, so they drift.
Lines 399-406 hardcode the summary numbers, so mismatches with the dataset will keep recurring. In an operations report, inconsistent conclusion figures undermine decision-making confidence. Change the conclusion KPIs to be computed from the chart-data constants at render time, and add a check that fails when the values diverge. As an additional test, automatically compare the "1000만건 목록 API" summary figures against the min/max of the hitMissAll data.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@round5-docs/05-to-be-cache-visualization.html` around lines 399 - 406, The summary KPIs are hardcoded and can drift from the chart data; change the static numbers in the HTML summary block (the "1000만건 목록 API" and related lines) to be computed at render time from the chart data constant hitMissAll (e.g., compute min/max and averages from hitMissAll and use those values in the rendered text), and add a unit test/assertion that validates the computed summary equals the chart data (for example compare the rendered "1000만건 목록 API" min/max to Math.min/Math.max over hitMissAll and fail the build if they differ) so any future data changes will fail CI rather than produce inconsistent reports.

apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java (1)
102-115: ⚠️ Potential issue | 🟠 Major: Silently filtering partial-recovery failures corrupts the page metadata.
If only some of the missedIds are recovered from the DB at line 105 and lines 110-115 merely filter out nulls, then while a stale ID list lingers right after a deletion, users see a short page yet keep receiving the old totalElements. The cost of re-querying the missing IDs from the DB also recurs on every repeat of the same request. It is safer to invalidate the ID-list cache immediately and re-query once when fromDb.size() < missedIds.size(), or at minimum recompute content and totalElements together. Also add a regression test for the case "the ID-list cache hit but one detail row was already deleted" to verify that the page size and totalElements always agree.

Also applies to: 225-230
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java` around lines 102 - 115, The partial-recovery path in ProductQueryService (around extractMissedIds, loadAndCacheDetails, mergeInOrder) can leave stale IDs filtered out later while totalElements (idList.totalElements()) remains outdated; change the logic so that if fromDb.size() < missedIds.size() you invalidate or refresh the ID-list cache and re-run the id fetch (or recompute both content and totalElements from the refreshed ID list) instead of silently filtering nulls, and ensure mergeInOrder is only used after a successful full recovery; also add a regression test that simulates "ID-list cache hit but some detail rows deleted" to assert that ProductPageOutDto.content.size() and totalElements are consistent after the request.

apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java (1)
177-186: ⚠️ Potential issue | 🟠 Major: The PER refresh path recreates a stampede and writes null into the cache.
Lines 179-186 submit a refresh task on every near-expiry hit, so the loader fans out on hot keys. Worse, at lines 181-182 even the fresh == null case is put() as-is, so "null" can be stored and subsequent lookups keep falling through as misses while emitting warn logs. It is safer to dedupe the async refresh so it runs at most once per key, and to store only when fresh != null, otherwise explicitly evicting or writing a sentinel. Add regression tests that send 100 concurrent requests for the same key to confirm the loader is called only once, and that refreshing a deleted product leaves no "null" behind.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java` around lines 177 - 186, The PER async refresh currently fans out on hot keys and may cache nulls; change the async refresh logic in the block using shouldEarlyRefresh, CompletableFuture.runAsync, loader, put and perExecutor to deduplicate refreshes per key (ensure only one in-flight refresh per key, e.g., track in-flight futures) and when the loader returns apply: if fresh != null then put(key, fresh, ttl) else evict(key) or write a sentinel rather than storing null; also add regression tests that send 100 concurrent requests for the same key asserting loader is invoked only once and that a deleted/absent product never leaves a "null" value in cache (test both put/evict behavior).
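The dedupe-and-evict behavior the prompt above asks for can be sketched in isolation. This is a minimal illustration, not the project's ProductCacheManager: the RefreshDeduper name, the plain Map standing in for Redis, and the Supplier loader are assumptions for the sketch.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch: deduplicate background refreshes per cache key so a
// hot key triggers at most one in-flight loader call, and a null result
// evicts the entry instead of caching "null".
public class RefreshDeduper {
    private final Map<String, CompletableFuture<Void>> inFlight = new ConcurrentHashMap<>();

    public CompletableFuture<Void> refresh(String key, Supplier<String> loader, Map<String, String> cache) {
        // computeIfAbsent guarantees only one refresh task per key at a time
        return inFlight.computeIfAbsent(key, k ->
            CompletableFuture.runAsync(() -> {
                String fresh = loader.get();
                if (fresh != null) {
                    cache.put(k, fresh);   // store only non-null values
                } else {
                    cache.remove(k);       // evict instead of caching "null"
                }
            }).whenComplete((v, e) -> inFlight.remove(k)));
    }
}
```

Here computeIfAbsent ensures at most one in-flight refresh per key; once the task completes it removes itself so a later near-expiry hit can refresh again.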
🧹 Nitpick comments (5)
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/dto/IdListCacheEntry.java (1)
13-14: Immutability of the ids list is not guaranteed.
The externally supplied List<Long> is stored as-is, so if the caller mutates the original list, the cached value can be corrupted. The JSON serialization/deserialization path creates a fresh list and is unaffected, but without a defensive copy, direct construction in tests or elsewhere can produce hard-to-debug bugs.

♻️ Example applying a defensive copy

-public record IdListCacheEntry(List<Long> ids, long totalElements) {
-}
+public record IdListCacheEntry(List<Long> ids, long totalElements) {
+    public IdListCacheEntry {
+        ids = ids != null ? List.copyOf(ids) : List.of();
+    }
+}

As per coding guidelines, `**/*.java`: check null handling, defensive copies, immutability, and the stability of equals/hashCode/toString implementations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/dto/IdListCacheEntry.java` around lines 13 - 14, IdListCacheEntry currently stores the provided List<Long> as-is, allowing external mutation to corrupt cache; implement a canonical constructor for the record IdListCacheEntry that validates ids is non-null (Objects.requireNonNull), performs a defensive copy via List.copyOf(ids) (or new ArrayList<>(ids) wrapped as unmodifiable), and optionally validate totalElements (e.g., non-negative), so the record stores an immutable list and cannot be modified by callers or tests.

apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java (1)
60-61: It is safer to tie the PER-dedicated thread pool to the component lifecycle.
The fixed thread pool at line 61 has no shutdown hook, so refresh tasks can linger on application shutdown or test-context recreation. In production this can lead to delayed graceful shutdowns or hanging tests. It is safer to inject a Spring-managed executor, or at minimum call shutdown/awaitTermination from @PreDestroy, and add a test that the executor terminates cleanly when the context closes.

Suggested fix

+import jakarta.annotation.PreDestroy;
+
 // dedicated thread pool for PER async refresh (prevents ForkJoinPool exhaustion)
 private final ExecutorService perExecutor = Executors.newFixedThreadPool(3);
+
+@PreDestroy
+void shutdownPerExecutor() {
+    perExecutor.shutdown();
+    try {
+        if (!perExecutor.awaitTermination(5, TimeUnit.SECONDS)) {
+            perExecutor.shutdownNow();
+        }
+    } catch (InterruptedException e) {
+        Thread.currentThread().interrupt();
+        perExecutor.shutdownNow();
+    }
+}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java` around lines 60 - 61, The fixed-size ExecutorService perExecutor declared in ProductCacheManager should be tied to the component lifecycle to avoid leaked threads: replace or manage perExecutor so Spring controls it (inject a TaskExecutor/ThreadPoolTaskExecutor) or add lifecycle shutdown handling; specifically, update the ProductCacheManager class to either accept an injected Executor/TaskExecutor instead of using Executors.newFixedThreadPool(3) for perExecutor, or add a `@PreDestroy` method that calls perExecutor.shutdown() and awaits termination (with a timeout) to ensure tasks finish on context close and tests don't hang.apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java (1)
490-516: It is safer to add boundary cases for tied sort keys.
In the current test the price and likeCount values all differ, so it passes even if the id desc tie-breaker added to the Querydsl query breaks. In production, products with the same price or the same like count are common; if this ordering wobbles, duplicates and omissions appear at page boundaries and the ID-list cache also pins the wrong order. It is safer to create several products with the same price or likeCount and verify that the larger id comes first and that the order stays stable on the next page. As per coding guidelines, `**/*Test*.java`: check that unit tests cover boundary values, failure cases, and exception flows.
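To make the intended ordering concrete, here is a minimal sketch of the PRICE_ASC sort with an id desc tie-breaker, using a hypothetical Row record in place of ProductEntity and plain in-memory sorting in place of Querydsl:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-in for the entity: only the fields the sort touches.
public record Row(long id, int price) {
    // PRICE_ASC with an "id desc" tie-breaker, mirroring the intended order clause.
    public static final Comparator<Row> PRICE_ASC_ID_DESC =
        Comparator.comparingInt(Row::price)
                  .thenComparing(Comparator.comparingLong(Row::id).reversed());

    public static List<Long> sortedIds(List<Row> rows) {
        return rows.stream().sorted(PRICE_ASC_ID_DESC).map(Row::id).toList();
    }
}
```

A regression test against the repository should assert the same ordering so that tied keys never reshuffle across page boundaries.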
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java` around lines 490 - 516, Add boundary tests to verify the Querydsl tie-breaker (id desc) when sorting by PRICE_ASC or LIKE_COUNT so ordering is stable: extend or add tests alongside searchProductIdsSuccess to create multiple ProductEntity instances with identical price (and a separate case with identical likeCount), ensure their ids differ, call productQueryPort.searchProductIds with ProductSortType.PRICE_ASC (and a LIKE_COUNT sort test), and assert that within the same page the higher id appears first and that pagination retains stable order across pages (use PageCriteria to split results). Specifically target the searchProductIdsSuccess scenario and assert ordering of result.ids() and consistency on subsequent pages to catch regressions in the tie-breaker logic.

apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java (2)
57-79: An exception-propagation test case is missing.
The current test only verifies the case where productQueryService.getOrLoadProductDetail() returns a normal response. A test is needed to verify that the Facade correctly propagates a CoreException raised by the service.

@Test
@DisplayName("[getProduct()] 존재하지 않는 상품 조회 -> CoreException 전파")
void getProductNotFound() {
    // Arrange
    given(productQueryService.getOrLoadProductDetail(999L))
        .willThrow(new CoreException(ErrorType.PRODUCT_NOT_FOUND));

    // Act & Assert
    assertThatThrownBy(() -> productQueryFacade.getProduct(999L))
        .isInstanceOf(CoreException.class)
        .hasFieldOrPropertyWithValue("errorType", ErrorType.PRODUCT_NOT_FOUND);
}

As per coding guidelines, `**/*Test*.java`: check that unit tests cover boundary values, failure cases, and exception flows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java` around lines 57 - 79, Add a unit test in ProductQueryFacadeTest that verifies exception propagation when productQueryService.getOrLoadProductDetail(...) throws a CoreException; stub productQueryService.getOrLoadProductDetail(999L) to willThrow(new CoreException(ErrorType.PRODUCT_NOT_FOUND)) and assert that calling productQueryFacade.getProduct(999L) throws CoreException with errorType == ErrorType.PRODUCT_NOT_FOUND (use assertThatThrownBy or equivalent and reference productQueryService.getOrLoadProductDetail, productQueryFacade.getProduct, CoreException, and ErrorType.PRODUCT_NOT_FOUND).
114-131: The admin detail-lookup test also needs an exception case.
The getAdminProduct() test likewise verifies only the happy path. It is worth adding an exception-propagation test for the case where the product does not exist.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java` around lines 114 - 131, Add a negative test to ProductQueryFacadeTest that verifies getAdminProduct(Long) propagates the service exception: mock productQueryService.getAdminProductDetail(1L) to throw the appropriate exception (e.g., ProductNotFoundException or NoSuchElementException), call productQueryFacade.getAdminProduct(1L) and assert that the same exception is thrown, and verify productQueryService.getAdminProductDetail(1L) was invoked; place this alongside the existing getAdminProductSuccess test to cover the error path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java`:
- Around line 203-211: refreshProductDetailCache and deleteProductDetailCache
currently let Redis exceptions bubble up and can cause transaction rollbacks;
wrap the calls to productCacheManager.refreshProductDetail(productId, ...) and
productCacheManager.deleteProductDetail(productId) in try/catch that catches
Exception, logs a warning including productId and the exception (e.g.,
log.warn("[Cache] ... productId={}", productId, e)), and suppresses the
exception so cache failures don't break the surrounding `@Transactional` flow;
also consider adding monitoring (a Micrometer counter) in the catch and add an
integration test to verify product create/update/delete succeed when Redis is
unavailable.
- Around line 214-230: refreshIdListCacheForAllSorts() and
refreshIdListCacheForSort() synchronously run many DB queries via
refreshSingleIdList() inside transactional create/update/delete flows; change
this to decouple cache updates by making the refresh asynchronous or
event-driven: replace direct calls to
refreshIdListCacheForAllSort/refreshIdListCacheForSort from transactional
methods with either (a) an `@Async-invoked` wrapper or (b) publish a
CacheRefreshEvent that an application listener handles and calls
refreshIdListCacheForSort/refreshSingleIdList off the transaction; alternatively
implement an evict-only path (call a new
evictIdListCacheForSort/evictSingleIdList) from the transaction and let the
async listener lazily rebuild pages, and for createProduct consider keeping only
page=0 synchronous by invoking refreshSingleIdList(brandId, sortType, 0) and
delegating remaining pages to the async/event handler.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java`:
- Around line 151-164: The double-check inside the lock uses get() (which goes
through readTemplate and may prefer replicas), so change that second-check to
perform a master-bound read (bypass replica-preferred readTemplate) so the lock
sees the authoritative value; specifically, inside
cacheLock.executeWithLock(...) replace the Optional<T> doubleCheck = get(key,
type) call with a master/primary read variant (use the master-bound readTemplate
or a getMaster(key,type) helper) so it returns the master value even when
replicas are preferred, then keep the existing put(...) and loader.get() flow.
Also add a concurrency test that stubs replicas to return stale/miss and the
master to return the cached value to assert loader.get() is invoked exactly once
under simultaneous requests.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/lock/LocalCacheLockTest.java`:
- Around line 34-83: The test DisplayName is inconsistent with its assertion: in
LocalCacheLockTest.sameKeyConcurrentRequests_loaderCalledOnce the `@DisplayName`
claims "loader 1회만 실행" but the assertion checks loaderCallCount == threadCount
(100). Fix by updating the `@DisplayName` to accurately describe the test (e.g.,
"같은 key 100개 동시 요청 → 직렬 실행으로 loader 100회 호출") or, if you intend to verify
stampede protection, change the test to call ProductCacheManager's double-check
flow instead of LocalCacheLock.executeWithLock; update the DisplayName to match
whichever behavior (LocalCacheLock.executeWithLock or ProductCacheManager) the
test actually verifies.
In `@round5-docs/05-to-be-cache-visualization.html`:
- Around line 314-339: The HTML arrow sequences like "-->" (e.g., "Client -->
Controller", "Redis GET --> [HIT]", "상품 CUD --> Active Invalidation") contain
raw ">" characters that trigger HTMLHint spec-char-escape; replace each ">" in
those arrow tokens with the escaped entity "&gt;" (so "--&gt;") across the block
(lines containing "Client --> Controller", "Redis GET -->", "PER (Probabilistic
Early Refresh) - TTL", and the cache invalidation lines) and then re-run
HTMLHint on the file to confirm zero errors.
- Around line 354-361: The HTML docs show TTLs that don't match the
implementation: update the HTML in round5-docs/05-to-be-cache-visualization.html
to reflect the actual constants in ProductCacheConstants (use DETAIL_TTL = 2
minutes + jitter and ID_LIST_TTL = 3 minutes + jitter, and change the
"consistency window" text from 5~10 minutes to 2~3 minutes), and add a
validation test (e.g., `@Test` void validateDocumentTtlValuesMatchCode()) that
reads ProductCacheConstants.ID_LIST_TTL and ProductCacheConstants.DETAIL_TTL to
assert the published documentation values match the code so future drift is
caught automatically.
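As background for the TTL figures above, TTL jitter is just a bounded random add-on to the base TTL. The sketch below uses the 2-minute/3-minute values the finding cites; the TtlJitter class and constant names are illustrative, not the actual ProductCacheConstants:

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative constants matching the values the review cites; the real
// definitions live in ProductCacheConstants in the codebase.
public final class TtlJitter {
    public static final Duration DETAIL_TTL = Duration.ofMinutes(2);
    public static final Duration ID_LIST_TTL = Duration.ofMinutes(3);

    private TtlJitter() { }

    // Spread expirations by adding up to maxJitter so keys created together
    // do not all expire in the same instant (stampede mitigation).
    public static Duration withJitter(Duration base, Duration maxJitter) {
        long bound = Math.max(1, maxJitter.toMillis());
        return base.plusMillis(ThreadLocalRandom.current().nextLong(bound));
    }
}
```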
---
Duplicate comments:
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java`:
- Around line 102-115: The partial-recovery path in ProductQueryService (around
extractMissedIds, loadAndCacheDetails, mergeInOrder) can leave stale IDs
filtered out later while totalElements (idList.totalElements()) remains
outdated; change the logic so that if fromDb.size() < missedIds.size() you
invalidate or refresh the ID-list cache and re-run the id fetch (or recompute
both content and totalElements from the refreshed ID list) instead of silently
filtering nulls, and ensure mergeInOrder is only used after a successful full
recovery; also add a regression test that simulates “ID-list cache hit but some
detail rows deleted” to assert that ProductPageOutDto.content.size() and
totalElements are consistent after the request.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java`:
- Around line 177-186: The PER async refresh currently fans out on hot keys and
may cache nulls; change the async refresh logic in the block using
shouldEarlyRefresh, CompletableFuture.runAsync, loader, put and perExecutor to
deduplicate refreshes per key (ensure only one in-flight refresh per key, e.g.,
track in-flight futures) and when the loader returns apply: if fresh != null
then put(key, fresh, ttl) else evict(key) or write a sentinel rather than
storing null; also add regression tests that send 100 concurrent requests for
the same key asserting loader is invoked only once and that a deleted/absent
product never leaves a "null" value in cache (test both put/evict behavior).
In `@round5-docs/05-to-be-cache-visualization.html`:
- Around line 307-389: The documentation currently implies LocalCacheLock
provides a global single-reader guarantee; update the Cache-Aside diagram and
the "스탬피드 보호 3계층" table to explicitly separate LocalCacheLock (per-instance) and
RedisCacheLock (distributed), and add branching for lock acquisition outcomes
(lock_acquired → DB load + Redis SET; lock_failed → wait/retry with backoff OR
return stale), marking where PER and TTL Jitter apply in each branch; also
annotate the diagram/table to show which techniques are per-node vs global and
add a note to validate the doc against the actual CacheLock implementation
(CacheLock, LocalCacheLock, RedisCacheLock) and benchmark/topology tests to
confirm behavior under failure and high concurrency.
- Line 7: The HTML currently loads Chart.js solely from the external CDN via the
script tag referencing "chart.umd.min.js", which risks rendering failure if the
CDN is blocked; change this to prefer a local self-hosted bundle (serve a local
copy of chart.umd.min.js), keep the CDN URL as a fallback, and add SRI integrity
and crossorigin attributes for the CDN script; also attach a load/error handler
(or small inline script after the tag) that falls back to the local file when
the CDN fails and displays a user-facing message if both fail; verify by testing
in offline/browser CDN-blocked scenarios to ensure charts or a clear alternative
message appear.
- Around line 399-406: The summary KPIs are hardcoded and can drift from the
chart data; change the static numbers in the HTML summary block (the "1000만건 목록
API" and related lines) to be computed at render time from the chart data
constant hitMissAll (e.g., compute min/max and averages from hitMissAll and use
those values in the rendered text), and add a unit test/assertion that validates
the computed summary equals the chart data (for example compare the rendered
"1000만건 목록 API" min/max to Math.min/Math.max over hitMissAll and fail the build
if they differ) so any future data changes will fail CI rather than produce
inconsistent reports.
---
Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/dto/IdListCacheEntry.java`:
- Around line 13-14: IdListCacheEntry currently stores the provided List<Long>
as-is, allowing external mutation to corrupt cache; implement a canonical
constructor for the record IdListCacheEntry that validates ids is non-null
(Objects.requireNonNull), performs a defensive copy via List.copyOf(ids) (or new
ArrayList<>(ids) wrapped as unmodifiable), and optionally validate totalElements
(e.g., non-negative), so the record stores an immutable list and cannot be
modified by callers or tests.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java`:
- Around line 60-61: The fixed-size ExecutorService perExecutor declared in
ProductCacheManager should be tied to the component lifecycle to avoid leaked
threads: replace or manage perExecutor so Spring controls it (inject a
TaskExecutor/ThreadPoolTaskExecutor) or add lifecycle shutdown handling;
specifically, update the ProductCacheManager class to either accept an injected
Executor/TaskExecutor instead of using Executors.newFixedThreadPool(3) for
perExecutor, or add a `@PreDestroy` method that calls perExecutor.shutdown() and
awaits termination (with a timeout) to ensure tasks finish on context close and
tests don't hang.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java`:
- Around line 57-79: Add a unit test in ProductQueryFacadeTest that verifies
exception propagation when productQueryService.getOrLoadProductDetail(...)
throws a CoreException; stub productQueryService.getOrLoadProductDetail(999L) to
willThrow(new CoreException(ErrorType.PRODUCT_NOT_FOUND)) and assert that
calling productQueryFacade.getProduct(999L) throws CoreException with errorType
== ErrorType.PRODUCT_NOT_FOUND (use assertThatThrownBy or equivalent and
reference productQueryService.getOrLoadProductDetail,
productQueryFacade.getProduct, CoreException, and ErrorType.PRODUCT_NOT_FOUND).
- Around line 114-131: Add a negative test to ProductQueryFacadeTest that
verifies getAdminProduct(Long) propagates the service exception: mock
productQueryService.getAdminProductDetail(1L) to throw the appropriate exception
(e.g., ProductNotFoundException or NoSuchElementException), call
productQueryFacade.getAdminProduct(1L) and assert that the same exception is
thrown, and verify productQueryService.getAdminProductDetail(1L) was invoked;
place this alongside the existing getAdminProductSuccess test to cover the error
path.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java`:
- Around line 490-516: Add boundary tests to verify the Querydsl tie-breaker (id
desc) when sorting by PRICE_ASC or LIKE_COUNT so ordering is stable: extend or
add tests alongside searchProductIdsSuccess to create multiple ProductEntity
instances with identical price (and a separate case with identical likeCount),
ensure their ids differ, call productQueryPort.searchProductIds with
ProductSortType.PRICE_ASC (and a LIKE_COUNT sort test), and assert that within
the same page the higher id appears first and that pagination retains stable
order across pages (use PageCriteria to split results). Specifically target the
searchProductIdsSuccess scenario and assert ordering of result.ids() and
consistency on subsequent pages to catch regressions in the tie-breaker logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7dddd906-d089-4c67-a38f-2966ec031ea6
⛔ Files ignored due to path filters (6)
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/port/out/query/ProductQueryPort.java is excluded by !**/out/** and included by **
docs/todo/cache-event-driven-refresh.md is excluded by !**/*.md and included by **
round5-docs/05-to-be-cache-measurement.md is excluded by !**/*.md and included by **
round5-docs/06-2layer-cache-implementation-design.md is excluded by !**/*.md and included by **
round5-docs/07-cache-eviction-analysis.md is excluded by !**/*.md and included by **
round5-docs/08-cross-domain-index-and-cache-analysis.md is excluded by !**/*.md and included by **
📒 Files selected for processing (22)
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheConstants.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/dto/IdListCacheEntry.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/dto/ProductCacheDto.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/CacheLock.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/LocalCacheLock.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/RedisCacheLock.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImpl.java
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductCommandFacadeTest.java
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/lock/LocalCacheLockTest.java
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java
apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java
round5-docs/05-to-be-cache-visualization.html
🚧 Files skipped from review as they are similar to previous changes (2)
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductCommandFacadeTest.java
public void refreshProductDetailCache(Long productId) {
    productCacheManager.refreshProductDetail(productId, () -> productQueryPort.findProductCacheDtoById(productId));
}

// 12. Delete the product detail cache (called from the Facade on product deletion)
public void deleteProductDetailCache(Long productId) {
    productCacheManager.deleteProductDetail(productId);
}
With no exception handling on cache operations, a Redis outage becomes a business-logic failure
refreshProductDetailCache() and deleteProductDetailCache() propagate exceptions as-is when a Redis call fails. Because these methods are invoked inside the Facade's @Transactional, a Redis outage rolls back the entire product create/update/delete.
Operational concerns:
- During a brief Redis network blip, all product CRUD becomes unavailable
- The cache is a secondary store, so its failure must not block business operations
Suggested fix:
public void refreshProductDetailCache(Long productId) {
try {
productCacheManager.refreshProductDetail(productId,
() -> productQueryPort.findProductCacheDtoById(productId));
} catch (Exception e) {
        log.warn("[Cache] failed to refresh product detail cache. productId={}", productId, e);
        // recommend adding monitoring (e.g., a Micrometer counter)
    }
}

Additional tests:
- Add an integration test verifying that product creation still succeeds when the Redis connection fails

As per coding guidelines, `**/*Service*.java`: for external calls, check whether timeouts/retries/circuit breakers are considered, and propose fallback flows on failure.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java`
around lines 203 - 211, refreshProductDetailCache and deleteProductDetailCache
currently let Redis exceptions bubble up and can cause transaction rollbacks;
wrap the calls to productCacheManager.refreshProductDetail(productId, ...) and
productCacheManager.deleteProductDetail(productId) in try/catch that catches
Exception, logs a warning including productId and the exception (e.g.,
log.warn("[Cache] ... productId={}", productId, e)), and suppresses the
exception so cache failures don't break the surrounding `@Transactional` flow;
also consider adding monitoring (a Micrometer counter) in the catch and add an
integration test to verify product create/update/delete succeed when Redis is
unavailable.
// 13. ID list cache write-through — all sorts (called from the Facade)
public void refreshIdListCacheForAllSorts(Long brandId) {
    for (ProductSortType sort : ProductSortType.values()) {
        refreshIdListCacheForSort(brandId, sort);
    }
}

// 14. ID list cache write-through — a specific sort (called from the Facade)
public void refreshIdListCacheForSort(Long brandId, ProductSortType sortType) {
    for (int page = 0; page < MAX_CACHEABLE_PAGE; page++) {
        // refresh brandId-scoped entries
        refreshSingleIdList(brandId, sortType, page);
        // refresh "all" entries
        refreshSingleIdList(null, sortType, page);
    }
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# MAX_CACHEABLE_PAGE와 ProductSortType 값 확인
echo "=== MAX_CACHEABLE_PAGE 상수 확인 ==="
rg -n "MAX_CACHEABLE_PAGE" --type java -A 1 -B 1
echo ""
echo "=== ProductSortType enum 값 확인 ==="
ast-grep --pattern 'enum ProductSortType {
$$$
}'Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 3454
🏁 Script executed:
#!/bin/bash
# ProductSortType enum 정의 확인
echo "=== ProductSortType enum 정의 ==="
rg -n "enum ProductSortType" --type java -A 20
echo ""
echo "=== ProductSortType 파일 위치 ==="
fd "ProductSortType" --type fRepository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 827
🏁 Script executed:
#!/bin/bash
# refreshSingleIdList 메서드 구현 확인
echo "=== refreshSingleIdList 메서드 구현 ==="
rg -n "refreshSingleIdList" --type java -A 15 -B 2 | head -60
echo ""
echo "=== ProductCommandService 클래스 구조 및 호출 관계 ==="
rg -n "void refresh|refreshIdListCache" --type java -B 1 -A 3 apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.javaRepository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 4515
🏁 Script executed:
#!/bin/bash
# refreshIdListCacheForAllSorts와 refreshIdListCacheForSort 호출 지점 확인
echo "=== refreshIdListCacheForAllSorts/refreshIdListCacheForSort 호출 ==="
rg -n "refreshIdListCacheForAllSorts|refreshIdListCacheForSort" --type java -B 3 -A 1
echo ""
echo "=== ProductFacade 클래스에서 호출 패턴 확인 ==="
rg -n "@Transactional|@Async" apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java -B 1 -A 3 | head -50Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 10529
🏁 Script executed:
#!/bin/bash
# ProductCommandFacade 트랜잭션 설정 확인
echo "=== ProductCommandFacade 클래스 및 메서드 확인 ==="
rg -n "public void create|public void update|public void delete|@Transactional" apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java -B 2 -A 8 | head -80Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1658
Synchronous ID-list cache refresh piles up DB queries inside the transaction
refreshIdListCacheForAllSorts() sequentially executes ProductSortType.values().length × MAX_CACHEABLE_PAGE × 2 (brandId + null) DB queries: ProductSortType (LATEST, PRICE_ASC, LIKES_DESC) × cached pages (0~1) × 2 = 12 queries in total. Because this method is called synchronously inside the @Transactional methods createProduct(), updateProduct(), and deleteProduct(), it makes the response times of the product create/update/delete APIs unpredictably long.
Operational concerns:
- DB or Redis latency during the cache refresh delays the API response (timeout risk)
- Longer transaction durations tighten concurrency control and increase DB lock contention
- Risk of DB connection-pool exhaustion in bulk product-creation scenarios
Suggested improvements:
- Decouple the cache refresh from the API response by making it asynchronous (@Async or event publication)
- Switch to a cache-invalidation (evict) strategy and load on miss at read time
- At minimum, for createProduct, refresh only the first page (page=0) synchronously and consider deferred refresh for the rest
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java`
around lines 214 - 230, refreshIdListCacheForAllSorts() and
refreshIdListCacheForSort() synchronously run many DB queries via
refreshSingleIdList() inside transactional create/update/delete flows; change
this to decouple cache updates by making the refresh asynchronous or
event-driven: replace direct calls to
refreshIdListCacheForAllSorts/refreshIdListCacheForSort from transactional
methods with either (a) an `@Async-invoked` wrapper or (b) publish a
CacheRefreshEvent that an application listener handles and calls
refreshIdListCacheForSort/refreshSingleIdList off the transaction; alternatively
implement an evict-only path (call a new
evictIdListCacheForSort/evictSingleIdList) from the transaction and let the
async listener lazily rebuild pages, and for createProduct consider keeping only
page=0 synchronous by invoking refreshSingleIdList(brandId, sortType, 0) and
delegating remaining pages to the async/event handler.
// 캐시 미스 → 락 획득 후 DB 조회 (1회만)
return cacheLock.executeWithLock(key, () -> {

    // double-check (대기 중 다른 스레드가 캐시 저장했을 수 있음)
    Optional<T> doubleCheck = get(key, type);
    if (doubleCheck.isPresent()) {
        return doubleCheck.get();
    }

    // DB 조회 + 캐시 저장 (null이면 캐시에 저장하지 않음)
    T value = loader.get();
    if (value != null) {
        put(key, value, ttl);
    }
락 내부 double-check는 replica가 아니라 master를 봐야 한다.
Line 155의 double-check도 readTemplate 기반 get()을 타기 때문에, 주석대로 replica-preferred 구성이면 첫 요청이 master에 값을 써도 대기 요청들은 replica miss를 보고 다시 loader를 호출하게 된다. 그러면 stampede 방어와 write-through의 즉시성이 둘 다 깨진다. 락 안의 재확인만큼은 master-bound 조회로 바꾸고, replica는 stale miss를 반환하지만 master에는 값이 이미 있는 상황을 모킹해 동시 요청에서도 loader가 1회만 호출되는 테스트를 추가하는 편이 안전하다.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java`
around lines 151 - 164, The double-check inside the lock uses get() (which goes
through readTemplate and may prefer replicas), so change that second-check to
perform a master-bound read (bypass replica-preferred readTemplate) so the lock
sees the authoritative value; specifically, inside
cacheLock.executeWithLock(...) replace the Optional<T> doubleCheck = get(key,
type) call with a master/primary read variant (use the master-bound readTemplate
or a getMaster(key,type) helper) so it returns the master value even when
replicas are preferred, then keep the existing put(...) and loader.get() flow.
Also add a concurrency test that stubs replicas to return stale/miss and the
master to return the cached value to assert loader.get() is invoked exactly once
under simultaneous requests.
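락 내부 double-check가 권위 있는 저장소(master)를 바라볼 때 loader가 1회만 호출된다는 계약 자체는, Redis 없이 ConcurrentHashMap을 master 저장소로 가정한 아래 스케치로도 확인할 수 있다. SingleFlightCache는 설명용 가설적 이름이다.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// 가설적 스케치: master 저장소를 ConcurrentHashMap으로 모델링한 single-flight 캐시.
// 락 내부 재확인(double-check)이 master를 보기 때문에 loader가 중복 실행되지 않는다.
class SingleFlightCache {
    private final ConcurrentHashMap<String, Object> master = new ConcurrentHashMap<>();
    private final Object lock = new Object();

    @SuppressWarnings("unchecked")
    <T> T getOrLoad(String key, Supplier<T> loader) {
        Object hit = master.get(key);              // 1차 조회 (replica miss에 해당하는 경로)
        if (hit != null) return (T) hit;
        synchronized (lock) {                      // 락 획득
            Object doubleCheck = master.get(key);  // 락 내부 재확인은 master 기준
            if (doubleCheck != null) return (T) doubleCheck;
            T value = loader.get();                // 캐시 미스 시 1회만 로드
            if (value != null) master.put(key, value); // null은 캐시하지 않음
            return value;
        }
    }
}
```

재확인이 replica를 보면 위 불변식("loader 1회")이 복제 지연만큼 깨진다는 것이 이 리뷰의 요지다.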
<span class="highlight">Client</span> --> <span class="highlight">Controller</span> --> <span class="highlight">Facade</span> --> <span class="accent">Redis GET</span> --> <span class="info">[HIT]</span> --> JSON Deserialize --> <span class="highlight">Response (~3.5-5ms)</span>

<span class="dim">// Cache Miss 경로 (5%- 트래픽)</span>
<span class="highlight">Client</span> --> <span class="highlight">Controller</span> --> <span class="highlight">Facade</span> --> <span class="accent">Redis GET</span> --> <span class="warn">[MISS]</span>
--> <span class="accent">LocalCacheLock</span> <span class="dim">(double-check: 다른 스레드가 이미 로드했는지 확인)</span>
--> <span class="highlight">Service</span> --> <span class="highlight">Repository</span> --> <span class="info">DB (Index Scan)</span>
--> <span class="accent">Redis SET</span> <span class="dim">(TTL + Jitter)</span>
--> <span class="highlight">Response (~5-11ms)</span>

<span class="dim">// Redis 장애 경로 (fallback)</span>
<span class="highlight">Client</span> --> <span class="highlight">Controller</span> --> <span class="highlight">Facade</span> --> <span class="accent">Redis GET</span> --> <span class="warn">[ERROR]</span>
--> <span class="dim">try-catch: log & continue</span>
--> <span class="highlight">Service</span> --> <span class="highlight">Repository</span> --> <span class="info">DB (Index Scan)</span>
--> <span class="highlight">Response (~5-11ms)</span> <span class="dim">(가용성 100% 유지)</span>

<span class="dim">// PER (Probabilistic Early Refresh) - TTL 잔여 20% 구간</span>
<span class="highlight">Client</span> --> <span class="highlight">Facade</span> --> <span class="accent">Redis GET</span> --> <span class="info">[HIT, TTL 잔여 < 20%]</span>
--> <span class="warn">확률적 판단: 갱신 필요?</span>
--> <span class="info">[YES]</span> 비동기 DB 조회 + Redis SET <span class="dim">(기존 캐시 유지, stale 허용)</span>
--> <span class="info">[NO]</span> 기존 캐시 반환
--> <span class="highlight">Response (~3.5-5ms)</span> <span class="dim">(사용자는 항상 즉시 응답)</span>

<span class="dim">// 캐시 무효화</span>
<span class="highlight">상품 CUD</span> --> <span class="accent">Active Invalidation</span>
--> 상세: <span class="accent">DEL product:{id}</span>
--> 목록: <span class="accent">SCAN products:list:* + DEL</span> <span class="dim">(패턴 기반 일괄 삭제)</span></div>
아키텍처 다이어그램의 이스케이프되지 않은 >는 린트 오류를 지속시킨다.
Line 314-339의 화살표 표기(-->)가 HTMLHint spec-char-escape 오류를 유발하고 있어 문서 품질 게이트를 깨뜨릴 수 있다. >를 &gt;로 이스케이프해 정적 분석 오류를 제거해야 한다. 추가 테스트로 동일 파일에 대해 HTMLHint를 재실행해 오류 0건을 확인해야 한다.
🧰 Tools
🪛 HTMLHint (1.9.2)
[error] spec-char-escape: Special characters must be escaped : [ > ] — 총 35건
(314행 ×6, 317행 ×4, 318행 ×1, 319행 ×3, 320행 ×1, 321행 ×1, 324행 ×4, 325행 ×1, 326행 ×3, 327행 ×1, 330행 ×3, 331행 ×1, 332행 ×1, 333행 ×1, 334행 ×1, 337행 ×1, 338행 ×1, 339행 ×1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@round5-docs/05-to-be-cache-visualization.html` around lines 314 - 339, The
HTML arrow sequences like "-->" (e.g., "Client --> Controller", "Redis GET -->
[HIT]", "상품 CUD --> Active Invalidation") contain raw ">" characters that
trigger HTMLHint spec-char-escape; replace each ">" in those arrow tokens with
the escaped entity "&gt;" (so "--&gt;") across the block (lines containing
"Client --> Controller", "Redis GET -->", "PER (Probabilistic Early Refresh) -
TTL", and the cache invalidation lines) and then re-run HTMLHint on the file to
confirm zero errors.
4178526 to d23cfd0 Compare
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java (1)
39-56: ⚠️ Potential issue | 🟠 Major

캐시 갱신 로직이 트랜잭션 내부에 있어 롤백 위험이 존재한다.

syncReadModel, refreshProductDetailCache, refreshIdListCacheForAllSorts 호출이 @Transactional 경계 안에 있다. Redis 캐시 갱신 실패 시 DB 저장까지 롤백되어 데이터 정합성 문제가 발생할 수 있다. 캐시는 결과적 일관성(eventual consistency)으로 복구 가능하지만, DB 쓰기 실패는 비즈니스 손실이다.

수정안: 캐시 갱신 로직을 트랜잭션 커밋 후 수행하도록 분리하거나, @TransactionalEventListener(phase = AFTER_COMMIT)를 활용한다.

추가 테스트: Redis 장애 상황에서 상품 생성이 정상 처리되는지 검증하는 통합 테스트가 필요하다.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java` around lines 39 - 56, The createProduct method in ProductCommandFacade currently performs cache/read-model updates (productCommandService.syncReadModel, refreshProductDetailCache, refreshIdListCacheForAllSorts) inside the `@Transactional` boundary, risking DB rollback if Redis fails; extract these post-commit side-effects so DB write is not rolled back on cache errors — either publish a domain event and handle it with a listener annotated `@TransactionalEventListener`(phase = AFTER_COMMIT) that calls syncReadModel/refreshProductDetailCache/refreshIdListCacheForAllSorts, or move those calls to a separate non-transactional method invoked after the transaction completes; also add an integration test simulating Redis failure to verify product creation still succeeds.
♻️ Duplicate comments (9)
apps/commerce-api/src/main/java/com/loopers/coupon/coupontemplate/infrastructure/entity/CouponTemplateEntity.java (1)
24-27: ⚠️ Potential issue | 🟠 Major

운영 환경에서 @Index만으로 인덱스 생성이 보장되지 않는다

Line 24-27의 애너테이션 인덱스는 스키마 자동 생성이 꺼진 환경에서는 적용되지 않아, 배포 후에도 WHERE deleted_at IS NULL 조회가 풀스캔으로 남을 수 있다. 운영 관점에서 성능 저하와 지연 급증 위험이 있다. 수정안은 마이그레이션(Flyway/Liquibase) 또는 DBA 실행용 명시적 DDL로 idx_coupon_template_deleted를 생성하고 배포 파이프라인에 포함하는 것이다. 추가 테스트는 배포 환경에서 information_schema.statistics로 인덱스 존재를 확인하고, EXPLAIN으로 대상 조회가 해당 인덱스를 실제 사용하는지 검증하는 것이다.

#!/bin/bash
set -euo pipefail

echo "== 1) JPA DDL 설정 확인 =="
fd 'application\.(yml|yaml|properties)$|jpa\.(yml|yaml|properties)$|bootstrap\.(yml|yaml|properties)$' . \
  -x sh -c 'echo "--- $1"; rg -n "ddl-auto|hbm2ddl|spring\.jpa|hibernate" "$1" || true' sh {}
echo

echo "== 2) 인덱스 마이그레이션 존재 여부 확인 =="
fd '.*\.(sql|yml|yaml|xml|json)$' . \
  -x sh -c 'rg -n "idx_coupon_template_deleted|create index|alter table.*coupon_template" "$1" || true' sh {}
echo

echo "== 3) 엔티티 선언 확인 =="
rg -n "@Table|@Index|coupon_template|deleted_at" --type java -C2
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/coupon/coupontemplate/infrastructure/entity/CouponTemplateEntity.java` around lines 24 - 27, The `@Index` on CouponTemplateEntity (idx_coupon_template_deleted for column deleted_at) is not sufficient in environments with JPA DDL disabled; add an explicit DB migration that creates the index on coupon_template(deleted_at) (e.g., a Flyway SQL migration or Liquibase changeSet named to create idx_coupon_template_deleted) and include it in the deployment pipeline, update any migration manifest to ensure it runs before app traffic, and add a post-deploy verification step that checks information_schema.statistics for idx_coupon_template_deleted and uses EXPLAIN on the common WHERE deleted_at IS NULL query to confirm the index is used.apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/RedisCacheLock.java (2)
56-59: ⚠️ Potential issue | 🟠 Major

락 미획득 시 loader 재실행 - 스탬피드 방어 의도와 불일치한다.

- 운영 관점: 락 대기 후 캐시 재확인 없이 바로 loader.get()을 실행하면, 락 보유 스레드가 이미 캐시를 채웠음에도 중복 DB 조회가 발생한다. 대량 트래픽에서 스탬피드 방어 효과가 제한적이다.
- 수정안: RedisCacheLock은 락만 담당하고, 캐시 재확인은 상위 레이어(ProductCacheManager)에서 수행하도록 계약을 명확히 한다. 또는 락 미획득 시 null을 반환하여 상위에서 캐시 재확인을 유도한다.
- 테스트: 락 보유 스레드가 캐시 저장 완료 후 대기 스레드가 캐시 히트하는지 통합 테스트로 검증.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/RedisCacheLock.java` around lines 56 - 59, The RedisCacheLock currently waits for lock release then calls loader.get(), which defeats stampede protection; change RedisCacheLock so that when a lock is not acquired (the branch that calls waitForLockRelease(lockKey) and loader.get()), it instead waits for the lock to be released and returns null (i.e., do not invoke loader.get()), making the contract explicit that cache revalidation is the caller's responsibility (ProductCacheManager should re-check cache on null). Update the RedisCacheLock method signature/JavaDoc if needed to indicate it may return null, and ensure callers (e.g., ProductCacheManager) handle null by re-checking cache and only calling loader.get() when appropriate; add/update tests to assert that a waiting thread sees the cache hit after the lock holder populates it.
43-70: ⚠️ Potential issue | 🟠 Major

분산 락 소유권 검증 누락 - 다른 스레드의 락을 삭제할 수 있다.

- 운영 관점: TTL(5초) 만료 후 다른 스레드가 획득한 락을 finally 블록에서 삭제할 수 있다. 이는 동시성 제어 실패로 이어져 캐시 스탬피드가 발생할 수 있다.
- 수정안: 락 값에 고유 식별자(UUID + threadId)를 저장하고, 삭제 시 본인 락인지 확인 후 삭제한다. Redis Lua 스크립트로 원자적 비교-삭제를 권장한다.
- 테스트: TTL 만료 후 다른 스레드가 락을 획득한 상황에서 원래 스레드가 삭제를 시도해도 실패하는지 검증하는 테스트 추가.
🔒 락 소유권 검증 추가
+private static final String LOCK_VALUE_PREFIX = UUID.randomUUID().toString();
+
 @Override
 public <T> T executeWithLock(String key, Supplier<T> loader) {
     String lockKey = key + ":lock";
+    String lockValue = LOCK_VALUE_PREFIX + ":" + Thread.currentThread().getId();
     Boolean acquired = redisTemplate.opsForValue()
-        .setIfAbsent(lockKey, "1", LOCK_TTL);
+        .setIfAbsent(lockKey, lockValue, LOCK_TTL);
     try {
         // ... existing code ...
     } finally {
         if (Boolean.TRUE.equals(acquired)) {
-            redisTemplate.delete(lockKey);
+            // Lua 스크립트로 원자적 비교-삭제
+            String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
+            redisTemplate.execute(
+                new DefaultRedisScript<>(script, Long.class),
+                List.of(lockKey),
+                lockValue
+            );
         }
     }
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/RedisCacheLock.java` around lines 43 - 70, executeWithLock currently sets a simple "1" value and may delete a lock it no longer owns; change the lock value to a unique owner token (e.g., UUID + Thread.currentThread().getId()) when calling redisTemplate.opsForValue().setIfAbsent(lockKey, ownerToken, LOCK_TTL), pass that ownerToken through to waitForLockRelease if needed, and in the finally block delete the key only when the stored value equals ownerToken by executing an atomic Redis Lua script (compare-and-del) via redisTemplate.execute; update any helper methods (waitForLockRelease) to accept the owner token as needed and add a unit/integration test that simulates TTL expiry and another thread acquiring the lock to assert the original thread’s cleanup does not remove the new owner’s lock.apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductApiPerformanceTest.java (2)
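Lua 비교-삭제가 보장하려는 "본인 토큰으로만 해제" 계약은, Redis 대신 ConcurrentHashMap의 원자적 remove(key, value)로 아래처럼 모델링해 볼 수 있다. OwnedLock은 설명용 가설적 이름이며, 실제 Redis에서는 GET과 DEL이 원자적이지 않아 Lua 스크립트가 필요하다.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// 가설적 스케치: 소유권 토큰 기반 락 해제를 ConcurrentHashMap으로 모델링.
// remove(key, value)는 값이 일치할 때만 삭제하는 원자적 비교-삭제다.
class OwnedLock {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    // 획득 성공 시 토큰 반환, 이미 점유 중이면 null
    String tryAcquire(String lockKey) {
        String token = UUID.randomUUID() + ":" + Thread.currentThread().getId();
        return store.putIfAbsent(lockKey, token) == null ? token : null;
    }

    // 본인 토큰일 때만 삭제 성공 → 다른 스레드의 락을 지우지 않는다
    boolean release(String lockKey, String token) {
        return store.remove(lockKey, token);
    }
}
```

TTL 만료 후 다른 스레드가 락을 재획득한 시나리오가 곧 "틀린 토큰으로 해제 시도"에 해당하고, 이때 해제가 실패해야 안전하다.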
250-260: ⚠️ Potential issue | 🟠 Major

캐시 미스 측정에서 warmup이 캐시를 채워 실제 MISS가 아닌 HIT를 측정한다

measureSingleApi 내부에서 3회 warmup이 실행되어 첫 요청에서 캐시가 채워진다. 이후 5회 측정은 모두 캐시 히트가 된다.
운영 의사결정에 사용할 MISS 지표가 왜곡되므로, 다음 중 하나를 적용해야 한다:
- MISS 전용 측정 메서드를 별도로 만들어 warmup 없이 측정
- 측정 직전 Redis 키 수가 0인지 검증하는 assertion 추가
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductApiPerformanceTest.java` around lines 250 - 260, The current "cache miss" measurement is polluted because measureSingleApi runs its 3 warmup calls and thus fills Redis before the measured calls; update the test to either (recommended) add a new method (e.g., measureSingleApiMiss or measureApiWithoutWarmup) that calls the API measurements without executing warmup, and invoke that from the MISS loop, or (alternative) assert immediately before measurement that redisCleanUp.truncateAll() actually left zero keys (e.g., check redisCleanUp.keyCount() == 0) and fail the test if not; modify the calls in the MISS sections to use the new miss-only method or include the zero-key assertion so the measured 5 calls reflect true MISS behavior for measureSingleApi, measureApiWithoutWarmup, and redisCleanUp.truncateAll.
343-384: ⚠️ Potential issue | 🟠 Major

비정상 HTTP 응답이 성공 샘플로 집계된다

measureBurst 및 measureSustainedLoad에서 mockMvc.perform(get(url))의 예외만 에러로 처리하고, 4xx/5xx 응답은 정상 latency로 포함된다.
운영 환경에서 장애 상황의 성능 수치가 왜곡될 수 있으므로, 다음을 권장한다:
- andExpect(status().is2xxSuccessful()) 또는 응답 상태 코드 검사
- 2xx가 아니면 errors 카운터에 반영

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductApiPerformanceTest.java` around lines 343 - 384, measureBurst (and likewise measureSustainedLoad) currently treats only thrown exceptions as errors, so non-2xx HTTP responses (4xx/5xx) are counted as successful samples; change the request handling to inspect the HTTP status and treat non-2xx as errors by incrementing the AtomicInteger errors and marking latencies[idx] = -1. Locate the mockMvc.perform(get(url)) call inside measureBurst/measureSustainedLoad and either chain andExpect(status().is2xxSuccessful()) or capture the MvcResult and call getResponse().getStatus() to verify 2xx; on non-2xx, increment errors and set the latency element for the current idx to -1 instead of recording the elapsed time. Ensure exception handling still increments errors and sets latencies[idx] = -1 so both exceptions and non-2xx responses are excluded from the valid[] stats.apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java (1)
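오류 샘플을 -1로 표시해 두고 통계에서 제외하는 방식은 아래처럼 스케치할 수 있다. latencies 배열에 -1이 섞여 있다는 전제는 위 리뷰 제안을 따른 가정이며, LatencyStats는 설명용 가설적 이름이다.

```java
import java.util.Arrays;

// 가설적 스케치: -1(오류 샘플)을 제외한 유효 샘플만으로 p95 지연을 계산한다.
class LatencyStats {
    static long p95(long[] latencies) {
        long[] valid = Arrays.stream(latencies).filter(v -> v >= 0).sorted().toArray();
        if (valid.length == 0) throw new IllegalArgumentException("유효 샘플 없음");
        int idx = (int) Math.ceil(valid.length * 0.95) - 1;
        return valid[Math.max(idx, 0)];
    }
}
```

이렇게 하면 4xx/5xx 응답이나 예외가 발생한 요청이 성능 지표를 낙관적으로 왜곡하지 않는다.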
61-68: ⚠️ Potential issue | 🟠 Major

브랜드 수정 트랜잭션 내부 캐시 루프는 커밋 이후 처리로 분리해야 한다.

Line 65-68에서 대량 상품 캐시 갱신을 같은 트랜잭션에서 수행하면, 운영 시 트랜잭션 장기화와 롤백 시 캐시 선반영 불일치가 발생한다.
수정안은 afterCommit 훅 또는 @TransactionalEventListener(AFTER_COMMIT)로 캐시 갱신 루프를 이관하는 것이다.
추가 테스트로 브랜드 수정 후 예외 발생 시 캐시 미갱신, 정상 커밋 시 전체 상품 갱신 수행을 검증해야 한다.

수정 예시
+import org.springframework.transaction.support.TransactionSynchronization;
+import org.springframework.transaction.support.TransactionSynchronizationManager;
 @@
-        List<Long> productIds = productQueryService.findActiveIdsByBrandId(id);
-        for (Long productId : productIds) {
-            productCommandService.refreshProductDetailCache(productId);
-        }
+        TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
+            @Override
+            public void afterCommit() {
+                List<Long> productIds = productQueryService.findActiveIdsByBrandId(id);
+                productIds.forEach(productCommandService::refreshProductDetailCache);
+            }
+        });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java` around lines 61 - 68, The cache refresh loop in BrandCommandFacade (calls to productQueryService.findActiveIdsByBrandId(...) and productCommandService.refreshProductDetailCache(...)) must be moved out of the brand update transaction and executed only after successful commit; implement this by publishing an event from the existing brand update method (or using TransactionSynchronizationManager#registerSynchronization) and handle it with an `@TransactionalEventListener`(phase = AFTER_COMMIT) or an afterCommit callback which reads the product IDs and performs the refresh loop, ensuring no cache writes occur if the transaction rolls back and that cache updates run only post-commit; add tests that assert cache is not refreshed on exception and is refreshed after successful commit.apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java (2)
179-183: ⚠️ Potential issue | 🟠 Major

PER 비동기 갱신에 키 단위 동시성 제어와 null 가드가 필요하다.

Line 179-183은 만료 임박 hit마다 비동기 갱신이 동시에 실행될 수 있고, fresh == null일 때도 put이 수행되어 "null" 캐시 저장 위험이 있다. 운영에서는 핫키에서 중복 DB 로드와 지속적 miss 루프를 유발한다.
수정안은 PER 경로에서도 동일 키 락을 획득한 뒤 갱신하고, fresh != null일 때만 저장하는 것이다.
추가 테스트로 동일 키 100 동시 요청에서 loader 1회 호출 및 null 미저장을 검증해야 한다.

수정 예시
 if (shouldEarlyRefresh(key, ttl)) {
     CompletableFuture.runAsync(() -> {
         try {
-            T fresh = loader.get();
-            put(key, fresh, ttl);
+            cacheLock.executeWithLock(key, () -> {
+                T fresh = loader.get();
+                if (fresh != null) {
+                    put(key, fresh, ttl);
+                }
+                return null;
+            });
         } catch (Exception e) {
             log.warn("PER 비동기 갱신 실패. key={}", key, e);
         }
     }, perExecutor);
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java` around lines 179 - 183, The async refresh lambda in CompletableFuture.runAsync currently calls loader.get() and put(key, fresh, ttl) without per-key concurrency control or null guarding; change the async path in ProductCacheManager to acquire the same per-key lock used in the PER path (e.g., the key-specific lock used elsewhere in this class) before calling loader.get(), ensure you only call put(key, fresh, ttl) when fresh != null, and always release the lock in a finally block; add a unit test that issues 100 concurrent requests for the same key and asserts loader.get() was invoked exactly once and that the cache does not contain a stored null value.
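참고로 PER의 조기 갱신 판정 자체는 X-Fetch 방식(-delta·beta·ln(U)가 남은 TTL 이상이면 갱신)으로 흔히 구현된다. 아래는 그 판정만 떼어낸 가설적 스케치이며, EarlyRefreshDecider라는 이름과 beta·delta 값은 예시다.

```java
import java.util.concurrent.ThreadLocalRandom;

// 가설적 스케치: PER(X-Fetch) 조기 갱신 판정.
// deltaMillis: 값 재계산에 드는 추정 시간, beta: 공격성 계수(보통 1.0 근처)
class EarlyRefreshDecider {
    static boolean shouldEarlyRefresh(long ttlRemainingMillis, long deltaMillis, double beta) {
        // (0, 1] 구간의 난수를 사용해 -ln(u)를 유한하게 유지한다
        double u = 1.0 - ThreadLocalRandom.current().nextDouble();
        double xfetch = deltaMillis * beta * -Math.log(u);
        return xfetch >= ttlRemainingMillis; // 만료가 가까울수록 갱신 확률이 커진다
    }
}
```

TTL이 충분히 남은 구간에서는 사실상 갱신이 일어나지 않고, 만료가 임박할수록 확률이 1에 수렴하므로 동시 만료에 의한 스탬피드를 분산시킨다.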
155-156: ⚠️ Potential issue | 🟠 Major

락 내부 double-check는 master 기준 조회로 고정해야 한다.

Line 155와 Line 196이 replica 조회를 사용하면, 복제 지연 시 같은 키에 대해 loader가 중복 실행되어 DB 부하와 캐시 write-through 일관성이 동시에 깨진다.
수정안은 락 내부 재확인만큼은 master 바인딩 조회를 사용하도록 분리하는 것이다.
추가 테스트로 "replica miss + master hit" 동시 요청 시 loader 호출 수가 1회인지 검증해야 한다.

수정 예시
-    Optional<T> doubleCheck = get(key, type);
+    Optional<T> doubleCheck = getFromMaster(key, type);
 @@
-    Optional<T> doubleCheck = get(key, type);
+    Optional<T> doubleCheck = getFromMaster(key, type);

+private <T> Optional<T> getFromMaster(String key, Class<T> type) {
+    try {
+        String json = writeTemplate.opsForValue().get(key);
+        if (json == null) return Optional.empty();
+        return Optional.ofNullable(objectMapper.readValue(json, type));
+    } catch (Exception e) {
+        log.warn("마스터 캐시 조회 실패. key={}", key, e);
+        return Optional.empty();
+    }
+}

Also applies to: 196-197
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java` around lines 155 - 156, Inside ProductCacheManager the lock-protected "double-check" calls to get(key, type) must use a master-bound lookup to avoid replica lag causing duplicate loader executions; change the two internal re-checks (the double-checks that run under the lock) to call the master-forced variant (use the method/flag that forces master reads or add a new getMaster(key, type) and call it) instead of the normal replica-aware get(key, type), and ensure the loader invocation remains unchanged. Update both occurrences of the double-check in ProductCacheManager to use the master lookup and add a test that simulates "replica miss + master hit" concurrent requests to assert loader is called exactly once.apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java (1)
129-137: ⚠️ Potential issue | 🟠 Major

트랜잭션 내부 캐시 갱신을 커밋 이후로 분리해야 한다.

Line 136, Line 148, Line 170에서 Redis write-through 및 캐시용 DB 조회가 트랜잭션 내부에서 실행된다. 운영에서는 롤백 시 캐시 선반영 불일치와 트랜잭션 장기화가 동시에 발생한다.
수정안은 after-commit 콜백(TransactionSynchronizationManager) 또는 @TransactionalEventListener(AFTER_COMMIT)로 캐시 갱신을 이관하는 것이다.
추가 테스트로 롤백 시 Redis 미변경, 커밋 시 1회 갱신을 검증해야 한다.

수정 예시
+import org.springframework.transaction.support.TransactionSynchronization;
+import org.springframework.transaction.support.TransactionSynchronizationManager;
 @@
-    productCacheManager.refreshProductDetail(productId, () -> productQueryPort.findProductCacheDtoById(productId));
+    runAfterCommit(() ->
+        productCacheManager.refreshProductDetail(productId, () -> productQueryPort.findProductCacheDtoById(productId))
+    );
 @@
+private void runAfterCommit(Runnable task) {
+    if (!TransactionSynchronizationManager.isActualTransactionActive()) {
+        task.run();
+        return;
+    }
+    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
+        @Override
+        public void afterCommit() {
+            task.run();
+        }
+    });
+}

As per coding guidelines, **/*Service*.java: 트랜잭션 경계(@Transactional) 위치와 전파, readOnly, 롤백 조건을 점검한다.

Also applies to: 141-149, 153-171
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java` around lines 129 - 137, The cache refresh currently occurs inside the transactional method increaseLikeCount (after readModelRepository.increaseLikeCount) which can cause cache inconsistency on rollback; move the productCacheManager.refreshProductDetail(productId, () -> productQueryPort.findProductCacheDtoById(productId)) out of the `@Transactional` boundary and execute it only after commit using either TransactionSynchronizationManager.registerSynchronization(...) or a `@TransactionalEventListener`(phase = AFTER_COMMIT) event handler; keep readModelRepository.increaseLikeCount within the transaction, emit or enqueue a post-commit task (or publish an event with the productId) and perform the Redis write-through and productQueryPort lookup in that after-commit handler, and add tests to assert no cache update on rollback and exactly one update on commit.
🧹 Nitpick comments (22)
apps/commerce-api/src/main/java/com/loopers/catalog/brand/infrastructure/entity/BrandEntity.java (1)
19-22: 복합 인덱스 선두 컬럼 선택도를 운영 데이터 기준으로 재검증해야 한다

Line 20-21의 패턴처럼 deleted_at IS NULL 비중이 높으면 선두 컬럼 저선택도로 인해 트래픽 증가 시 인덱스 효율이 떨어질 수 있다. 운영 관점에서는 캐시 미스 구간에서 DB 스캔량이 급증해 지연이 커질 수 있다.
수정안은 주요 브랜드 조회 쿼리 기준으로 EXPLAIN ANALYZE를 확인한 뒤, 필요 시 인덱스를 (visible_status, deleted_at)로 재정의하거나 조회 패턴별 인덱스를 분리하는 것이다.
추가 테스트로는 통합 테스트에서 해당 조회 쿼리의 실행계획(key, rows)을 점검해 기대 인덱스 사용과 스캔 행 수 상한을 검증하는 것이 좋다.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/brand/infrastructure/entity/BrandEntity.java` around lines 19 - 22, Re-evaluate the composite index idx_brands_deleted_visible on BrandEntity: run EXPLAIN ANALYZE for your primary brand lookup queries (those filtering on deleted_at IS NULL and visible_status) to verify the chosen leading column's selectivity; if deleted_at IS NULL is high-frequency, change the index order to (visible_status, deleted_at) or create separate indexes for each access pattern, and add an integration test that executes the main query and asserts the planner uses the expected index (check EXPLAIN key) and that scanned rows (EXPLAIN rows) are within acceptable bounds to prevent full index scans.round5-docs/04-to-be-index-visualization.html (1)
7-7: 벤치마크 문서의 외부 CDN 의존성은 재현성을 저해한다.

운영 관점에서 측정 산출물이 네트워크 상태나 CDN 정책에 따라 렌더링되지 않으면 추후 회귀 분석과 감사 근거로 사용할 수 없다. Chart.js를 저장소 내 고정 자산으로 self-host 하거나, 최소한 integrity, crossorigin 속성과 로딩 실패 fallback을 추가해야 한다.

수정안:
- Chart.js를 round5-docs/assets/ 경로에 복사하여 self-host
- 또는 SRI(Subresource Integrity) 해시 추가:

<script src="https://cdn.jsdelivr.net/npm/chart.js@4.4.7/dist/chart.umd.min.js"
        integrity="sha384-..."
        crossorigin="anonymous"
        onerror="document.body.innerHTML='<p>Chart.js 로드 실패. 오프라인 환경에서는 차트가 표시되지 않습니다.</p>'">
</script>

추가 테스트: 브라우저 Offline 모드에서 HTML을 열어 차트 렌더링 또는 대체 안내 노출 여부를 확인해야 한다.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@round5-docs/04-to-be-index-visualization.html` at line 7, The external Chart.js CDN import ("https://cdn.jsdelivr.net/npm/chart.js@4.4.7/dist/chart.umd.min.js") breaks reproducibility; fix by either self-hosting the library under round5-docs/assets (copy the chart.umd.min.js into that folder and update the script tag to point to the local asset) or by adding SRI + crossorigin + a loading-failure fallback (add a correct integrity hash for Chart.js v4.4.7, set crossorigin="anonymous", and include an onerror handler that displays a user-friendly message when chart rendering cannot load) so the script tag in the HTML is resilient to network/CDN failures.apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/dto/IdListCacheEntry.java (1)
13-14: ids 필드에 방어적 복사가 누락되었다.

- 운영 관점: 캐시 계층에서 반환된 ids 리스트가 외부에서 변조되면 캐시 데이터 정합성이 깨진다. 여러 스레드가 동일 캐시 엔트리를 참조할 때 예기치 않은 ConcurrentModificationException 또는 데이터 불일치가 발생할 수 있다.
- 수정안: compact constructor에서 List.copyOf()로 방어적 복사를 수행한다.
- 테스트: 반환된 리스트 변조 시 원본이 영향받지 않는지 검증하는 테스트 추가 권장.
🛡️ 방어적 복사 추가
-public record IdListCacheEntry(List<Long> ids, long totalElements) {
-}
+public record IdListCacheEntry(List<Long> ids, long totalElements) {
+
+    public IdListCacheEntry {
+        ids = ids == null ? List.of() : List.copyOf(ids);
+    }
+
+}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/dto/IdListCacheEntry.java` around lines 13 - 14, The IdListCacheEntry record currently exposes the mutable `ids` list without defensive copying; add a compact constructor for record IdListCacheEntry that wraps the incoming `ids` with `List.copyOf(ids)` and assigns the copied list to the record component so external modification cannot affect the cached entry, and update/add tests to assert that mutating a returned list does not change the stored ids or totalElements.apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductReadModelJpaRepository.java (2)
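제안된 compact constructor가 실제로 외부 변조를 차단하는지는 아래 최소 예제로 확인해 볼 수 있다. 레코드 정의는 위 수정 diff를 그대로 옮긴 가정이다.

```java
import java.util.ArrayList;
import java.util.List;

// 리뷰 diff의 수정안을 옮긴 가정: compact constructor에서 List.copyOf로 방어적 복사.
// List.copyOf가 반환하는 리스트는 불변이므로 반환값 변조도 함께 차단된다.
record IdListCacheEntry(List<Long> ids, long totalElements) {
    IdListCacheEntry {
        ids = ids == null ? List.of() : List.copyOf(ids);
    }
}
```

원본 리스트를 이후에 수정해도 캐시 엔트리는 영향받지 않고, ids()가 돌려주는 리스트에 add를 시도하면 UnsupportedOperationException이 발생한다.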
41-49: updateStock과 softDelete가 영향 행 수를 반환하지 않아 대상 미존재를 감지할 수 없다

increaseLikeCount와 decreaseLikeCount는 int를 반환하여 영향 행 수로 대상 존재 여부를 검증한다. 그러나 updateStock과 softDelete는 void를 반환하여 존재하지 않는 productId에 대해 silent fail이 발생한다.
운영 환경에서 잘못된 productId로 호출 시 오류를 감지할 수 없으므로, 반환 타입을 int로 변경하고 호출부에서 검증하는 것을 권장한다.

수정 제안
 // 4. 재고 업데이트
 @Modifying
 @Query("UPDATE ProductReadModelEntity e SET e.stock = :stock WHERE e.id = :id")
-void updateStock(@Param("id") Long id, @Param("stock") Long stock);
+int updateStock(@Param("id") Long id, @Param("stock") Long stock);

 // 5. soft delete (deletedAt 설정)
 @Modifying
 @Query("UPDATE ProductReadModelEntity e SET e.deletedAt = :deletedAt WHERE e.id = :productId")
-void softDelete(@Param("productId") Long productId, @Param("deletedAt") ZonedDateTime deletedAt);
+int softDelete(@Param("productId") Long productId, @Param("deletedAt") ZonedDateTime deletedAt);

As per coding guidelines, **/*Repository*.java: 쿼리 조건 누락/과다 조회를 점검한다.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductReadModelJpaRepository.java` around lines 41 - 49, Change the return types of ProductReadModelJpaRepository.updateStock and .softDelete from void to int so the JPQL `@Modifying` queries return the number of affected rows; update any callers to check the returned int and throw/log an error or handle the "0 rows affected" case when the productId does not exist (match how increaseLikeCount/decreaseLikeCount are handled). Ensure method signatures remain annotated with `@Modifying` and `@Query` (no query changes), and add unit/integration tests to verify callers detect and respond to 0 affected rows.
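The caller-side check this suggestion asks for can be sketched without JPA. A minimal sketch, assuming an in-memory table as a stand-in for the real repository; the class and method names below are illustrative, not the project's actual API:

```java
import java.util.HashMap;
import java.util.Map;

public class AffectedRowsSketch {
    static final Map<Long, Long> stockTable = new HashMap<>();

    // Mirrors the suggested `int updateStock(...)`: returns the affected-row count.
    static int updateStock(Long id, Long stock) {
        if (!stockTable.containsKey(id)) return 0;
        stockTable.put(id, stock);
        return 1;
    }

    // Caller-side validation: 0 affected rows means the productId did not exist.
    static void updateStockOrThrow(Long id, Long stock) {
        if (updateStock(id, stock) == 0) {
            throw new IllegalStateException("product not found: " + id);
        }
    }

    public static void main(String[] args) {
        stockTable.put(1L, 100L);
        updateStockOrThrow(1L, 90L); // 1 row affected, succeeds
        boolean detected = false;
        try {
            updateStockOrThrow(999L, 10L); // the former silent fail is now detectable
        } catch (IllegalStateException e) {
            detected = true;
        }
        System.out.println(detected); // true
    }
}
```

The point of the pattern: the `void` version cannot distinguish "updated" from "no such row", while the `int` version pushes that decision to the caller.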
26-39: `@Modifying` queries without `clearAutomatically = true` may leave the persistence context inconsistent

If an entity is loaded and this UPDATE query then runs within the same transaction, the value cached in the persistence context can diverge from the DB value.

This can be ignored if the current usage pattern is unaffected, but more complex transactions may behave unexpectedly later.

```diff
-@Modifying
+@Modifying(clearAutomatically = true)
 @Query("UPDATE ProductReadModelEntity e SET e.brandName = :brandName WHERE e.brandId = :brandId")
 void updateBrandNameByBrandId(@Param("brandId") Long brandId, @Param("brandName") String brandName);
```

As per coding guidelines, `**/*Repository*.java`: check for possible persistence-context pollution.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductReadModelJpaRepository.java` around lines 26-39: the `@Modifying` update queries (updateBrandNameByBrandId, increaseLikeCount, decreaseLikeCount) can leave the persistence context stale after bulk updates; add clearAutomatically = true to each `@Modifying` annotation (or alternatively invoke EntityManager.clear() after the update) so the first-level cache is cleared and in-transaction reads reflect the DB changes, ensuring consistency between the persistence context and the executed JPQL updates.

apps/commerce-api/build.gradle.kts (1)
54-54: The ArchUnit version is hardcoded

Other dependencies manage their versions via `${project.properties["..."]}`, but ArchUnit alone is hardcoded to `1.3.0`. It could be missed during a version upgrade, so managing it consistently through `gradle.properties` or a version catalog is recommended.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/build.gradle.kts` at line 54: the ArchUnit dependency version is hardcoded as "1.3.0"; replace it with the shared version property used elsewhere (e.g., project.properties["archunit.version"] or a version catalog alias like libs.versions.archunit) so the dependency line uses the centralized version variable instead of a literal; add the property to gradle.properties if missing or map it in the version catalog to keep version management consistent with the other dependencies.

apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/entity/ProductReadModelEntity.java (1)
109-126: Calling `ZonedDateTime.now()` directly hurts testability

The direct `ZonedDateTime.now()` call at line 123 means the `updatedAt` value cannot be controlled in tests, making time-based logic hard to verify.

That said, no test currently asserts on `updatedAt`, and for a read model the DB sync timing is what matters, so an immediate fix is not required. If time-based tests become necessary, consider the `Clock` injection pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/entity/ProductReadModelEntity.java` around lines 109-126: the static factory ProductReadModelEntity.of calls ZonedDateTime.now() directly for the updatedAt field, which makes time-dependent tests hard to control; change the API to accept an explicit updatedAt (ZonedDateTime) or a Clock so callers/tests can supply a deterministic time, and update usages of ProductReadModelEntity.of accordingly (or add an overload taking updatedAt/Clock) so no direct ZonedDateTime.now() is invoked inside the method.

apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductIndexPerformanceTest.java (1)
72-80: The `out()` method swallows IOException

If writing the result file fails, the exception is ignored, so the benchmark results may silently never reach the file. Given the nature of a benchmark this is not fatal, but at minimum logging it is recommended.

```diff
 } catch (IOException e) {
-    // 무시
+    log.warn("결과 파일 쓰기 실패: {}", e.getMessage());
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductIndexPerformanceTest.java` around lines 72-80: the out(...) method currently swallows IOExceptions when appending to RESULT_FILE; update the catch block to log the failure instead of ignoring it, using the existing logger to call log.error with a clear message referencing RESULT_FILE and including the caught IOException (e) so the stacktrace and message are recorded; keep the rest of the method behavior the same.

apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductLikeCountConcurrencyTest.java (1)
97-125: Boundary-case tests for `decreaseLikeCount` are missing

The current `decreaseLikeCount` test only covers the case where an initial likeCount of 10 is decremented 10 times down to 0. In production, however, a decrement request can arrive while likeCount is already 0.

`ProductReadModelRepositoryImpl.decreaseLikeCount()` tolerates updatedRows=0, but no test verifies that the negative-value guard actually works, or whether this case is distinguishable from a missing target.

Adding the following cases is recommended:

- Decrement request while likeCount=0 → stays at 0
- Decrement request for a non-existent productId → confirm no exception is thrown (or state explicitly that one is)

As per coding guidelines, `**/*Test*.java`: check that unit tests cover boundary values, failure cases, and exception flows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductLikeCountConcurrencyTest.java` around lines 97-125: add boundary-case tests for decreaseLikeCount: one test that saves a ProductReadModelEntity with likeCount=0, invokes productCommandService.decreaseLikeCount(productId) and asserts the read-model likeCount remains 0 (verify via productReadModelJpaRepository.findById); and another test that calls productCommandService.decreaseLikeCount on a non-existent productId (e.g., productId+999 or Long.MAX_VALUE) and asserts the call does not throw (or that no entity exists for that id), documenting the expected behavior of the ProductReadModelRepositoryImpl.decreaseLikeCount flow.

apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/repository/ProductReadModelRepositoryImpl.java (1)
25-36: `save()` performs a SELECT followed by an INSERT/UPDATE on every call

It loads the existing entity with `findById()` and then calls `save()`, issuing two queries in total. Read-model sync runs on every product create/update, so this could become a bottleneck under frequent writes.

A native query using the `ON DUPLICATE KEY UPDATE` pattern could be considered as an alternative. If the current write frequency is low, though, keeping the current structure for readability is perfectly reasonable.

apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java (1)
61-61: The dedicated thread pool has no shutdown hook, risking thread leaks on context shutdown

The `newFixedThreadPool(3)` at line 61 is never explicitly shut down; leftover threads can delay shutdown or leak resources during tests and redeploys.

The fix is to add a graceful shutdown via `@PreDestroy`, or to replace the pool with a Spring-managed `TaskExecutor` bean.

An additional test should verify that the executor is terminated when the application context closes.

Example fix

```diff
+import jakarta.annotation.PreDestroy;
@@
 private final ExecutorService perExecutor = Executors.newFixedThreadPool(3);
+
+@PreDestroy
+void shutdownPerExecutor() {
+    perExecutor.shutdown();
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java` at line 61: the perExecutor field is created with Executors.newFixedThreadPool(3) and lacks a shutdown hook; add a graceful shutdown by either (a) annotating a new shutdown method with `@PreDestroy` that calls shutdown(), awaits termination with a timeout and calls shutdownNow() if necessary, or (b) replacing the raw ExecutorService with an injected Spring-managed TaskExecutor; also add a test that closes the application context and asserts perExecutor is terminated (or that the injected TaskExecutor has been shut down) to verify proper lifecycle cleanup.

apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java (2)
168-179: The `increaseLikeCount` test does not verify cache-refresh ordering

Data consistency requires the read-model update to happen before the cache refresh. The current test only verifies that both methods were called, not their order.

Fix: add `InOrder` verification to guarantee the `readModelRepository.increaseLikeCount` → `productCacheManager.refreshProductDetail` order.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java` around lines 168-179: update the test increaseLikeCountSuccess to assert call order: create an InOrder via Mockito.inOrder(readModelRepository, productCacheManager) and use inOrder.verify(...) to first verify readModelRepository.increaseLikeCount(1L) and then verify productCacheManager.refreshProductDetail(eq(1L), any()), instead of only using independent verify calls; keep the test name (increaseLikeCountSuccess) and assertions otherwise the same.
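The ordering guarantee that `InOrder` would pin down can be illustrated without Mockito. A minimal sketch that records calls in a list and asserts their relative order; all names here are illustrative stand-ins for the real collaborators:

```java
import java.util.ArrayList;
import java.util.List;

public class CallOrderSketch {
    static final List<String> calls = new ArrayList<>();

    static void increaseLikeCount(long id) { calls.add("readModel.increaseLikeCount"); }
    static void refreshProductDetail(long id) { calls.add("cache.refreshProductDetail"); }

    static void likeProduct(long id) {
        increaseLikeCount(id);    // 1. update the source of truth first
        refreshProductDetail(id); // 2. only then refresh the cache
    }

    public static void main(String[] args) {
        likeProduct(1L);
        // The property InOrder encodes: the read-model call precedes the cache call.
        boolean ordered = calls.indexOf("readModel.increaseLikeCount")
                < calls.indexOf("cache.refreshProductDetail");
        System.out.println(ordered); // true
    }
}
```

A plain `verify(...)` pair would pass even if the two lines in `likeProduct` were swapped; only an order-sensitive assertion catches that regression.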
393-401: The test uses the magic number 12, which obscures the cause of failure when constants change

The comment records the arithmetic (3 sorts × 2 pages × 2 (brand + all) = 12), but if the underlying constants (`MAX_CACHEABLE_PAGE`, `ProductSortType.values().length`) change, the test fails in a way that is hard to trace back to its cause.

Fix: import the constants and compute the value, or derive the expected call count in a test helper.

🔧 Example fix

```java
// 예상 호출 횟수를 상수 기반으로 계산
int expectedCalls = ProductSortType.values().length * MAX_CACHEABLE_PAGE * 2; // brand + all
verify(productCacheManager, times(expectedCalls)).refreshIdList(any(), any());
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java` around lines 393-401: replace the hard-coded 12 expectation with a value computed from the actual constants so the test stays correct when they change: compute expectedCalls using ProductSortType.values().length, MAX_CACHEABLE_PAGE, and the factor 2 (brand + all), then assert verify(productCacheManager, times(expectedCalls)).refreshIdList(any(), any()); update refreshIdListCacheForAllSortsSuccess to import/reference ProductSortType and MAX_CACHEABLE_PAGE and use the computed expectedCalls instead of the literal 12.

apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java (1)
147-156: `findProductCacheDtosByIds` does not preserve the input ID order

An `IN`-clause query does not guarantee result order. The caller (`ProductQueryService.mergeInOrder`) re-sorts, so current behavior is fine, but ordering bugs could surface if other code starts calling this method directly.

Fix: state in the Javadoc that "the returned order may differ from the input ID order", or apply `FIELD(id, ...)` or application-level reordering.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java` around lines 147-156: findProductCacheDtosByIds uses an IN query which does not guarantee result ordering; either document this behavior (add a Javadoc stating "returned order may differ from input ID order") or enforce input-order results by reordering after the fetch (map fetched ProductCacheDto by id and rebuild a list in the input sequence) or by using a DB ordering function (e.g., FIELD/CASE) if supported; ensure ProductQueryService.mergeInOrder remains compatible with the chosen approach.

apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java (1)
34-35: `@Transactional(readOnly = true)` is declared on both the Facade and the Service

With Spring's default transaction propagation (REQUIRED), the Facade's transaction propagates to the Service, so behavior is unaffected. Still, it is worth clarifying whether the duplicate declaration is intentional, and considering removing the Facade-level annotation if it is unnecessary.

Also applies to: 50-51

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java` around lines 34-35: the facade-level `@Transactional(readOnly = true)` duplicates the service layer's; remove the redundant annotation from the ProductQueryFacade methods (including the 50-51 region) and rely on the service's `@Transactional(readOnly = true)` to control transaction boundaries, or alternatively document/justify why the facade needs its own annotation if different semantics are intended. Ensure the service class retains its transaction annotation and that tests still pass after removing the facade annotations.

apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java (1)
192-194: The cacheability condition should document why the cache is bypassed when `size != DEFAULT_PAGE_SIZE`

The `isCacheable` method requires `size == DEFAULT_PAGE_SIZE`, but the code alone does not explain why other sizes bypass the cache. If the intent is to prevent cache-key explosion, stating that design decision in a comment will avoid confusion during maintenance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java` around lines 192-194: add a clear comment to isCacheable explaining that requests with size != DEFAULT_PAGE_SIZE intentionally bypass caching to avoid cache-key explosion and unpredictable cache hit rates; reference MAX_CACHEABLE_PAGE and DEFAULT_PAGE_SIZE so maintainers understand that only the default page size is cached for pages < MAX_CACHEABLE_PAGE as a deliberate design decision.

apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductCommandFacadeTest.java (1)
74-106: Tests for cache/read-model sync failure scenarios are missing

The current tests only verify the happy path where every collaborator behaves normally. In production, Redis outages or read-model save failures can occur, and verification of transaction rollback and partial-failure handling is needed.

Suggested additional tests:

- Verify transaction rollback when `syncReadModel` throws
- Verify whether product creation still succeeds when `refreshProductDetailCache` throws
- Verify the cache-refresh order works as intended with `InOrder`

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductCommandFacadeTest.java` around lines 74-106: add unit tests covering failure scenarios and call ordering: (1) a test where productCommandService.syncReadModel(product, "나이키") is stubbed to throw, asserting the facade propagates the exception and that downstream cache methods (refreshProductDetailCache, refreshIdListCacheForAllSorts, productQueryService.getAdminProductDetail) were NOT invoked, simulating rollback behavior; (2) a test where productCommandService.refreshProductDetailCache(1L) is stubbed to throw but createProduct still returns AdminProductDetailOutDto, verifying the creation path completed (verify createProduct and syncReadModel were called and productQueryService.getAdminProductDetail was invoked) despite the cache failure; and (3) an InOrder test asserting the call sequence on mocks: brandQueryService.getBrandById, productCommandService.createProduct, productCommandService.syncReadModel, productCommandService.refreshProductDetailCache, productCommandService.refreshIdListCacheForAllSorts, productQueryService.getAdminProductDetail.

apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java (1)
76-78: updateProduct's cache strategy is correct: only a price change affects the PRICE_ASC sort order

Analyzing how each mutable field (name, price, stock, description) affects each sort:

- LATEST: ordered by creation time → field changes have no effect
- PRICE_ASC: ordered by price → only price changes affect the order (currently refreshed)
- LIKES_DESC: ordered by like count → field changes have no effect

Refreshing only the detail cache (`refreshProductDetailCache`) and the PRICE_ASC cache is therefore appropriate. However, the current comment "(가격 변경 영향)" does not explain why the other fields leave the sort orders unaffected, so strengthening it as follows is recommended:

```java
// write-through: 상세 캐시 + PRICE_ASC 정렬 ID 리스트 갱신
// (다른 필드 변경은 LATEST/LIKES_DESC 정렬 순서에 영향 없음)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java` around lines 76-78: update the inline comment above the cache refresh calls to clearly state why only the detail cache and the PRICE_ASC id-list are refreshed: replace the existing "(가격 변경 영향)" comment with the clearer two-line comment "// write-through: 상세 캐시 + PRICE_ASC 정렬 ID 리스트 갱신" and "// (다른 필드 변경은 LATEST/LIKES_DESC 정렬 순서에 영향 없음)" so the intent is explicit near productCommandService.refreshProductDetailCache(id) and productCommandService.refreshIdListCacheForSort(updatedProduct.getBrandId(), ProductSortType.PRICE_ASC).

apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java (4)
623-683: The `findProductCacheDtoById()` test is missing a null input case

Whether `productQueryPort.findProductCacheDtoById(null)` throws an NPE or cleanly returns null should be verified; a null ID passed in from the cache layer could otherwise trigger unexpected errors.

```java
@Test
@DisplayName("[findProductCacheDtoById()] null ID -> null 반환 또는 예외")
void findProductCacheDtoByIdNullInput() {
    // Act & Assert
    // 구현에 따라 null 반환 또는 IllegalArgumentException 예상
    assertThat(productQueryPort.findProductCacheDtoById(null)).isNull();
    // 또는 assertThrows(IllegalArgumentException.class,
    //     () -> productQueryPort.findProductCacheDtoById(null));
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java` around lines 623-683: add a null-input test to FindProductCacheDtoByIdTest: create a new test method (e.g., findProductCacheDtoByIdNullInput) that calls productQueryPort.findProductCacheDtoById(null) and asserts the expected behavior (either assertThat(...).isNull() or assertThrows(IllegalArgumentException.class, () -> productQueryPort.findProductCacheDtoById(null))) to document and verify how the implementation handles null IDs; place it alongside the existing findProductCacheDtoByIdSuccess/findProductCacheDtoByIdDeleted tests and update expectations if the implementation should be changed to return null or throw a specific exception.
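The null-ID policy the missing test would pin down can be sketched in isolation. A minimal sketch assuming the "return null" branch (the alternative is throwing `IllegalArgumentException`); the record and method here are simplified stand-ins for the real port:

```java
public class NullIdSketch {
    record ProductCacheDto(Long id, String name) {}

    // Assumed policy: treat a null ID as "not found" and return null,
    // so callers in the cache layer never see an NPE from this path.
    static ProductCacheDto findProductCacheDtoById(Long id) {
        if (id == null) return null;
        return new ProductCacheDto(id, "상품");
    }

    public static void main(String[] args) {
        System.out.println(findProductCacheDtoById(null));    // null
        System.out.println(findProductCacheDtoById(1L).id()); // 1
    }
}
```

Whichever branch the implementation chooses, the value of the test is that the choice becomes explicit instead of an accident of the query builder.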
106-129: The `saveDeletedProductWithReadModel` helper performs a redundant double save

After calling `productJpaRepository.save(productEntity)`, it saves again via `delete()` → `save(saved)`. Within a single transaction, calling `productEntity.delete()` first and saving once yields the same result.

♻️ Improvement

```diff
 private ProductEntity saveDeletedProductWithReadModel(ProductEntity productEntity, String brandName) {
-    // 1. ProductEntity 저장 및 삭제 처리
-    ProductEntity saved = productJpaRepository.save(productEntity);
-    saved.delete();
-    ProductEntity deletedSaved = productJpaRepository.save(saved);
+    // 1. ProductEntity 삭제 처리 후 저장
+    productEntity.delete();
+    ProductEntity deletedSaved = productJpaRepository.save(productEntity);

     // 2. Product 도메인 모델 reconstruct (deletedAt 포함)
     Product product = Product.reconstruct(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java` around lines 106-129: the helper saveDeletedProductWithReadModel currently saves the entity twice (productJpaRepository.save(productEntity), then saved.delete(); productJpaRepository.save(saved)); instead call productEntity.delete() first and then persist once with productJpaRepository.save(productEntity) (assigned to a single variable), use that saved entity when reconstructing the Product and when creating the ProductReadModelEntity, and return it, so only one save occurs.
686-752: The `findProductCacheDtosByIds()` test is missing boundary cases

Adding the following cases is recommended:

- Empty list input: confirm whether passing `List.of()` returns an empty list or throws
- Partial match: confirm behavior when only some IDs exist (e.g., 1 of 2)

In production a cache miss triggers a DB lookup, so whether an empty input issues an unnecessary query should also be verified.

```java
@Test
@DisplayName("[findProductCacheDtosByIds()] 빈 ID 목록 -> 빈 목록 반환")
void findProductCacheDtosByIdsEmptyInput() {
    // Act
    List<ProductCacheDto> result = productQueryPort.findProductCacheDtosByIds(List.of());

    // Assert
    assertThat(result).isEmpty();
}

@Test
@DisplayName("[findProductCacheDtosByIds()] 일부 ID만 존재 -> 존재하는 상품만 반환")
void findProductCacheDtosByIdsPartialMatch() {
    // Arrange
    BrandEntity brand = brandJpaRepository.save(
            BrandEntity.of("브랜드", "설명", VisibleStatus.VISIBLE));
    ProductEntity existing = saveProductWithReadModel(
            ProductEntity.of(brand.getId(), "존재하는 상품", new BigDecimal("10000.00"), 100L, null),
            "브랜드");

    // Act
    List<ProductCacheDto> result = productQueryPort.findProductCacheDtosByIds(
            List.of(existing.getId(), 999L));

    // Assert
    assertAll(
            () -> assertThat(result).hasSize(1),
            () -> assertThat(result.get(0).id()).isEqualTo(existing.getId())
    );
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java` around lines 686-752: add two boundary tests for findProductCacheDtosByIds on productQueryPort: (1) findProductCacheDtosByIdsEmptyInput, which calls productQueryPort.findProductCacheDtosByIds(List.of()) and asserts the returned List<ProductCacheDto> is empty to prevent unnecessary DB queries on empty input; (2) findProductCacheDtosByIdsPartialMatch, which creates a BrandEntity via brandJpaRepository.save and one ProductEntity via saveProductWithReadModel, calls productQueryPort.findProductCacheDtosByIds with a list containing the existing product ID and a non-existent ID (e.g., 999L), and asserts the result contains only the existing ProductCacheDto (check size and id/name).
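The empty-input guard and partial-match behavior the suggested tests would pin down can be sketched in isolation. A minimal sketch: the in-memory map stands in for the database, and the query counter makes the "no query issued on empty input" claim checkable; all names are illustrative, not the real port:

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class EmptyIdsSketch {
    record ProductCacheDto(Long id, String name) {}

    static final Map<Long, ProductCacheDto> db = Map.of(
            1L, new ProductCacheDto(1L, "존재하는 상품"));

    static int queryCount = 0;

    static List<ProductCacheDto> findProductCacheDtosByIds(List<Long> ids) {
        if (ids == null || ids.isEmpty()) return List.of(); // guard: no query issued
        queryCount++; // stands in for the real IN-clause query
        return ids.stream().map(db::get).filter(Objects::nonNull).toList();
    }

    public static void main(String[] args) {
        System.out.println(findProductCacheDtosByIds(List.of()).size());         // 0, without a query
        System.out.println(findProductCacheDtosByIds(List.of(1L, 999L)).size()); // 1, partial match
    }
}
```

The guard matters because on a cache miss this path runs per request; an empty `IN ()` either errors or wastes a round trip depending on the database.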
349-483: Consider adding a LIKES_DESC sort case to the `searchAdminProducts()` tests

The `searchProducts()` tests cover `LIKES_DESC`, but `searchAdminProducts()` only tests `PRICE_ASC`. If the admin API supports the same sort options, consistent test coverage is needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java` around lines 349-483: add a LIKES_DESC test to SearchAdminProductsTest: create a new test method (e.g., searchAdminProductsSortByLikesDesc) that uses brandJpaRepository/saveProductWithReadModel to persist two admin products with different like counts, builds ProductSearchCriteria with ProductSortType.LIKES_DESC and PageCriteria, calls productQueryPort.searchAdminProducts(criteria, pageCriteria), and asserts the returned AdminProductOutDto list is ordered by descending likes (higher likes first) and the size is correct.
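The ordering the suggested LIKES_DESC case would assert can be sketched on plain records. Field names mirror the read model but are illustrative, not the project's actual DTO:

```java
import java.util.Comparator;
import java.util.List;

public class LikesDescSketch {
    record AdminProductOutDto(Long id, long likeCount) {}

    // LIKES_DESC: products with more likes come first.
    static List<AdminProductOutDto> sortByLikesDesc(List<AdminProductOutDto> products) {
        return products.stream()
                .sorted(Comparator.comparingLong(AdminProductOutDto::likeCount).reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<AdminProductOutDto> sorted = sortByLikesDesc(List.of(
                new AdminProductOutDto(1L, 3),
                new AdminProductOutDto(2L, 10)));
        System.out.println(sorted.get(0).id()); // 2 (the product with more likes)
    }
}
```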
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0759987d-bdbc-4d8a-acc4-6658aa95e54a
⛔ Files ignored due to path filters (14)
- `.claude/skills/index-design/SKILL.md` is excluded by `!**/*.md` and included by `**`
- `CLAUDE.md` is excluded by `!**/*.md` and included by `**`
- `apps/commerce-api/src/main/java/com/loopers/catalog/product/application/dto/out/AdminProductDetailOutDto.java` is excluded by `!**/out/**` and included by `**`
- `apps/commerce-api/src/main/java/com/loopers/catalog/product/application/dto/out/AdminProductOutDto.java` is excluded by `!**/out/**` and included by `**`
- `apps/commerce-api/src/main/java/com/loopers/catalog/product/application/dto/out/ProductDetailOutDto.java` is excluded by `!**/out/**` and included by `**`
- `apps/commerce-api/src/main/java/com/loopers/catalog/product/application/dto/out/ProductOutDto.java` is excluded by `!**/out/**` and included by `**`
- `apps/commerce-api/src/main/java/com/loopers/catalog/product/application/port/out/query/ProductQueryPort.java` is excluded by `!**/out/**` and included by `**`
- `docs/todo/cache-event-driven-refresh.md` is excluded by `!**/*.md` and included by `**`
- `docs/todo/like-count-read-model-recount-batch.md` is excluded by `!**/*.md` and included by `**`
- `round5-docs/04-to-be-index-measurement.md` is excluded by `!**/*.md` and included by `**`
- `round5-docs/05-to-be-cache-measurement.md` is excluded by `!**/*.md` and included by `**`
- `round5-docs/06-2layer-cache-implementation-design.md` is excluded by `!**/*.md` and included by `**`
- `round5-docs/07-cache-eviction-analysis.md` is excluded by `!**/*.md` and included by `**`
- `round5-docs/08-cross-domain-index-and-cache-analysis.md` is excluded by `!**/*.md` and included by `**`
📒 Files selected for processing (53)
- apps/commerce-api/build.gradle.kts
- apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductApiPerformanceTest.java
- apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductIndexPerformanceTest.java
- apps/commerce-api/src/main/java/com/loopers/cart/cart/infrastructure/entity/CartItemEntity.java
- apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java
- apps/commerce-api/src/main/java/com/loopers/catalog/brand/infrastructure/entity/BrandEntity.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductQueryFacade.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/domain/model/Product.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/domain/repository/ProductCommandRepository.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/domain/repository/ProductReadModelRepository.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheConstants.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/dto/IdListCacheEntry.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/dto/ProductCacheDto.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/CacheLock.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/LocalCacheLock.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/RedisCacheLock.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/entity/ProductEntity.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/entity/ProductReadModelEntity.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductJpaRepository.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductReadModelJpaRepository.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/mapper/ProductEntityMapper.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImpl.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/repository/ProductCommandRepositoryImpl.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/repository/ProductReadModelRepositoryImpl.java
- apps/commerce-api/src/main/java/com/loopers/coupon/coupontemplate/infrastructure/entity/CouponTemplateEntity.java
- apps/commerce-api/src/main/java/com/loopers/coupon/issuedcoupon/infrastructure/entity/IssuedCouponEntity.java
- apps/commerce-api/src/main/java/com/loopers/engagement/productlike/infrastructure/entity/ProductLikeEntity.java
- apps/commerce-api/src/main/java/com/loopers/ordering/order/infrastructure/entity/OrderEntity.java
- apps/commerce-api/src/main/java/com/loopers/ordering/order/infrastructure/entity/OrderItemEntity.java
- apps/commerce-api/src/test/java/com/loopers/catalog/brand/application/facade/BrandCommandFacadeTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductCommandFacadeTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductLikeCountConcurrencyTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductStockConcurrencyTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/domain/model/ProductTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/lock/LocalCacheLockTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/mapper/ProductEntityMapperTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/repository/ProductCommandRepositoryTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/interfaces/ProductControllerE2ETest.java
- apps/commerce-api/src/test/java/com/loopers/engagement/productlike/infrastructure/acl/catalog/ProductLikeCountSyncerImplTest.java
- apps/commerce-api/src/test/java/com/loopers/ordering/order/infrastructure/acl/catalog/OrderProductReaderImplTest.java
- round5-docs/04-to-be-index-visualization.html
- round5-docs/05-to-be-cache-visualization.html
💤 Files with no reviewable changes (4)
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/repository/ProductCommandRepositoryTest.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/domain/repository/ProductCommandRepository.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/repository/ProductCommandRepositoryImpl.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductJpaRepository.java
🚧 Files skipped from review as they are similar to previous changes (11)
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/CacheStampedeTest.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/LocalCacheLock.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManagerTest.java
- apps/commerce-api/src/test/java/com/loopers/ordering/order/infrastructure/acl/catalog/OrderProductReaderImplTest.java
- apps/commerce-api/src/test/java/com/loopers/catalog/product/application/facade/ProductQueryFacadeTest.java
- apps/commerce-api/src/main/java/com/loopers/ordering/order/infrastructure/entity/OrderItemEntity.java
- apps/commerce-api/src/main/java/com/loopers/cart/cart/infrastructure/entity/CartItemEntity.java
- apps/commerce-api/src/main/java/com/loopers/ordering/order/infrastructure/entity/OrderEntity.java
- apps/commerce-api/src/main/java/com/loopers/catalog/product/domain/repository/ProductReadModelRepository.java
- apps/commerce-api/src/main/java/com/loopers/engagement/productlike/infrastructure/entity/ProductLikeEntity.java
- apps/commerce-api/src/test/java/com/loopers/catalog/brand/application/facade/BrandCommandFacadeTest.java
```java
@Override
public void decreaseLikeCount(Long productId) {

    // 원자적 감소 (단일 SQL UPDATE — 0 이하로 내려가지 않음)
    int updatedRows = jpaRepository.decreaseLikeCount(productId);

    // 대상 Read Model 미존재 시 예외 (likeCount가 이미 0인 경우는 정상 — 0행 반환 허용)
    // Note: decreaseLikeCount WHERE likeCount > 0 조건으로 0행 반환은 이미 0인 경우도 포함
    // 따라서 여기서는 검증하지 않음 (음수 방지가 목적)
}
```
decreaseLikeCount does not distinguish a missing target from likeCount=0.
The comment explains the intent, but from an operational standpoint the following cases are indistinguishable:
- the productId does not exist
- the productId exists but likeCount is already 0
Both return updatedRows=0, so a call with an invalid productId fails silently. If this is an intentional design, strengthen the comment; if the distinction matters, add an up-front existence check.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/repository/ProductReadModelRepositoryImpl.java`
around lines 59 - 68, The decreaseLikeCount method in
ProductReadModelRepositoryImpl (which calls jpaRepository.decreaseLikeCount)
currently treats updatedRows==0 the same for both non-existent productId and
likeCount==0; add a pre-check using the repository existence method (e.g.,
jpaRepository.existsById(productId) or findById) before calling
decreaseLikeCount and throw a NotFound (or IllegalArgument) exception when the
product does not exist, otherwise call the atomic update and allow
updatedRows==0 to indicate likeCount was already 0; alternatively, if silent
no-op is desired, update the method comment to explicitly document that
non-existent productIds are ignored and leave behavior unchanged.
...i/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java
```java
@Test
@DisplayName("[searchProducts()] 빈 ID 리스트 -> 빈 페이지 반환. Layer 2 MGET 미호출")
@SuppressWarnings("unchecked")
void searchProductsEmptyIdList() {
    // Arrange — Layer 1: 빈 ID 리스트
    IdListCacheEntry idList = new IdListCacheEntry(List.of(), 0);
    given(productCacheManager.getOrLoad(
            eq("products:ids:v1:all:LATEST:0:20"),
            eq(IdListCacheEntry.class),
            any(Duration.class),
            any(Supplier.class)
    )).willReturn(idList);

    // Act
    ProductPageOutDto result = productQueryService.searchProducts(null, ProductSortType.LATEST, 0, 20);

    // Assert
    assertAll(
            () -> assertThat(result.content()).isEmpty(),
            () -> assertThat(result.totalElements()).isEqualTo(0),
            () -> verify(productCacheManager).getOrLoad(any(), any(), any(), any())
    );
}
```
The empty-ID-list test never verifies that the Layer 2 MGET is skipped.
The test name states "Layer 2 MGET not called", yet the test invokes neither verifyNoMoreInteractions(productCacheManager) nor verify(productCacheManager, never()).mgetProductDetails(any()). Only getOrLoad is verified, so the stated intent is not fully covered.
🔧 Suggested fix

```diff
 // Assert
 assertAll(
     () -> assertThat(result.content()).isEmpty(),
     () -> assertThat(result.totalElements()).isEqualTo(0),
-    () -> verify(productCacheManager).getOrLoad(any(), any(), any(), any())
+    () -> verify(productCacheManager).getOrLoad(any(), any(), any(), any()),
+    () -> verify(productCacheManager, never()).mgetProductDetails(any())
 );
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java`
around lines 285 - 307, The test searchProductsEmptyIdList in
ProductQueryServiceTest currently only verifies productCacheManager.getOrLoad
but not that the Layer 2 mget is not invoked; update the test to assert that no
Layer‑2 call occurs by adding a verification such as verify(productCacheManager,
never()).mgetProductDetails(any()) or
verifyNoMoreInteractions(productCacheManager) after the existing assertions so
the behavior of productCacheManager.mgetProductDetails (or absence thereof) is
explicitly checked.
```java
@Nested
@DisplayName("searchProductIds()")
class SearchProductIdsTest {

    @Test
    @DisplayName("[searchProductIds()] 활성 상품 존재 -> IdListCacheEntry(ids, totalElements) 반환. 정렬 적용")
    void searchProductIdsSuccess() {
        // Arrange
        BrandEntity brand = brandJpaRepository.save(
                BrandEntity.of("브랜드", "설명", VisibleStatus.VISIBLE));
        ProductEntity p1 = saveProductWithReadModel(
                ProductEntity.of(brand.getId(), "비싼 상품", new BigDecimal("50000.00"), 10L, null),
                "브랜드");
        ProductEntity p2 = saveProductWithReadModel(
                ProductEntity.of(brand.getId(), "저렴한 상품", new BigDecimal("10000.00"), 20L, null),
                "브랜드");

        ProductSearchCriteria criteria = new ProductSearchCriteria(null, ProductSortType.PRICE_ASC);
        PageCriteria pageCriteria = new PageCriteria(0, 10);

        // Act
        IdListCacheEntry result = productQueryPort.searchProductIds(criteria, pageCriteria);

        // Assert — PRICE_ASC 정렬: 저렴한 상품(p2) → 비싼 상품(p1)
        assertAll(
                () -> assertThat(result.ids()).hasSize(2),
                () -> assertThat(result.ids().get(0)).isEqualTo(p2.getId()),
                () -> assertThat(result.ids().get(1)).isEqualTo(p1.getId()),
                () -> assertThat(result.totalElements()).isEqualTo(2)
        );
    }

    @Test
    @DisplayName("[searchProductIds()] brandId 필터 -> 해당 브랜드의 ID만 반환")
    void searchProductIdsWithBrandFilter() {
        // Arrange
        BrandEntity brand1 = brandJpaRepository.save(
                BrandEntity.of("브랜드A", "설명A", VisibleStatus.VISIBLE));
        BrandEntity brand2 = brandJpaRepository.save(
                BrandEntity.of("브랜드B", "설명B", VisibleStatus.VISIBLE));
        ProductEntity p1 = saveProductWithReadModel(
                ProductEntity.of(brand1.getId(), "상품A", new BigDecimal("10000.00"), 100L, null),
                "브랜드A");
        saveProductWithReadModel(
                ProductEntity.of(brand2.getId(), "상품B", new BigDecimal("20000.00"), 200L, null),
                "브랜드B");

        ProductSearchCriteria criteria = new ProductSearchCriteria(brand1.getId(), null);
        PageCriteria pageCriteria = new PageCriteria(0, 10);

        // Act
        IdListCacheEntry result = productQueryPort.searchProductIds(criteria, pageCriteria);

        // Assert
        assertAll(
                () -> assertThat(result.ids()).hasSize(1),
                () -> assertThat(result.ids().get(0)).isEqualTo(p1.getId()),
                () -> assertThat(result.totalElements()).isEqualTo(1)
        );
    }

    @Test
    @DisplayName("[searchProductIds()] 삭제된 상품 제외 -> 활성 상품 ID만 반환")
    void searchProductIdsExcludesDeleted() {
        // Arrange
        BrandEntity brand = brandJpaRepository.save(
                BrandEntity.of("브랜드", "설명", VisibleStatus.VISIBLE));
        ProductEntity active = saveProductWithReadModel(
                ProductEntity.of(brand.getId(), "활성 상품", new BigDecimal("10000.00"), 100L, null),
                "브랜드");
        saveDeletedProductWithReadModel(
                ProductEntity.of(brand.getId(), "삭제 상품", new BigDecimal("20000.00"), 200L, null),
                "브랜드");

        ProductSearchCriteria criteria = new ProductSearchCriteria(null, null);
        PageCriteria pageCriteria = new PageCriteria(0, 10);

        // Act
        IdListCacheEntry result = productQueryPort.searchProductIds(criteria, pageCriteria);

        // Assert
        assertAll(
                () -> assertThat(result.ids()).hasSize(1),
                () -> assertThat(result.ids().get(0)).isEqualTo(active.getId()),
                () -> assertThat(result.totalElements()).isEqualTo(1)
        );
    }

    @Test
    @DisplayName("[searchProductIds()] 페이지네이션 -> 지정된 페이지 크기만큼 ID 반환. totalElements는 전체 개수")
    void searchProductIdsWithPagination() {
        // Arrange
        BrandEntity brand = brandJpaRepository.save(
                BrandEntity.of("브랜드", "설명", VisibleStatus.VISIBLE));
        for (int i = 1; i <= 5; i++) {
            saveProductWithReadModel(
                    ProductEntity.of(brand.getId(), "상품" + i, new BigDecimal("10000.00"), 100L, null),
                    "브랜드");
        }

        ProductSearchCriteria criteria = new ProductSearchCriteria(null, null);
        PageCriteria pageCriteria = new PageCriteria(0, 2);

        // Act
        IdListCacheEntry result = productQueryPort.searchProductIds(criteria, pageCriteria);

        // Assert
        assertAll(
                () -> assertThat(result.ids()).hasSize(2),
                () -> assertThat(result.totalElements()).isEqualTo(5)
        );
    }

    @Test
    @DisplayName("[searchProductIds()] 빈 결과 -> ids 빈 목록, totalElements 0")
    void searchProductIdsEmpty() {
        // Arrange
        ProductSearchCriteria criteria = new ProductSearchCriteria(null, null);
        PageCriteria pageCriteria = new PageCriteria(0, 10);

        // Act
        IdListCacheEntry result = productQueryPort.searchProductIds(criteria, pageCriteria);

        // Assert
        assertAll(
                () -> assertThat(result.ids()).isEmpty(),
                () -> assertThat(result.totalElements()).isEqualTo(0)
        );
    }
}
```
The searchProductIds() tests lack a LIKES_DESC sort case.
searchProducts() has a LIKES_DESC sort test (lines 251-275), but searchProductIds() only tests PRICE_ASC. The likes-descending ordering of the cached ID-list query needs coverage as well.
```java
@Test
@DisplayName("[searchProductIds()] LIKES_DESC 정렬 -> 좋아요 많은순으로 ID 반환")
void searchProductIdsSortByLikesDesc() {
    // Arrange
    BrandEntity brand = brandJpaRepository.save(
            BrandEntity.of("브랜드", "설명", VisibleStatus.VISIBLE));
    ProductEntity p1 = saveProductWithReadModel(
            ProductEntity.of(brand.getId(), "인기 상품", new BigDecimal("10000.00"), 100L, null),
            "브랜드", 50L);
    ProductEntity p2 = saveProductWithReadModel(
            ProductEntity.of(brand.getId(), "일반 상품", new BigDecimal("10000.00"), 100L, null),
            "브랜드", 5L);

    ProductSearchCriteria criteria = new ProductSearchCriteria(null, ProductSortType.LIKES_DESC);
    PageCriteria pageCriteria = new PageCriteria(0, 10);

    // Act
    IdListCacheEntry result = productQueryPort.searchProductIds(criteria, pageCriteria);

    // Assert — LIKES_DESC 정렬: 인기 상품(p1) → 일반 상품(p2)
    assertAll(
            () -> assertThat(result.ids()).hasSize(2),
            () -> assertThat(result.ids().get(0)).isEqualTo(p1.getId()),
            () -> assertThat(result.ids().get(1)).isEqualTo(p2.getId())
    );
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java`
around lines 486 - 620, Add a test in the SearchProductIdsTest nested class to
cover LIKES_DESC sorting: create two products via saveProductWithReadModel
(using the overload that accepts a likes value) with different like counts,
build ProductSearchCriteria with ProductSortType.LIKES_DESC and
PageCriteria(0,10), call productQueryPort.searchProductIds(criteria,
pageCriteria), and assert the returned IdListCacheEntry.ids() are ordered by
descending likes (first the higher-liked product then the lower) and size
matches; name it searchProductIdsSortByLikesDesc and include a clear
`@DisplayName` and assertions comparing ids to p1.getId()/p2.getId().
```html
    <div class="context">AS-IS 90% -> 완전 해소 (실측)</div>
</div>
<div class="kpi ok">
    <div class="label">1000만건 지속부하 QPS</div>
    <div class="value">20.0</div>
    <div class="context">AS-IS 0.6~0.8 -> 목표 달성 (실측)</div>
</div>
<div class="kpi ok">
    <div class="label">EXPLAIN type</div>
    <div class="value">ref</div>
    <div class="context">AS-IS ALL -> 인덱스 활용</div>
```
The arrow notation (->) inside the KPI cards triggers HTMLHint errors.
To pass the documentation quality gate, static analysis must report zero errors. Escape the > as &gt;, or replace the arrow with an HTML entity (&rarr; / &#8594;) or the → character.
Suggested fix

```diff
- <div class="context">AS-IS 90% -> 완전 해소 (실측)</div>
+ <div class="context">AS-IS 90% → 완전 해소 (실측)</div>
  ...
- <div class="context">AS-IS 0.6~0.8 -> 목표 달성 (실측)</div>
+ <div class="context">AS-IS 0.6~0.8 → 목표 달성 (실측)</div>
  ...
- <div class="context">AS-IS ALL -> 인덱스 활용</div>
+ <div class="context">AS-IS ALL → 인덱스 활용</div>
```

Follow-up check: after the fix, run npx htmlhint round5-docs/04-to-be-index-visualization.html and confirm zero errors.
🧰 Tools
🪛 HTMLHint (1.9.2)
[error] 145-145: Special characters must be escaped : [ > ]. (spec-char-escape)
[error] 150-150: Special characters must be escaped : [ > ]. (spec-char-escape)
[error] 155-155: Special characters must be escaped : [ > ]. (spec-char-escape)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@round5-docs/04-to-be-index-visualization.html` around lines 145 - 155, KPI
card text nodes containing the literal arrow "->" (e.g., within <div
class="context"> elements inside the "kpi" blocks such as the lines "AS-IS 90%
-> 완전 해소 (실측)" and "AS-IS 0.6~0.8 -> 목표 달성 (실측)") must escape the greater-than
sign or use a proper arrow entity; update those context strings to use > or a
semantic entity like → / → so HTMLHint stops flagging them, then re-run npx
htmlhint round5-docs/04-to-be-index-visualization.html to confirm zero errors.
```html
<h2 class="section-title" id="sec-error">3. 에러율 비교 (AS-IS vs TO-BE)</h2>
<p class="section-desc">
    모든 차트가 동일한 Y축(에러율 0~100%)을 사용하여 직접 비교 가능.<br>
    빨강 = AS-IS, 초록 = TO-BE. 인덱스 적용으로 커넥션 점유 시간이 대폭 감소 -> 에러율 해소.
```
The arrow in this section description needs the same escaping treatment.

```diff
- 빨강 = AS-IS, 초록 = TO-BE. 인덱스 적용으로 커넥션 점유 시간이 대폭 감소 -> 에러율 해소.
+ 빨강 = AS-IS, 초록 = TO-BE. 인덱스 적용으로 커넥션 점유 시간이 대폭 감소 → 에러율 해소.
```
🧰 Tools
🪛 HTMLHint (1.9.2)
[error] 382-382: Special characters must be escaped : [ > ]. (spec-char-escape)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@round5-docs/04-to-be-index-visualization.html` at line 382, The section
description uses an unescaped arrow ("->") in the sentence "빨강 = AS-IS, 초록 =
TO-BE. 인덱스 적용으로 커넥션 점유 시간이 대폭 감소 -> 에러율 해소." — update this to use the same
escaping convention as other arrows in the document (e.g., replace "->" with the
escaped/HTML entity form used elsewhere such as "→" or "\->" depending on
the repo convention) so the arrow renders consistently; modify that exact string
in the section text to the escaped form.
d23cfd0 to 45fc52a (Compare)
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java (1)
233-269: ⚠️ Potential issue | 🟡 Minor: failure paths should also verify that the read model and cache stay untouched.
The tests currently assert only the exception; they never verify that save, updateStock, and refreshProductDetail are not invoked. In production, a failed stock decrement that still updates the read model or Redis surfaces as an inventory consistency incident. Verify zero save/sync/cache-refresh invocations with never() or verifyNoInteractions(), and test the PRODUCT_NOT_FOUND and PRODUCT_OUT_OF_STOCK paths separately for the absence of side effects. As per coding guidelines, "**/*Test*.java: unit tests must cover boundary values, failure cases, and exception flows."
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java` around lines 233 - 269, Both failure-path tests for decreaseStock (PRODUCT_NOT_FOUND and PRODUCT_OUT_OF_STOCK) must also assert that no side-effect methods were invoked; after asserting the CoreException, add Mockito verifications that productSave/updateStock/refreshProductDetail (e.g., productRepository.save(...), productCommandRepository.updateStock(...), and productDetailCache.refreshProductDetail(...)) were never called using verify(..., never()) or verifyNoInteractions(...) so each test verifies zero Read Model/cache updates in its respective failure path. Ensure you do this in both decreaseStockProductNotFound and decreaseStockOutOfStock tests and keep the exception assertions intact.apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductLikeCountConcurrencyTest.java (1)
97-124:⚠️ Potential issue | 🟡 Minor음수 방지 경계가 아직 검증되지 않는다.
현재 케이스는 초기 likeCount=10에 감소 10건만 보내므로
like_count > 0가드를 초과하려는 경쟁 상황을 만들지 못한다. 운영에서는 중복 unlike나 재시도로 감소 요청이 더 많이 들어올 수 있어 0 이하 방지 회귀를 놓치기 쉽다. 초기값보다 많은 감소 요청(예: 10 → 20건 이상)을 동시에 보내고 최종값이 0으로 유지되는 케이스를 추가하는 편이 안전하다. 추가 테스트로 예외 없이 0을 유지하고 음수가 되지 않는지 함께 검증해야 한다.As per coding guidelines,
**/*Test*.java: 단위 테스트는 경계값/실패 케이스/예외 흐름을 포함하는지 점검한다. 통합 테스트는 격리 수준, 플래키 가능성, 테스트 데이터 준비/정리 전략을 점검한다.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductLikeCountConcurrencyTest.java` around lines 97 - 124, Update the concurrentDecreaseLikeCount test (or add a new test) to verify the non-negative guard by sending more decrease requests than the initial likeCount: initialize the read model via saveReadModel(...) with likeCount=10, spawn e.g. 20 concurrent calls to productCommandService.decreaseLikeCount(productId) (use ExecutorService and collect futures), wait for completion and assert no futures threw exceptions, then fetch the ProductReadModelEntity via productReadModelJpaRepository.findById(productId) and assert result.getLikeCount() == 0L and result.getLikeCount() >= 0L to ensure no negative values; keep references to the existing helpers (saveReadModel, productCommandService.decreaseLikeCount, productReadModelJpaRepository) so the change is localized.
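The suggested boundary case can be sketched without Spring: a retrying atomic floor that mirrors the like_count > 0 guard, hit with more decrements than the initial count. The class and counts below are illustrative, not the PR's service code:

```java
import java.util.concurrent.atomic.AtomicLong;

// Mirrors UPDATE ... SET like_count = like_count - 1 WHERE like_count > 0:
// the value can reach 0 but never goes negative, even under contention.
class FlooredCounter {
    final AtomicLong value;

    FlooredCounter(long initial) { value = new AtomicLong(initial); }

    void decrease() {
        // updateAndGet retries on contention, so concurrent callers
        // cannot jointly push the value below zero.
        value.updateAndGet(v -> v > 0 ? v - 1 : v);
    }
}
```

In the real test, 20 or more concurrent decrease calls against an initial count of 10 should leave the read model at exactly 0.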
♻️ Duplicate comments (20)
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/RedisCacheLock.java (2)
56-63: ⚠️ Potential issue | 🟠 Major: running the loader directly on lock-wait failure or interrupt weakens stampede protection.
Operationally, calling loader.get() unconditionally after waiting on the lock means concurrent misses trigger duplicate loads and a DB spike. After waiting, do not invoke the loader directly; re-check the cache or retry lock acquisition instead, and apply the same rule on InterruptedException. Add tests verifying that concurrent requests for the same key invoke the loader at most once (or within the allowed policy), and that interruption does not increase duplicate calls.
Also applies to: 74-80
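The re-check-before-load pattern this comment asks for can be sketched with plain JDK locks (an assumption-level sketch, not the PR's RedisCacheLock): after waiting on the per-key lock, look in the cache again before ever calling the loader, so concurrent misses collapse into one load.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

class DoubleCheckedCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    Object getOrLoad(String key, Supplier<Object> loader) {
        Object cached = cache.get(key);
        if (cached != null) {
            return cached; // fast path: no lock on a hit
        }
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            // Re-check: a competing thread may have loaded while we waited,
            // so the loader runs at most once per miss window.
            return cache.computeIfAbsent(key, k -> loader.get());
        } finally {
            lock.unlock();
        }
    }
}
```

The same shape applies in the distributed case: on lock-wait timeout or interrupt, go back to a cache read instead of straight to the loader.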
49-50: ⚠️ Potential issue | 🟠 Major: lock release has no ownership check, so one thread can delete another's lock.
Operationally, if the TTL expires and another node re-acquires the lock, a delete by the previous owner breaks mutual exclusion and concurrent DB access spikes. Store an owner UUID as the lock value and release with an atomic Lua compare-and-delete (DEL only when the value matches). Add a test that delays the loader beyond LOCK_TTL to force re-acquisition and verifies the earlier owner's release does not remove the later owner's lock.

Example fix

```diff
-Boolean acquired = redisTemplate.opsForValue().setIfAbsent(lockKey, "1", LOCK_TTL);
+String ownerId = java.util.UUID.randomUUID().toString();
+Boolean acquired = redisTemplate.opsForValue().setIfAbsent(lockKey, ownerId, LOCK_TTL);
 ...
-redisTemplate.delete(lockKey);
+redisTemplate.execute(
+    (org.springframework.data.redis.core.RedisCallback<Long>) connection ->
+        connection.scriptingCommands().eval(
+            ("if redis.call('get', KEYS[1]) == ARGV[1] then "
+                + "return redis.call('del', KEYS[1]) else return 0 end").getBytes(),
+            org.springframework.data.redis.connection.ReturnType.INTEGER,
+            1,
+            lockKey.getBytes(),
+            ownerId.getBytes()
+        )
+);
```

Also applies to: 65-68
307-339:⚠️ Potential issue | 🟠 Major아키텍처 설명이 실제 분산 경로와 무효화 전략을 오해하게 만든다.
운영 관점에서 문서가 LocalCacheLock 중심 단일 경로와
SCAN+DEL을 제시하면 멀티 인스턴스 장애 대응 시 잘못된 가정을 만든다. 다이어그램/표를단일 노드(LocalCacheLock)와분산 노드(RedisCacheLock)경로로 분리하고, 락 획득 성공/실패 후 fallback 흐름 및 버전 키 기반 무효화 경로로 정정해야 한다. 추가 테스트로 문서 내용을ProductCacheManager/CacheLock구현과 대조하는 체크리스트를 두어 불일치를 방지해야 한다.Also applies to: 367-387
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@round5-docs/05-to-be-cache-visualization.html` around lines 307 - 339, The architecture doc currently implies a single-node LocalCacheLock path and SCAN+DEL invalidation which can mislead about distributed behavior; update the diagram and text to clearly separate single-node (LocalCacheLock) vs distributed (RedisCacheLock) flows, show lock acquisition success and failure fallback paths (e.g., when ProductCacheManager fails to get Redis lock fall back to local lock or serve stale), replace blanket "SCAN + DEL" with a version-key or publish/subscribe invalidation path and describe fallback (e.g., fall back to pattern-based background invalidation only if safe), and add a short checklist mapping the doc steps to the actual ProductCacheManager and CacheLock (LocalCacheLock, RedisCacheLock) implementations to verify behavior parity.
354-361: ⚠️ Potential issue | 🟠 Major: the documented TTL/consistency-window numbers diverge from the implementation.
Operationally, when documented TTLs differ from the code, estimated stampede/consistency windows are wrong and distort incident response. Align the table and conclusion with the actual constants (detail/list TTL plus jitter), and update the SCAN+DEL wording to match the current invalidation strategy. Add an automated drift check comparing the documented numbers against ProductCacheConstants.
Also applies to: 416-416
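The "TTL + jitter" the comment refers to fits in a few lines; the base TTL and jitter bound below are placeholder values, not the PR's ProductCacheConstants:

```java
import java.util.concurrent.ThreadLocalRandom;

class TtlJitter {
    // Spread expirations so keys cached together do not all expire together,
    // which would otherwise send a synchronized burst of misses to the DB.
    static long withJitterSeconds(long baseTtlSeconds, long maxJitterSeconds) {
        return baseTtlSeconds
                + ThreadLocalRandom.current().nextLong(0, maxJitterSeconds + 1);
    }
}
```

Documenting the base and the jitter bound separately (for example "600s plus 0-60s jitter") keeps the doc honest about the real consistency window.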
399-403: ⚠️ Potential issue | 🟡 Minor: the conclusion's KPI text is not synchronized with the chart's source data.
Operationally, conclusion numbers that differ from the dataset undermine the basis for performance decisions. Compute the conclusion figures from the same dataset constant at render time instead of hardcoding them, and add an automated check comparing the conclusion text against the chart data ranges.
7-7: ⚠️ Potential issue | 🟠 Major: the benchmark document depends on a single external CDN, which breaks reproducibility.
Operationally, if rendering depends on network conditions, the evidence base for regression analysis is unstable. Self-host Chart.js as a pinned repository asset, or at minimum add integrity/crossorigin attributes and a load-failure fallback notice. Verify in browser offline mode that either the charts or the fallback notice renders.
314-339: ⚠️ Potential issue | 🟡 Minor: raw > characters in the architecture arrows keep the lint errors alive.
Operationally, accumulating documentation quality-gate failures erodes release confidence. Replace the --> notation with --&gt; or → to satisfy HTML escaping rules, then re-run htmlhint round5-docs/05-to-be-cache-visualization.html and confirm zero errors.
42-74:⚠️ Potential issue | 🟡 Minor성공 케이스만으로는 ACL 경계 회귀를 충분히 고정하지 못한다.
운영 관점에서 외부 조회 경계의 호출 횟수 증가나 예외 흐름 변경은 카탈로그 부하 급증 및 주문 실패로 이어질 수 있다. 성공 테스트 끝에
findActiveByIds1회 호출과 추가 상호작용 없음 검증을 넣고, 별도 테스트로findActiveByIds예외 발생 시readProducts의 전파 정책을 고정해야 한다. 추가 테스트로 (1) 호출 횟수 고정 테스트, (2) 예외 전파/변환 테스트를 각각 추가해야 한다.As per coding guidelines, "단위 테스트는 경계값/실패 케이스/예외 흐름을 포함하는지 점검한다."
Also applies to: 77-88
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/ordering/order/infrastructure/acl/catalog/OrderProductReaderImplTest.java` around lines 42 - 74, Add two tests around OrderProductReaderImpl.readProducts to harden ACL boundaries: (1) in the existing readProductsSuccess test, after the assertions verify that productQueryFacade.findActiveByIds(productIds) was invoked exactly once and there were no further interactions (use verify and verifyNoMoreInteractions against productQueryFacade) to lock the call count; (2) add a new test where productQueryFacade.findActiveByIds(productIds) is mocked to throw an exception and assert readProducts either propagates or converts that exception according to the module policy (assertThrows against orderProductReaderImpl.readProducts), referencing the methods orderProductReaderImpl.readProducts and productQueryFacade.findActiveByIds to locate the code under test.round5-docs/04-to-be-index-visualization.html (1)
145-155: ⚠️ Potential issue | 🟡 Minor — The "->" notation throughout the document violates the HTML escaping rules and breaks the quality gate.
Operationally, repeated static-analysis failures erode trust in the docs deployment pipeline. Replace every "->" with "→" or "-&gt;" in one consistent pass. As a follow-up check, run htmlhint round5-docs/04-to-be-index-visualization.html and confirm zero errors. Also applies to: 382-382, 410-413, 453-461
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@round5-docs/04-to-be-index-visualization.html` around lines 145 - 155, Replace all literal "->" occurrences in the HTML (e.g., inside <div class="context"> and other <div class="kpi"> blocks shown in the diff) with a proper HTML-safe arrow such as "→" or the entity "&rarr;" (or use "-&gt;" if you prefer the original glyph with an escaped '>'), ensuring you do not leave raw "->" characters in text nodes; apply this change consistently across the file (including the other mentioned locations) and re-run the static check with htmlhint round5-docs/04-to-be-index-visualization.html to verify zero errors before committing.
apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java (1)
64-68: ⚠️ Potential issue | 🟠 Major — Do not refresh the product detail cache before the commit.
This block performs a Redis write-through inside the brand-update transaction. In production, the more products a brand has, the longer the transaction runs, and if a rollback or commit failure follows, only the cache moves ahead with the new brand name, creating a consistency incident. Keep only syncBrandNameInReadModel inside the transaction, and run the detail-cache refresh after the commit via TransactionSynchronizationManager.registerSynchronization(...) or @TransactionalEventListener(phase = AFTER_COMMIT). Also add tests for the rollback case, where an exception after the brand update leaves the cache untouched, and for the happy path, where all active products are refreshed after a successful commit.
🛠️ Suggested fix
+import org.springframework.transaction.support.TransactionSynchronization;
+import org.springframework.transaction.support.TransactionSynchronizationManager;
 ...
-        List<Long> productIds = productQueryService.findActiveIdsByBrandId(id);
-        for (Long productId : productIds) {
-            productCommandService.refreshProductDetailCache(productId);
-        }
+        TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
+            @Override
+            public void afterCommit() {
+                productQueryService.findActiveIdsByBrandId(id)
+                    .forEach(productCommandService::refreshProductDetailCache);
+            }
+        });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java` around lines 64 - 68, The brand update currently refreshes product detail cache inside the BrandCommandFacade transaction (calling productQueryService.findActiveIdsByBrandId and productCommandService.refreshProductDetailCache in the same method that calls syncBrandNameInReadModel), which can cause cache/DB inconsistency on rollback; move the loop that invokes productCommandService.refreshProductDetailCache for each productId out of the transactional boundary by registering a TransactionSynchronizationManager.registerSynchronization callback or by publishing an event handled with `@TransactionalEventListener`(phase = AFTER_COMMIT) to run after commit, and add tests that verify (1) on commit all active products returned by productQueryService.findActiveIdsByBrandId have their caches refreshed and (2) on rollback no cache refresh is executed.
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java (1)
48-56: ⚠️ Potential issue | 🟠 Major — Move the Redis refreshes in the product write flow to after-commit.
The create/update/delete flows all refresh the detail cache and the ID-list cache directly inside the transaction. In production, the cache runs ahead of the DB on rollback, and the write transaction is lengthened by the Redis round-trips, increasing lock contention and response latency. Leave syncReadModel and the response lookup as they are, but split refreshProductDetailCache, refreshIdListCache..., and deleteProductDetailCache into an after-commit hook or event listener. Also add tests for each of create/update/delete verifying that the cache is unchanged when an exception occurs before commit, and that the detail/list caches are updated only after a successful commit.
🛠️ Suggested fix
+import org.springframework.transaction.support.TransactionSynchronization;
+import org.springframework.transaction.support.TransactionSynchronizationManager;
 ...
-        productCommandService.refreshProductDetailCache(savedProduct.getId());
-        productCommandService.refreshIdListCacheForAllSorts(savedProduct.getBrandId());
+        runAfterCommit(() -> {
+            productCommandService.refreshProductDetailCache(savedProduct.getId());
+            productCommandService.refreshIdListCacheForAllSorts(savedProduct.getBrandId());
+        });
 ...
-        productCommandService.refreshProductDetailCache(id);
-        productCommandService.refreshIdListCacheForSort(updatedProduct.getBrandId(), ProductSortType.PRICE_ASC);
+        runAfterCommit(() -> {
+            productCommandService.refreshProductDetailCache(id);
+            productCommandService.refreshIdListCacheForSort(updatedProduct.getBrandId(), ProductSortType.PRICE_ASC);
+        });
 ...
-        productCommandService.deleteProductDetailCache(id);
-        productCommandService.refreshIdListCacheForAllSorts(product.getBrandId());
+        runAfterCommit(() -> {
+            productCommandService.deleteProductDetailCache(id);
+            productCommandService.refreshIdListCacheForAllSorts(product.getBrandId());
+        });
+
+    private void runAfterCommit(Runnable task) {
+        TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
+            @Override
+            public void afterCommit() {
+                task.run();
+            }
+        });
+    }
Also applies to: 73-81, 95-99
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java` around lines 48 - 56, The current write flow in ProductCommandFacade calls productCommandService.refreshProductDetailCache and productCommandService.refreshIdListCacheForAllSorts (and deleteProductDetailCache on deletes) inside the transaction; move those cache-refresh calls out of the transaction and invoke them only after successful commit via an after-commit hook or transactional event listener (e.g., TransactionSynchronizationManager/TransactionSynchronizationAdapter or `@TransactionalEventListener`) while leaving productCommandService.syncReadModel(savedProduct, ...) and the subsequent productQueryService.getAdminProductDetail(...) unchanged; apply the same change to the other write paths that call refreshProductDetailCache/refreshIdListCacheForAllSorts/deleteProductDetailCache (the create/update/delete methods) and add tests that assert cache is unchanged on pre-commit exceptions and is updated only after a successful commit.
apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java (1)
285-306: ⚠️ Potential issue | 🟡 Minor — Contrary to its name, the test does not verify that Layer 2 is never called.
The current assertion only checks the getOrLoad() call and ignores whether mgetProductDetails() goes uncalled. This is not a production bug, but if the empty-ID-list optimization breaks, this test keeps passing. Adding verify(productCacheManager, never()).mgetProductDetails(any()) or verifyNoMoreInteractions(productCacheManager) is safer. As a follow-up test, verify that a cache hit on an empty ID list triggers no secondary DB/Redis lookup at all.
As per coding guidelines, **/*Test*.java: unit tests should cover boundary values, failure cases, and exception flows; integration tests should be checked for isolation level, flakiness risk, and test-data setup/teardown strategy.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java` around lines 285 - 306, The test searchProductsEmptyIdList currently only verifies productCacheManager.getOrLoad(...) was called but doesn't assert that the second-layer lookup is skipped; update the test to also verify that productCacheManager.mgetProductDetails(...) is never invoked (e.g., add verify(productCacheManager, never()).mgetProductDetails(any()) or alternatively verifyNoMoreInteractions(productCacheManager)) so the empty-ID-list optimization is enforced; keep the existing arrangement and assertions and add this extra verification step referencing the searchProductsEmptyIdList test and the productCacheManager.mgetProductDetails method.
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java (2)
128-170: ⚠️ Potential issue | 🟠 Major — Cache writes execute before the commit, leaving ghost state on rollback.
increaseLikeCount()/decreaseLikeCount()/decreaseStock() and the refresh methods used by the Facade all invoke Redis updates directly inside the transaction. refreshProductDetail() executes its loader immediately, so if an exception or commit failure follows, only Redis ends up ahead and query results diverge from the DB. It is safer to split cache refresh/eviction out so it runs only after commit, via TransactionSynchronizationManager or @TransactionalEventListener(phase = AFTER_COMMIT). As a follow-up test, raise an exception after the cache refresh to force a rollback and verify the Redis value does not change.
As per coding guidelines, **/*Service*.java: check the placement of transaction boundaries (@Transactional), propagation, readOnly, and rollback conditions. Also applies to: 202-242
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java` around lines 128 - 170, The cache writes currently occur inside the transactional methods increaseLikeCount, decreaseLikeCount, and decreaseStock via productCacheManager.refreshProductDetail(...) which calls the loader immediately, causing stale/ghost cache on rollback; change the flow so cache refresh runs only after successful commit by deferring the call using TransactionSynchronizationManager.registerSynchronization (afterCommit) or by publishing an application event and handling it with an `@TransactionalEventListener`(phase = AFTER_COMMIT) that invokes productCacheManager.refreshProductDetail with productQueryPort.findProductCacheDtoById as the loader; update the three methods to remove direct calls to refreshProductDetail and instead publish the post-commit event (or register the synchronization), and add tests that trigger an exception after publishing to verify Redis is not modified on rollback.
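Independent of Spring's API, the after-commit deferral recommended above can be illustrated with a minimal stand-in; AfterCommitQueue below is hypothetical and only mimics the afterCommit/rollback contract of TransactionSynchronizationManager, not the PR's code:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for registerSynchronization: callbacks queued during the
// "transaction" run only on commit, never on rollback.
class AfterCommitQueue {
    private final List<Runnable> callbacks = new ArrayList<>();

    void register(Runnable callback) { callbacks.add(callback); }

    void commit() {                        // after-commit phase
        callbacks.forEach(Runnable::run);
        callbacks.clear();
    }

    void rollback() { callbacks.clear(); } // rollback discards deferred work
}

public class AfterCommitDemo {
    static int refreshedCount(boolean commit) {
        AfterCommitQueue tx = new AfterCommitQueue();
        int[] refreshed = {0};
        // inside the "transaction": defer the cache refresh instead of running it
        tx.register(() -> refreshed[0]++);
        if (commit) tx.commit(); else tx.rollback();
        return refreshed[0];
    }

    public static void main(String[] args) {
        System.out.println(refreshedCount(true));  // 1 — refresh runs after commit
        System.out.println(refreshedCount(false)); // 0 — rollback leaves cache untouched
    }
}
```

This is exactly the property the review asks the tests to pin down: the refresh count is 1 only on the commit path and 0 on rollback.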
128-148: ⚠️ Potential issue | 🟠 Major — Like-count changes leave the LIKES_DESC list cache untouched.
Line 136 and Line 148 reload only the detail cache and never touch the ID-list cache. In production, the detail's likeCount is fresh while the popularity-sorted list keeps its old order for up to the full TTL. After a like-count change, evict the brand-scoped and global LIKES_DESC list caches (or invalidate them with a version bump) so the next read recomputes them. As a follow-up test, verify that after two products' likeCount values cross, the first-page ordering flips immediately.
As per coding guidelines, **/*Service*.java: check that responsibilities are organized per use case and that side effects are explicit.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java` around lines 128 - 148, increaseLikeCount and decreaseLikeCount currently refresh only the product detail cache via productCacheManager.refreshProductDetail, leaving the LIKES_DESC list caches stale; update both methods to also evict or version-bump the LIKES_DESC list cache entries (brand-specific and global) so the popularity-sorted lists are recomputed on next read (use the same cache manager or listCache abstraction used elsewhere), and add a test that simulates two products crossing likeCount values and asserts the first-page LIKES_DESC ordering flips immediately after the like change.
apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductApiPerformanceTest.java (2)
325-385: ⚠️ Potential issue | 🟠 Major — Abnormal HTTP responses are aggregated as successful latency.
The single/burst/sustained-load measurements currently treat only exceptions as failures and never inspect the HTTP status. Even if a production fault returns 4xx/5xx, the performance numbers are recorded as if healthy, making the results untrustworthy. It is safer to check mockMvc.perform(...).andReturn().getResponse().getStatus(), record only 2xx responses in the latency array, and count the rest toward errors or fail immediately. As a follow-up test, add cases that deliberately trigger 404/500 and verify the aggregated error rate matches the actual response codes.
As per coding guidelines, **/*Test*.java: integration tests should be checked for isolation level, flakiness risk, and test-data setup/teardown strategy. Also applies to: 390-432, 437-488
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductApiPerformanceTest.java` around lines 325 - 385, measureSingleApiNoCache and measureSingleApi currently treat any request as successful and record latency even when HTTP 4xx/5xx are returned; update both methods to call mockMvc.perform(get(url)).andReturn(), read the response status via getResponse().getStatus(), and only record the measured time into the times[] array when status is 2xx; for non-2xx responses either increment an errors counter (or throw an AssertionError to fail the run) and do not include that run's duration in the latency stats, then compute averages based on successful run count; also add a unit-case in the test suite that triggers 404 and 500 responses to assert error rate is aggregated correctly.
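The 2xx-only aggregation rule can be sketched in isolation; the status and latency arrays below are made-up sample data, not output from the real benchmark:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the suggested rule: only 2xx responses contribute to the latency
// series; everything else lands in the error bucket.
public class LatencyAggregation {
    // returns {errorRate, averageLatencyOver2xxOnly}
    static double[] errorRateAndAvg(int[] statuses, long[] latenciesMs) {
        List<Long> ok = new ArrayList<>();
        int errors = 0;
        for (int i = 0; i < statuses.length; i++) {
            if (statuses[i] / 100 == 2) ok.add(latenciesMs[i]); // keep 2xx only
            else errors++;                                      // 4xx/5xx -> error
        }
        double avg = ok.isEmpty() ? 0.0
                : ok.stream().mapToLong(Long::longValue).average().getAsDouble();
        return new double[]{(double) errors / statuses.length, avg};
    }

    public static void main(String[] args) {
        int[] statuses = {200, 200, 404, 500, 201};
        long[] latencies = {10, 20, 5, 400, 30};
        double[] r = errorRateAndAvg(statuses, latencies);
        System.out.println(r[0]); // 0.4 — two failures out of five
        System.out.println(r[1]); // 20.0 — average over the three 2xx calls only
    }
}
```

Note how excluding the slow 500 response (400 ms) keeps the average honest: with the current benchmark code that sample would have inflated the "successful" latency.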
264-273: ⚠️ Potential issue | 🟠 Major — The MISS measurement is actually measuring cache hits.
Lines 266-273 flush Redis and then call measureSingleApi(), but that method first runs three warmup requests (lines 359-362), so the first request populates the cache. The MISS metric that will feed operational decisions is therefore understated. Split out a MISS-only measurement path that records only the first request without warmup, or flush Redis immediately before each iteration and record only the first call. As a follow-up test, verify that the Redis key count is zero right before the measurement and that keys appear only after it.
As per coding guidelines, **/*Test*.java: integration tests should be checked for isolation level, flakiness risk, and test-data setup/teardown strategy. Also applies to: 358-385
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductApiPerformanceTest.java` around lines 264 - 273, The MISS measurement is being polluted by the warmup calls inside measureSingleApi; create a MISS-only path by either adding a boolean parameter to measureSingleApi (e.g., measureSingleApi(label, path, params, boolean warmup)) or implementing a new method measureSingleApiMiss that skips the warmup loop, call redisCleanUp.truncateAll() immediately before invoking the MISS method, and after the single request verify Redis key count is zero before the request and non-zero only after the request to assert correct behavior; update the loops that iterate over listApis and detailApis to call the new MISS-only method (or pass warmup=false) while leaving the existing warmup-enabled calls for HIT measurements.
apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java (1)
486-620: ⚠️ Potential issue | 🟡 Minor — searchProductIds() lacks LIKES_DESC sort verification, weakening regression detection.
In production the ID-list cache branches per sortType, yet this block only verifies PRICE_ASC. A regression in the likes-ordered query or its index would go undetected. Add a searchProductIdsSortByLikesDesc test verifying that product IDs with higher likeCount come first, together with totalElements. If possible, also pin the secondary sort order for tied likeCount values in the expected results.
As per coding guidelines, "**/*Test*.java: unit tests should cover boundary values, failure cases, and exception flows."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java` around lines 486 - 620, Add a new unit test method to verify searchProductIds sorts by likes descending: create three products (use saveProductWithReadModel/saveDeletedProductWithReadModel as needed) with varying like counts and a deterministic secondary sort (e.g., id or price) for ties, call productQueryPort.searchProductIds(new ProductSearchCriteria(null, ProductSortType.LIKES_DESC), new PageCriteria(0, 10)), and assert that result.ids() orders by highest likeCount first and result.totalElements() equals the total active products; name the test searchProductIdsSortByLikesDesc and place it inside the existing SearchProductIdsTest nested class.
apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java (2)
177-187: ⚠️ Potential issue | 🟠 Major — The PER async-refresh path allows duplicate loads and null cache writes.
Lines 179-186 execute the async loader directly on every near-TTL hit, so duplicate DB reads pile up on hot keys. Line 182 also stores the result even when fresh == null, so a "null" payload can enter the cache, degrading later read quality and adding log noise.
Wrap the async refresh in the per-key lock as well, re-check inside the lock, and store only when fresh != null; when fresh == null, skip the write (or adopt an explicit sentinel policy) consistently. Additionally, add tests verifying that concurrent requests for the same key (e.g., 100 of them) invoke the loader once, and that a null return never records "null" in the cache.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java` around lines 177 - 187, The PER async refresh currently calls loader.get() directly inside CompletableFuture.runAsync (using perExecutor) which allows duplicate DB loads under hot keys and writes null values; change the async path in the block that checks shouldEarlyRefresh(key, ttl) to acquire the existing per-key lock used elsewhere (or introduce a key-level lock), re-check cache/TTL inside the lock to avoid duplicate loads, call loader.get() only when needed, and call put(key, fresh, ttl) only if fresh != null (skip or store a sentinel otherwise); also add a unit/integration test that simulates many concurrent requests for the same key and asserts loader was invoked exactly once and that a null return does not result in a "null" payload in cache.
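The locked double-check plus null-skip the comment asks for can be sketched independently of the PR's ProductCacheManager (all names below are illustrative, and in-memory maps stand in for Redis):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Per-key lock with an in-lock re-check: concurrent refreshes of one key call
// the loader exactly once, and a null loader result is never cached.
public class PerKeyRefreshDemo {
    static final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    static String getOrLoad(String key, Supplier<String> loader) {
        String hit = cache.get(key);
        if (hit != null) return hit;
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            String again = cache.get(key);            // double-check inside the lock
            if (again != null) return again;
            String fresh = loader.get();
            if (fresh != null) cache.put(key, fresh); // skip the write on null
            return fresh;
        } finally {
            lock.unlock();
        }
    }

    static int loaderCallsUnderContention(int requests) throws InterruptedException {
        String key = "product:" + System.nanoTime(); // unique key per run
        AtomicInteger loads = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(16);
        CountDownLatch start = new CountDownLatch(1);
        for (int i = 0; i < requests; i++) {
            pool.submit(() -> {
                try { start.await(); } catch (InterruptedException ignored) { }
                getOrLoad(key, () -> { loads.incrementAndGet(); return "dto"; });
            });
        }
        start.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return loads.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(loaderCallsUnderContention(100)); // 1
        String key = "missing:" + System.nanoTime();
        getOrLoad(key, () -> null);
        System.out.println(cache.containsKey(key)); // false — no "null" payload
    }
}
```

The same two assertions (one loader call under contention, no entry after a null load) are what the suggested unit tests should enforce against the real cache manager.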
154-156: ⚠️ Potential issue | 🟠 Major — The double-check inside the lock must read from the master, not a replica.
get(key, type) at Line 155 and Line 196 can return a miss under replica lag, so the loader is invoked again even when the master already holds the value. That weakens the stampede defense and increases DB load.
Add a master-bound lookup helper dedicated to the in-lock re-check and use it on that path. Also add a test for the "replica miss + master hit" scenario verifying that concurrent requests trigger only one loader call.
Also applies to: 195-197
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java` around lines 154 - 156, The double-check inside the lock in ProductCacheManager currently calls get(key, type), which may hit a replica and return a miss even when the master already has the value; replace that call with a master-bound lookup helper (e.g., add a private masterGet(key, type) or getFromMaster(...) that forces master reads) and use that helper for the lock-time re-check in both places (the blocks around the existing get(...) calls at lines ~155 and ~196). Update the loader-stampede test to simulate a replica miss + master hit concurrent scenario and assert the loader is invoked exactly once, covering the case where concurrent requests see replica miss but master has the data. Ensure the new helper reuses existing cache/key resolution logic but binds to master read path only.
apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java (1)
103-115: ⚠️ Potential issue | 🟠 Major — When partial-miss recovery fails, the page size and total diverge.
If nulls are removed at lines 109-113 but line 115 still returns idList.totalElements() as-is, the page length and the total stay inconsistent for as long as the stale ID list survives, breaking user pagination and infinite scroll. When fromDb.size() < missedIds.size() is detected, immediately invalidate the ID-list cache and re-query the DB path once, recomputing content and totalElements together. Also add a test for the "ID-list cache hit + one detail deleted" case to verify the two values stay consistent.
Suggested fix
 List<Long> missedIds = extractMissedIds(idList.ids(), cached);
 if (!missedIds.isEmpty()) {
     List<ProductCacheDto> fromDb = loadAndCacheDetails(missedIds);
+    if (fromDb.size() < missedIds.size()) {
+        productCacheManager.evict(idListKey);
+        return searchFromDb(brandId, sortType, page, size);
+    }
     cached = mergeInOrder(idList.ids(), cached, fromDb);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java` around lines 103 - 115, Detect when a partial miss occurs by checking if loadAndCacheDetails(missedIds).size() < missedIds.size(); in that case invalidate the ID-list cache and re-query the ID list from the DB (so you can recompute both content and total) rather than returning idList.totalElements() from the stale cache; then rebuild cached via mergeInOrder/extractMissedIds and produce ProductPageOutDto with the fresh total and content. Also add a unit/integration test covering the “ID list cache hit + one detailed record deleted” scenario to assert content.size() and totalElements() remain consistent.
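The fallback rule above (detect fromDb.size() < missedIds.size(), then evict and recompute) can be sketched with plain collections; resolve and its parameters are hypothetical stand-ins for the service and its two cache layers, not the PR's actual code:

```java
import java.util.*;

// Sketch: if the DB returns fewer details than were missed, the cached ID list
// is stale, so the page and total must be rebuilt from the DB path instead of
// reusing the stale cached total.
public class PartialMissDemo {
    record Page(List<Long> ids, long total) {}

    static Page resolve(List<Long> cachedIds, long cachedTotal,
                        Set<Long> detailCache, Set<Long> db) {
        List<Long> missed = cachedIds.stream()
                .filter(id -> !detailCache.contains(id)).toList();
        List<Long> fromDb = missed.stream().filter(db::contains).toList();
        if (fromDb.size() < missed.size()) {
            // stale ID list: evict it and recompute page + total from the DB
            List<Long> fresh = db.stream().sorted().toList();
            return new Page(fresh, fresh.size());
        }
        return new Page(cachedIds, cachedTotal);
    }

    public static void main(String[] args) {
        // product 3 was deleted: still in the cached ID list, absent everywhere else
        Page page = resolve(List.of(1L, 2L, 3L), 3,
                Set.of(1L, 2L), Set.of(1L, 2L));
        System.out.println(page.ids());   // [1, 2]
        System.out.println(page.total()); // 2 — size and total stay consistent
    }
}
```

Without the size check, the same scenario would return two items with a total of 3, which is exactly the pagination drift the review flags.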
🧹 Nitpick comments (1)
apps/commerce-api/src/main/java/com/loopers/ordering/order/infrastructure/entity/OrderItemEntity.java (1)
5-5: Wildcard imports (jakarta.persistence.*) are less readable and maintainable than explicit imports. Before this change, individual imports such as Column, Entity, and Table were stated explicitly; with the wildcard it is harder to tell which classes are actually used. The IDE's "Organize Imports" feature or static-analysis tools may raise warnings, and unneeded classes mixed in later can cause compile conflicts or confusion.
Suggested fix
-import jakarta.persistence.*;
+import jakarta.persistence.Column;
+import jakarta.persistence.Entity;
+import jakarta.persistence.Index;
+import jakarta.persistence.Table;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/ordering/order/infrastructure/entity/OrderItemEntity.java` at line 5, Replace the wildcard Jakarta Persistence import with explicit imports of only the annotations/types used by OrderItemEntity (e.g., Entity, Table, Id, GeneratedValue, Column, ManyToOne, JoinColumn, FetchType, CascadeType, etc. as applicable) so the class-level annotations on OrderItemEntity and its fields remain clear and IDE/static-analysis warnings are avoided; use your IDE's "Organize Imports" or manually list the exact jakarta.persistence.* symbols referenced in the file to restore explicit imports.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/cart/cart/infrastructure/entity/CartItemEntity.java`:
- Around line 19-27: The added indexes (`@Index` name="idx_cart_user_selected",
columnList="user_id, selected" and `@Index` name="idx_cart_product",
columnList="product_id") fit the query patterns, but online migration and actual
index usage must be verified for the production deploy: to avoid long table
locks in the production DDL, apply the indexes with ALGORITHM=INPLACE, LOCK=NONE
on MySQL 8+ or with pt-online-schema-change on older versions; before and after
the deploy, run EXPLAIN SELECT ... WHERE user_id = ? AND selected = true; and
EXPLAIN DELETE ... WHERE product_id = ?; to confirm the optimizer uses
idx_cart_user_selected and idx_cart_product respectively; if an index is
ignored, regenerate statistics with ANALYZE TABLE cart_items; also review the
execution plans and performance of the findByUserIdAndIdIn and
deleteAllByUserIdAndIdIn methods and consider any additional index they need
(e.g., a user_id+id combination).
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductReadModelJpaRepository.java`:
- Around line 27-49: The bulk update queries (updateBrandNameByBrandId,
increaseLikeCount, decreaseLikeCount, updateStock, softDelete) do not set
updatedAt and updateStock/softDelete swallow zero-row updates; modify each
`@Query` to set e.updatedAt = :updatedAt and add a `@Param`("updatedAt")
ZonedDateTime updatedAt parameter, change updateStock and softDelete signatures
to return int (affected rows) instead of void so callers can detect 0 updates,
and update call sites to pass the current updatedAt and handle non‑positive
return values (throw or log and prevent cache writes); also add tests asserting
updatedAt advances and that callers stop cache updates when the repository
returns 0.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java`:
- Around line 189-203: getOrderSpecifiers currently appends a secondary
tie-breaker readModel.id.desc() which likely causes filesort because id isn't
part of the existing read-model indexes; change the tie-breaker to
readModel.id.asc() in getOrderSpecifiers to align with index ordering (or
alternatively add id to the end of the relevant indexes if you must keep DESC),
then run local EXPLAIN for LATEST, PRICE_ASC and LIKES_DESC queries to confirm
no "Using filesort" appears.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/repository/ProductReadModelRepositoryImpl.java`:
- Around line 25-35: The current save(Product product, String brandName) reads
an entity then calls jpaRepository.save(entity), which can overwrite a
concurrently-updated likeCount (e.g., increaseLikeCount()/decreaseLikeCount())
with a stale snapshot; change save to do an insert-or-partial-update that
preserves existing createdAt and likeCount when the row exists: detect existence
via jpaRepository.existsById(product.getId()) (or findById) and then perform
either an INSERT or a targeted UPDATE that only updates mutable read-model
fields (name, price, brandName, etc.) but explicitly excludes/does not touch
like_count and created_at (implement as a `@Modifying` JPQL/SQL update method or a
repository upsert that merges only specific columns), and add integration tests
around syncReadModel() running in parallel with like count increments to ensure
no loss of likeCount.
In
`@apps/commerce-api/src/main/java/com/loopers/coupon/coupontemplate/infrastructure/entity/CouponTemplateEntity.java`:
- Around line 24-27: The single-column index on deleted_at causes filesort for
queries from
CouponTemplateQueryRepositoryImpl.findAllByDeletedAtIsNull(pageable) which run
WHERE deleted_at IS NULL ORDER BY id DESC LIMIT; update the index definition in
CouponTemplateEntity (the `@Table/`@Index on the class) to a composite index on
(deleted_at, id) so the WHERE and ORDER BY can be served by the index, deploy
the change, run EXPLAIN on the paged query to confirm "Using filesort" is
removed and measure p95 page latency before/after to validate the improvement.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java`:
- Around line 394-417: The tests refreshIdListCacheForAllSorts() and
refreshIdListCacheForSort() only assert call counts (times(12), times(4)) which
allows duplicate/incorrect keys to pass; update these tests to capture and
assert the exact cache-key/criteria combos passed to
productCacheManager.refreshIdList by using an ArgumentCaptor for the
key/criteria parameter in the tests for
productCommandService.refreshIdListCacheForAllSorts and
productCommandService.refreshIdListCacheForSort, build the expected set of
(sort, page, brand/all) combinations (based on ProductSortType and your
cacheable page range), assert that the captured arguments exactly match that
expected set (each combination appears exactly once), and keep the original call
count assertion as a sanity check.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/cache/lock/LocalCacheLockTest.java`:
- Around line 34-83: The test only asserts loaderCallCount == threadCount, which
won't fail if locking is broken; update LocalCacheLockTest so the loader also
tracks current concurrent executions and records the peak (add AtomicInteger
concurrentCount and AtomicInteger maxConcurrent updated inside the loader passed
to executeWithLock), incrementing concurrentCount at start, updating
maxConcurrent with Math.max, and decrementing concurrentCount at end; then
assert maxConcurrent.get() == 1 and loaderCallCount.get() == threadCount to
ensure serial execution for the same key; also rename the test method (currently
sameKeyConcurrentRequests_loaderCalledOnce) to reflect intent (e.g.,
sameKeyConcurrentRequests_serialExecutionEnsured) and keep the existing
CountDownLatch/startLatch pattern.
In
`@apps/commerce-api/src/test/java/com/loopers/engagement/productlike/infrastructure/acl/catalog/ProductLikeCountSyncerImplTest.java`:
- Around line 40-51: Add tests in ProductLikeCountSyncerImplTest to cover
failure and boundary cases: for both
productLikeCountSyncerImpl.increaseLikeCount and .decreaseLikeCount, add tests
that stub productCommandFacade.increaseLikeCount / .decreaseLikeCount to throw a
runtime exception (e.g., new RuntimeException("...")) and assert that the Syncer
propagates that exception (use assertThrows) rather than swallowing or
converting it; also add tests for null productId to explicitly assert the
expected behavior (either that the Syncer throws a NullPointerException or that
it delegates and lets the facade handle it), so tests validate exception
propagation and null-boundary handling.
---
Outside diff comments:
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java`:
- Around line 233-269: Both failure-path tests for decreaseStock
(PRODUCT_NOT_FOUND and PRODUCT_OUT_OF_STOCK) must also assert that no
side-effect methods were invoked; after asserting the CoreException, add Mockito
verifications that productSave/updateStock/refreshProductDetail (e.g.,
productRepository.save(...), productCommandRepository.updateStock(...), and
productDetailCache.refreshProductDetail(...)) were never called using
verify(..., never()) or verifyNoInteractions(...) so each test verifies zero
Read Model/cache updates in its respective failure path. Ensure you do this in
both decreaseStockProductNotFound and decreaseStockOutOfStock tests and keep the
exception assertions intact.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductLikeCountConcurrencyTest.java`:
- Around line 97-124: Update the concurrentDecreaseLikeCount test (or add a new
test) to verify the non-negative guard by sending more decrease requests than
the initial likeCount: initialize the read model via saveReadModel(...) with
likeCount=10, spawn e.g. 20 concurrent calls to
productCommandService.decreaseLikeCount(productId) (use ExecutorService and
collect futures), wait for completion and assert no futures threw exceptions,
then fetch the ProductReadModelEntity via
productReadModelJpaRepository.findById(productId) and assert
result.getLikeCount() == 0L and result.getLikeCount() >= 0L to ensure no
negative values; keep references to the existing helpers (saveReadModel,
productCommandService.decreaseLikeCount, productReadModelJpaRepository) so the
change is localized.
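The non-negative guarantee that test should pin down can be simulated with a CAS-based floor; the AtomicLong here is a stand-in for the conditional UPDATE on the read model, not the repository code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

// 20 concurrent decrements against likeCount=10 must end at exactly 0,
// never below: each update decrements only while the value is positive.
public class NonNegativeDecrementDemo {
    static long decrementAllToFloor(long initial, int requests) throws InterruptedException {
        AtomicLong likeCount = new AtomicLong(initial);
        ExecutorService pool = Executors.newFixedThreadPool(8);
        CountDownLatch done = new CountDownLatch(requests);
        for (int i = 0; i < requests; i++) {
            pool.submit(() -> {
                likeCount.updateAndGet(v -> v > 0 ? v - 1 : 0); // clamped decrement
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return likeCount.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(decrementAllToFloor(10, 20)); // 0 — floored, never negative
    }
}
```

The suggested integration test makes the same assertion against the real read model: after 20 decrements on likeCount=10, getLikeCount() must be exactly 0.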
---
Duplicate comments:
In
`@apps/commerce-api/src/benchmark/java/com/loopers/catalog/product/infrastructure/ProductApiPerformanceTest.java`:
- Around line 325-385: measureSingleApiNoCache and measureSingleApi currently
treat any request as successful and record latency even when HTTP 4xx/5xx are
returned; update both methods to call mockMvc.perform(get(url)).andReturn(),
read the response status via getResponse().getStatus(), and only record the
measured time into the times[] array when status is 2xx; for non-2xx responses
either increment an errors counter (or throw an AssertionError to fail the run)
and do not include that run's duration in the latency stats, then compute
averages based on successful run count; also add a unit-case in the test suite
that triggers 404 and 500 responses to assert error rate is aggregated
correctly.
- Around line 264-273: The MISS measurement is being polluted by the warmup
calls inside measureSingleApi; create a MISS-only path by either adding a
boolean parameter to measureSingleApi (e.g., measureSingleApi(label, path,
params, boolean warmup)) or implementing a new method measureSingleApiMiss that
skips the warmup loop, call redisCleanUp.truncateAll() immediately before
invoking the MISS method, and after the single request verify Redis key count is
zero before the request and non-zero only after the request to assert correct
behavior; update the loops that iterate over listApis and detailApis to call the
new MISS-only method (or pass warmup=false) while leaving the existing
warmup-enabled calls for HIT measurements.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/brand/application/facade/BrandCommandFacade.java`:
- Around line 64-68: The brand update currently refreshes product detail cache
inside the BrandCommandFacade transaction (calling
productQueryService.findActiveIdsByBrandId and
productCommandService.refreshProductDetailCache in the same method that calls
syncBrandNameInReadModel), which can cause cache/DB inconsistency on rollback;
move the loop that invokes productCommandService.refreshProductDetailCache for
each productId out of the transactional boundary by registering a
TransactionSynchronizationManager.registerSynchronization callback or by
publishing an event handled with `@TransactionalEventListener`(phase =
AFTER_COMMIT) to run after commit, and add tests that verify (1) on commit all
active products returned by productQueryService.findActiveIdsByBrandId have
their caches refreshed and (2) on rollback no cache refresh is executed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/facade/ProductCommandFacade.java`:
- Around line 48-56: The current write flow in ProductCommandFacade calls
productCommandService.refreshProductDetailCache and
productCommandService.refreshIdListCacheForAllSorts (and
deleteProductDetailCache on deletes) inside the transaction; move those
cache-refresh calls out of the transaction and invoke them only after successful
commit via an after-commit hook or transactional event listener (e.g.,
TransactionSynchronizationManager/TransactionSynchronizationAdapter or
`@TransactionalEventListener`) while leaving
productCommandService.syncReadModel(savedProduct, ...) and the subsequent
productQueryService.getAdminProductDetail(...) unchanged; apply the same change
to the other write paths that call
refreshProductDetailCache/refreshIdListCacheForAllSorts/deleteProductDetailCache
(the create/update/delete methods) and add tests that assert cache is unchanged
on pre-commit exceptions and is updated only after a successful commit.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductCommandService.java`:
- Around line 128-170: The cache writes currently occur inside the transactional
methods increaseLikeCount, decreaseLikeCount, and decreaseStock via
productCacheManager.refreshProductDetail(...) which calls the loader
immediately, causing stale/ghost cache on rollback; change the flow so cache
refresh runs only after successful commit by deferring the call using
TransactionSynchronizationManager.registerSynchronization (afterCommit) or by
publishing an application event and handling it with an
`@TransactionalEventListener`(phase = AFTER_COMMIT) that invokes
productCacheManager.refreshProductDetail with
productQueryPort.findProductCacheDtoById as the loader; update the three methods
to remove direct calls to refreshProductDetail and instead publish the
post-commit event (or register the synchronization), and add tests that trigger
an exception after publishing to verify Redis is not modified on rollback.
- Around line 128-148: increaseLikeCount and decreaseLikeCount currently refresh
only the product detail cache via productCacheManager.refreshProductDetail,
leaving the LIKES_DESC list caches stale; update both methods to also evict or
version-bump the LIKES_DESC list cache entries (brand-specific and global) so
the popularity-sorted lists are recomputed on next read (use the same cache
manager or listCache abstraction used elsewhere), and add a test that simulates
two products crossing likeCount values and asserts the first-page LIKES_DESC
ordering flips immediately after the like change.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/application/service/ProductQueryService.java`:
- Around line 103-115: Detect when a partial miss occurs by checking if
loadAndCacheDetails(missedIds).size() < missedIds.size(); in that case
invalidate the ID-list cache and re-query the ID list from the DB (so you can
recompute both content and total) rather than returning idList.totalElements()
from the stale cache; then rebuild cached via mergeInOrder/extractMissedIds and
produce ProductPageOutDto with the fresh total and content. Also add a
unit/integration test covering the “ID list cache hit + one detailed record
deleted” scenario to assert content.size() and totalElements() remain
consistent.
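The partial-miss rule above (the detail load returns fewer rows than the cached ID list) can be sketched as a small decision helper. Everything here is a simplified stand-in for the real service: Page, loadDetails, invalidateIdList, and requeryFromDb are hypothetical names.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Supplier;

// Sketch of partial-miss handling: if some detail rows backing the cached ID
// list no longer load, drop the stale ID-list entry and recompute ids + total
// from the DB instead of trusting the cached totalElements.
class PartialMissHandler {
    record Page(List<Long> ids, long total) {}

    Page resolve(Page cachedIdList,
                 Function<List<Long>, List<Long>> loadDetails,   // ids actually found
                 Runnable invalidateIdList,
                 Supplier<Page> requeryFromDb) {
        List<Long> found = loadDetails.apply(cachedIdList.ids());
        if (found.size() == cachedIdList.ids().size()) {
            return cachedIdList;             // full hit: cached total is still valid
        }
        invalidateIdList.run();              // partial miss: the ID list is stale
        return requeryFromDb.get();          // fresh ids and fresh total
    }
}
```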
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/lock/RedisCacheLock.java`:
- Around line 56-63: In RedisCacheLock, do not call loader.get() immediately
after failing to acquire the lock or when catching InterruptedException; instead
change the control flow to re-check the cache and attempt to re-acquire the lock
(e.g., loop: check cache, if miss try acquireLock(lockKey) again, if acquired
run loader and populate cache, else waitForLockRelease(lockKey) and repeat) so
that waitForLockRelease(lockKey) paths and InterruptedException handling do not
directly invoke loader.get() and thus preserve stampede protection; update the
logic in the methods around the else branch and the catch (InterruptedException)
in RedisCacheLock to implement this retry/recheck pattern and add tests
verifying single loader invocation under concurrent misses and during
interrupts.
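The recheck-and-retry control flow described above can be sketched without Redis: a thread that loses the lock never calls the loader; it waits briefly, re-checks the cache, and loops. A ConcurrentHashMap and an AtomicBoolean stand in for Redis and the distributed lock here, so this shows only the shape of the logic, not a production implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

// Recheck-and-retry stampede guard: losers of the lock race never invoke the
// loader directly; they wait, re-check the cache, and try again.
class StampedeGuard {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final AtomicBoolean lock = new AtomicBoolean(false);

    String getOrLoad(String key, Supplier<String> loader) {
        while (true) {
            String cached = cache.get(key);                // 1. re-check the cache
            if (cached != null) return cached;
            if (lock.compareAndSet(false, true)) {         // 2. try to acquire the lock
                try {
                    String again = cache.get(key);         // 3. double-check inside the lock
                    if (again != null) return again;
                    String fresh = loader.get();           // 4. only the lock holder loads
                    cache.put(key, fresh);
                    return fresh;
                } finally {
                    lock.set(false);
                }
            }
            try {
                Thread.sleep(10);                          // 5. lost the race: wait, then loop
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();        // do NOT fall through to loader.get()
                throw new IllegalStateException("interrupted while waiting for cache", e);
            }
        }
    }
}
```

Because the cache is re-checked both before and after winning the lock, the loader runs at most once per key even when many callers miss concurrently.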
- Around line 49-50: The current lock acquisition stores a fixed "1" and release
lacks ownership check; change RedisCacheLock to generate and store a unique
ownerId (UUID) as the value in redisTemplate.opsForValue().setIfAbsent(lockKey,
ownerId, LOCK_TTL) (where lockKey and LOCK_TTL are used), and implement unlock
to run an atomic Lua compare-and-delete (DEL only if value == ownerId) via
redisTemplate.execute or script execution APIs; update any related methods
(e.g., the release/unlock method around lines 65-68 referenced) to pass and
verify ownerId, and add a test that delays the loader longer than LOCK_TTL to
simulate re-acquisition and assert that the original owner’s unlock does NOT
remove the new owner’s lock.
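The ownership check the comment asks for can be sketched compactly. Against real Redis the release must be a single atomic Lua compare-and-delete; below, a ConcurrentMap stands in for Redis so the compare-and-delete semantics are visible and testable without a server (remove(key, value) is the map-level analogue of the Lua script).

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Ownership-checked lock release. Against real Redis the release must be one
// atomic Lua compare-and-delete, e.g.:
//   if redis.call('get', KEYS[1]) == ARGV[1]
//   then return redis.call('del', KEYS[1]) else return 0 end
// Here a ConcurrentMap models Redis to show the semantics only.
class OwnedLock {
    private final ConcurrentMap<String, String> store = new ConcurrentHashMap<>();

    // Models setIfAbsent(lockKey, ownerId, LOCK_TTL); the TTL is omitted here.
    // Returns the ownerId on success, or null if the lock is already held.
    String tryLock(String lockKey) {
        String ownerId = UUID.randomUUID().toString();
        return store.putIfAbsent(lockKey, ownerId) == null ? ownerId : null;
    }

    // DEL only if the stored value is still our ownerId (compare-and-delete).
    boolean unlock(String lockKey, String ownerId) {
        return store.remove(lockKey, ownerId);
    }
}
```

A stale owner (e.g., one whose lock expired and was re-acquired) can no longer delete the new owner's lock, which is exactly the regression the suggested TTL-expiry test should pin down.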
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/cache/ProductCacheManager.java`:
- Around line 177-187: The PER async refresh currently calls loader.get()
directly inside CompletableFuture.runAsync (using perExecutor) which allows
duplicate DB loads under hot keys and writes null values; change the async path
in the block that checks shouldEarlyRefresh(key, ttl) to acquire the existing
per-key lock used elsewhere (or introduce a key-level lock), re-check cache/TTL
inside the lock to avoid duplicate loads, call loader.get() only when needed,
and call put(key, fresh, ttl) only if fresh != null (skip or store a sentinel
otherwise); also add a unit/integration test that simulates many concurrent
requests for the same key and asserts loader was invoked exactly once and that a
null return does not result in a "null" payload in cache.
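The per-key single-flight guard plus null check suggested above can be sketched in plain Java. An AtomicBoolean per key stands in for the real key-level lock, and the async executor (perExecutor) is omitted; the names are illustrative, not the actual ProductCacheManager API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Per-key single-flight guard for early (PER) refresh: at most one in-flight
// refresh per key, and a null loader result is never written to the cache.
class EarlyRefreshGuard {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, AtomicBoolean> inFlight = new ConcurrentHashMap<>();
    final AtomicInteger refreshCount = new AtomicInteger(); // exposed for the demo

    void maybeRefresh(String key, Supplier<String> loader) {
        AtomicBoolean guard = inFlight.computeIfAbsent(key, k -> new AtomicBoolean(false));
        if (!guard.compareAndSet(false, true)) {
            return;                                   // another refresh is already running
        }
        try {
            refreshCount.incrementAndGet();
            String fresh = loader.get();
            if (fresh != null) {
                cache.put(key, fresh);                // skip null: no "null" payload cached
            }
        } finally {
            guard.set(false);
        }
    }

    String get(String key) {
        return cache.get(key);
    }
}
```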
- Around line 154-156: The double-check inside the lock in ProductCacheManager
currently calls get(key, type), which may hit a replica and return a miss even
when the master already has the value; replace that call with a master-bound
lookup helper (e.g., add a private masterGet(key, type) or getFromMaster(...)
that forces master reads) and use that helper for the lock-time re-check in both
places (the blocks around the existing get(...) calls at lines ~155 and ~196).
Update the loader-stampede test to simulate a replica miss + master hit
concurrent scenario and assert the loader is invoked exactly once, covering the
case where concurrent requests see replica miss but master has the data. Ensure
the new helper reuses existing cache/key resolution logic but binds to master
read path only.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductQueryServiceTest.java`:
- Around line 285-306: The test searchProductsEmptyIdList currently only
verifies productCacheManager.getOrLoad(...) was called but doesn't assert that
the second-layer lookup is skipped; update the test to also verify that
productCacheManager.mgetProductDetails(...) is never invoked (e.g., add
verify(productCacheManager, never()).mgetProductDetails(any()) or alternatively
verifyNoMoreInteractions(productCacheManager)) so the empty-ID-list optimization
is enforced; keep the existing arrangement and assertions and add this extra
verification step referencing the searchProductsEmptyIdList test and the
productCacheManager.mgetProductDetails method.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/infrastructure/query/ProductQueryPortImplTest.java`:
- Around line 486-620: Add a new unit test method to verify searchProductIds
sorts by likes descending: create three products (use
saveProductWithReadModel/saveDeletedProductWithReadModel as needed) with varying
like counts and a deterministic secondary sort (e.g., id or price) for ties,
call productQueryPort.searchProductIds(new ProductSearchCriteria(null,
ProductSortType.LIKES_DESC), new PageCriteria(0, 10)), and assert that
result.ids() orders by highest likeCount first and result.totalElements() equals
the total active products; name the test searchProductIdsSortByLikesDesc and
place it inside the existing SearchProductIdsTest nested class.
In
`@apps/commerce-api/src/test/java/com/loopers/ordering/order/infrastructure/acl/catalog/OrderProductReaderImplTest.java`:
- Around line 42-74: Add two tests around OrderProductReaderImpl.readProducts to
harden ACL boundaries: (1) in the existing readProductsSuccess test, after the
assertions verify that productQueryFacade.findActiveByIds(productIds) was
invoked exactly once and there were no further interactions (use verify and
verifyNoMoreInteractions against productQueryFacade) to lock the call count; (2)
add a new test where productQueryFacade.findActiveByIds(productIds) is mocked to
throw an exception and assert readProducts either propagates or converts that
exception according to the module policy (assertThrows against
orderProductReaderImpl.readProducts), referencing the methods
orderProductReaderImpl.readProducts and productQueryFacade.findActiveByIds to
locate the code under test.
In `@round5-docs/04-to-be-index-visualization.html`:
- Around line 145-155: Replace all literal "->" occurrences in the HTML (e.g.,
inside <div class="context"> and other <div class="kpi"> blocks shown in the
diff) with an HTML-safe arrow such as "→" or the entity "&rarr;" (or use
"-&gt;" if you prefer the original glyph with the '>' escaped), ensuring you do
not leave raw "->" characters in text nodes; apply this change consistently
across the file (including the other mentioned locations) and re-run the static
check with htmlhint round5-docs/04-to-be-index-visualization.html to verify zero
errors before committing.
In `@round5-docs/05-to-be-cache-visualization.html`:
- Around line 307-339: The architecture doc currently implies a single-node
LocalCacheLock path and SCAN+DEL invalidation which can mislead about
distributed behavior; update the diagram and text to clearly separate
single-node (LocalCacheLock) vs distributed (RedisCacheLock) flows, show lock
acquisition success and failure fallback paths (e.g., when ProductCacheManager
fails to get Redis lock fall back to local lock or serve stale), replace blanket
"SCAN + DEL" with a version-key or publish/subscribe invalidation path and
describe fallback (e.g., fall back to pattern-based background invalidation only
if safe), and add a short checklist mapping the doc steps to the actual
ProductCacheManager and CacheLock (LocalCacheLock, RedisCacheLock)
implementations to verify behavior parity.
- Around line 354-361: Update the TTL and consistency window numbers in the
document to match the actual constants used in code (e.g., ProductCacheConstants
detail and list TTLs plus jitter) and replace the generic "SCAN+DEL"
invalidation text with the real current strategy name used in code (e.g., the
method or routine that performs list invalidation). Locate referenced constants
and methods by name (ProductCacheConstants, the list invalidation
function/method, and any jitter constant) and make the table and conclusions use
those exact numeric values and wording; then add an automated drift check that
compares the documented TTL/consistency values against ProductCacheConstants at
build/test time and fails or warns if they diverge.
- Around line 399-403: The KPI summary text is hardcoded and must be computed
from the same chart dataset constant; replace the static list items (the h3 and
the three <li> values) with a renderKpiSummary function that calls
computeKpiFromDataset(chartData) (or the existing dataset constant) to derive
Cache Hit latency, list API hit/miss ranges, and burst metrics, and injects
those computed values into the DOM (target the current h3 and list container).
Also add a small automated check function validateKpiMatchesChart(chartData)
that compares rendered KPI numbers to the chartData ranges and fails CI or logs
an error if they diverge, then invoke that validation after initial render.
Ensure the unique identifiers used are the DOM container where the h3 and <ul>
currently live (assign an id if needed) and the chartData/dataset variable used
by the charts.
- Line 7: The HTML currently loads Chart.js only via an external CDN script tag
(https://cdn.jsdelivr.net/npm/chart.js@4.4.7/dist/chart.umd.min.js), which
breaks reproducibility; either vendor Chart.js into the repo and change the
script src to the local asset (e.g., /assets/vendor/chart.umd.min.js) or, if you
must keep the CDN, add SRI by adding an integrity and crossorigin attribute for
the exact version and implement a load-failure fallback: attach an onerror
handler to the script element that replaces the chart area with a clear
offline/fallback message and logs the failure, and add a simple synchronous
check to ensure Chart is defined before attempting to render charts; finally,
verify in browser offline mode that the chart or the fallback message appears.
- Around line 314-339: Replace all raw occurrences of the arrow token "-->" in
the HTML markup (including inside span elements like <span
class="highlight">Client</span> --> ...) with an HTML-safe alternative (either
replace "-->" with "--&gt;" or with the unicode arrow "→") so the document no
longer contains an unescaped '>' in text nodes; update every instance (PER,
cache miss, error paths, comments, and cache invalidation lines) and then re-run
htmlhint to verify zero errors.
---
Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/ordering/order/infrastructure/entity/OrderItemEntity.java`:
- Line 5: Replace the wildcard Jakarta Persistence import with explicit imports
of only the annotations/types used by OrderItemEntity (e.g., Entity, Table, Id,
GeneratedValue, Column, ManyToOne, JoinColumn, FetchType, CascadeType, etc. as
applicable) so the class-level annotations on OrderItemEntity and its fields
remain clear and IDE/static-analysis warnings are avoided; use your IDE's
"Organize Imports" or manually list the exact jakarta.persistence.* symbols
referenced in the file to restore explicit imports.
@Modifying
@Query("UPDATE ProductReadModelEntity e SET e.brandName = :brandName WHERE e.brandId = :brandId")
void updateBrandNameByBrandId(@Param("brandId") Long brandId, @Param("brandName") String brandName);

// 2. Atomic like-count increment (returns affected rows — used to detect a missing target)
@Modifying
@Query("UPDATE ProductReadModelEntity e SET e.likeCount = e.likeCount + 1 WHERE e.id = :id")
int increaseLikeCount(@Param("id") Long id);

// 3. Atomic like-count decrement (never goes below 0, returns affected rows)
@Modifying
@Query("UPDATE ProductReadModelEntity e SET e.likeCount = e.likeCount - 1 WHERE e.id = :id AND e.likeCount > 0")
int decreaseLikeCount(@Param("id") Long id);

// 4. Stock update
@Modifying
@Query("UPDATE ProductReadModelEntity e SET e.stock = :stock WHERE e.id = :id")
void updateStock(@Param("id") Long id, @Param("stock") Long stock);

// 5. Soft delete (sets deletedAt)
@Modifying
@Query("UPDATE ProductReadModelEntity e SET e.deletedAt = :deletedAt WHERE e.id = :productId")
void softDelete(@Param("productId") Long productId, @Param("deletedAt") ZonedDateTime deletedAt);
The bulk sync queries hide drift and never record updatedAt.
The current update queries do not refresh updatedAt, and updateStock and softDelete in particular discard the affected-row count, so they succeed silently even when the target row does not exist. In production, a lost read-model row or a missed backfill would go undetected while only the cache is refreshed, leaving the DB, read model, and Redis permanently inconsistent. Include updatedAt = :updatedAt in every update query, and have the single-row updateStock and softDelete return int so that a zero-row update can be escalated to an exception or a warning. Also add tests that each update method advances updatedAt and that the caller aborts the cache refresh when zero rows were updated.
🛠️ Suggested fix
`@Modifying`
- `@Query`("UPDATE ProductReadModelEntity e SET e.stock = :stock WHERE e.id = :id")
- void updateStock(`@Param`("id") Long id, `@Param`("stock") Long stock);
+ `@Query`("""
+ UPDATE ProductReadModelEntity e
+ SET e.stock = :stock,
+ e.updatedAt = :updatedAt
+ WHERE e.id = :id
+ """)
+ int updateStock(`@Param`("id") Long id,
+ `@Param`("stock") Long stock,
+ `@Param`("updatedAt") ZonedDateTime updatedAt);
`@Modifying`
- `@Query`("UPDATE ProductReadModelEntity e SET e.deletedAt = :deletedAt WHERE e.id = :productId")
- void softDelete(`@Param`("productId") Long productId, `@Param`("deletedAt") ZonedDateTime deletedAt);
+ `@Query`("""
+ UPDATE ProductReadModelEntity e
+ SET e.deletedAt = :deletedAt,
+ e.updatedAt = :updatedAt
+ WHERE e.id = :productId
+ """)
+ int softDelete(`@Param`("productId") Long productId,
+ `@Param`("deletedAt") ZonedDateTime deletedAt,
+ `@Param`("updatedAt") ZonedDateTime updatedAt);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/jpa/ProductReadModelJpaRepository.java`
around lines 27 - 49, The bulk update queries (updateBrandNameByBrandId,
increaseLikeCount, decreaseLikeCount, updateStock, softDelete) do not set
updatedAt and updateStock/softDelete swallow zero-row updates; modify each
`@Query` to set e.updatedAt = :updatedAt and add a `@Param`("updatedAt")
ZonedDateTime updatedAt parameter, change updateStock and softDelete signatures
to return int (affected rows) instead of void so callers can detect 0 updates,
and update call sites to pass the current updatedAt and handle non‑positive
return values (throw or log and prevent cache writes); also add tests asserting
updatedAt advances and that callers stop cache updates when the repository
returns 0.
// Sort-condition mapping (tie-breaker: id descending on ties)
private OrderSpecifier<?>[] getOrderSpecifiers(ProductSortType sortType) {
    OrderSpecifier<?> primary;
    if (sortType == null) {
        primary = readModel.createdAt.desc();
    } else {
        primary = switch (sortType) {
            case LATEST -> readModel.createdAt.desc();
            case PRICE_ASC -> readModel.price.asc();
            case LIKES_DESC -> readModel.likeCount.desc();
        };
    }
    // tie-breaker: id descending on ties (newest products first, stable page boundaries)
    OrderSpecifier<?> secondary = readModel.id.desc();
    return new OrderSpecifier<?>[]{ primary, secondary };
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== ORDER BY specifiers in ProductQuerydslRepository ==="
sed -n '189,203p' apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java
echo
echo "=== product_read_model index definitions ==="
sed -n '1,120p' apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/entity/ProductReadModelEntity.java
🏁 Script executed:
# Check the overall structure of ProductQuerydslRepository
wc -l apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java
🏁 Script executed:
# Check the WHERE conditions and query-build logic used with PRICE_ASC
grep -n "PRICE_ASC\|where\|where(" apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java | head -40
🏁 Script executed:
# Check the context around getOrderSpecifiers and how the query is built
sed -n '150,250p' apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java
🏁 Script executed:
# Find the public methods that call getOrderSpecifiers
grep -n "getOrderSpecifiers\|findUserProducts\|findAdminProducts" apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java
🏁 Script executed:
# Review the user/admin product query methods in full (lines 50-130)
sed -n '50,140p' apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java
🏁 Script executed:
# Re-check all index definitions
sed -n '14,50p' apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/entity/ProductReadModelEntity.java
The tie-breaker can trigger a filesort for every sort option.
getOrderSpecifiers() currently appends id.desc() as the secondary sort for LATEST, PRICE_ASC, and LIKES_DESC alike. The read-model indexes, however — (brand_id, deleted_at, created_at), (deleted_at, created_at), (brand_id, price), (price) — end at the sort column and do not include id. With the secondary ORDER BY column missing from the index, every query is likely to fall back to a filesort.
Fix: either change the id DESC tie-breaker to id ASC, or append id to the end of every index to keep the current direction. The former is less invasive, so start by switching to secondary = readModel.id.asc(), then verify with EXPLAIN on a local DB that none of the sorts (LATEST, PRICE_ASC, LIKES_DESC) show "Using filesort".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/catalog/product/infrastructure/querydsl/ProductQuerydslRepository.java`
around lines 189 - 203, getOrderSpecifiers currently appends a secondary
tie-breaker readModel.id.desc() which likely causes filesort because id isn't
part of the existing read-model indexes; change the tie-breaker to
readModel.id.asc() in getOrderSpecifiers to align with index ordering (or
alternatively add id to the end of the relevant indexes if you must keep DESC),
then run local EXPLAIN for LATEST, PRICE_ASC and LIKES_DESC queries to confirm
no "Using filesort" appears.
@Table(name = "coupon_template", indexes = {
    // Active coupon template list: WHERE deleted_at IS NULL
    @Index(name = "idx_coupon_template_deleted", columnList = "deleted_at")
})
🧩 Analysis chain
🏁 Script executed:
# Find the CouponTemplateEntity and related files
find . -name "CouponTemplateEntity.java" -type f
find . -name "CouponTemplateQueryRepositoryImpl.java" -type f
find . -name "CouponTemplateRepository*.java" -type f
🏁 Script executed:
# Find any migration files related to coupon_template
find . -type f \( -name "*.sql" -o -name "*.yaml" -o -name "*.yml" \) | xargs grep -l "coupon_template" 2>/dev/null | head -20
🏁 Script executed:
# Read the CouponTemplateEntity to see the full index configuration
cat -n ./apps/commerce-api/src/main/java/com/loopers/coupon/coupontemplate/infrastructure/entity/CouponTemplateEntity.java | head -60
🏁 Script executed:
# Read the CouponTemplateQueryRepositoryImpl to see the actual query implementation
cat -n ./apps/commerce-api/src/main/java/com/loopers/coupon/coupontemplate/infrastructure/repository/CouponTemplateQueryRepositoryImpl.java
🏁 Script executed:
# Check SoftDeleteBaseEntity to see if it has any indexes
find . -name "SoftDeleteBaseEntity.java" -type f
🏁 Script executed:
# Also check if there are any migration/liquibase files for the coupon schema
find . -path "*/db/migration*" -o -path "*/liquibase*" | grep -E "\.(sql|xml|yaml|yml)$" | head -20
🏁 Script executed:
# Read SoftDeleteBaseEntity
cat -n ./modules/jpa/src/main/java/com/loopers/domain/SoftDeleteBaseEntity.java
Filesort in the delete-filtered, ID-ordered query due to a single-column index
CouponTemplateQueryRepositoryImpl's findAllByDeletedAtIsNull(pageable) executes as WHERE deleted_at IS NULL ORDER BY id DESC LIMIT. The single-column deleted_at index at lines 24-27 covers only the WHERE clause; ORDER BY id DESC gets no index support, so the filesort cost grows with the data volume and worsens p95 latency for paged queries.
The fix is to change the index to a composite (deleted_at, id) index, which filters out deleted rows while keeping the result ordered by id.
Verify with EXPLAIN on staging that "Using filesort" disappears, and compare paged-query latency before and after the change to confirm the improvement.
Suggested fix
`@Table`(name = "coupon_template", indexes = {
-    // Active coupon template list: WHERE deleted_at IS NULL
-    `@Index`(name = "idx_coupon_template_deleted", columnList = "deleted_at")
+    // Active coupon template list: WHERE deleted_at IS NULL ORDER BY id DESC
+    `@Index`(name = "idx_coupon_template_deleted_id", columnList = "deleted_at, id")
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/coupon/coupontemplate/infrastructure/entity/CouponTemplateEntity.java`
around lines 24 - 27, The single-column index on deleted_at causes filesort for
queries from
CouponTemplateQueryRepositoryImpl.findAllByDeletedAtIsNull(pageable) which run
WHERE deleted_at IS NULL ORDER BY id DESC LIMIT; update the index definition in
CouponTemplateEntity (the `@Table`/`@Index` on the class) to a composite index on
(deleted_at, id) so the WHERE and ORDER BY can be served by the index, deploy
the change, run EXPLAIN on the paged query to confirm "Using filesort" is
removed and measure p95 page latency before/after to validate the improvement.
@DisplayName("[refreshIdListCacheForAllSorts()] brandId -> 모든 정렬 × cacheable 페이지 × (brand + all) ID 리스트 갱신")
void refreshIdListCacheForAllSortsSuccess() {
    // Act
    productCommandService.refreshIdListCacheForAllSorts(1L);

    // Assert — 3 sorts × 2 pages × 2 (brand + all) = 12 calls
    verify(productCacheManager, times(12)).refreshIdList(any(), any());
}

}

@Nested
@DisplayName("refreshIdListCacheForSort()")
class RefreshIdListCacheForSortTest {

    @Test
    @DisplayName("[refreshIdListCacheForSort()] brandId + PRICE_ASC -> 해당 정렬의 cacheable 페이지 × (brand + all) ID 리스트 갱신")
    void refreshIdListCacheForSortSuccess() {
        // Act
        productCommandService.refreshIdListCacheForSort(1L, ProductSortType.PRICE_ASC);

        // Assert — 2 pages × 2 (brand + all) = 4 calls
        verify(productCacheManager, times(4)).refreshIdList(any(), any());
The ID-list cache tests cannot catch missing combinations.
Verifying only times(12) and times(4) lets the tests pass even when the same key is refreshed repeatedly. In production, a particular sort × page × brand combination could remain stale while these tests stay green. Capture the cache keys or criteria actually passed using an ArgumentCaptor and verify they match the expected set of combinations exactly. Also add a test asserting that every sort × cacheable page × (brand/all) combination is invoked exactly once. As per coding guidelines ("**/*Test*.java"): unit tests must cover boundary values, failure cases, and exception flows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/test/java/com/loopers/catalog/product/application/service/ProductCommandServiceTest.java`
around lines 394 - 417, The tests refreshIdListCacheForAllSorts() and
refreshIdListCacheForSort() only assert call counts (times(12), times(4)) which
allows duplicate/incorrect keys to pass; update these tests to capture and
assert the exact cache-key/criteria combos passed to
productCacheManager.refreshIdList by using an ArgumentCaptor for the
key/criteria parameter in the tests for
productCommandService.refreshIdListCacheForAllSorts and
productCommandService.refreshIdListCacheForSort, build the expected set of
(sort, page, brand/all) combinations (based on ProductSortType and your
cacheable page range), assert that the captured arguments exactly match that
expected set (each combination appears exactly once), and keep the original call
count assertion as a sanity check.
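Rather than a bare times(12), the test can compare captured keys against the full expected set. The sketch below builds that set; the sort names, the cacheable page range (0-1), and the key format are assumptions for illustration — the real test would assert that ArgumentCaptor.getAllValues() matches it exactly (e.g., containsExactlyInAnyOrderElementsOf).

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Builds the full expected set of cache-refresh combinations so a test can
// compare it against captured arguments instead of a bare invocation count.
// Sort names, page range, and key format are illustrative assumptions.
class ExpectedRefreshCombos {
    static Set<String> build() {
        Set<String> expected = new HashSet<>();
        for (String sort : List.of("LATEST", "PRICE_ASC", "LIKES_DESC")) {
            for (int page : new int[]{0, 1}) {               // assumed cacheable pages
                for (String scope : List.of("brand:1", "all")) {
                    expected.add(sort + ":" + page + ":" + scope);
                }
            }
        }
        return expected;
    }
}
```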
@DisplayName("[increaseLikeCount()] 상품 ID 전달 -> Provider Facade에 동일한 상품 ID로 좋아요 수 증가 위임")
void increaseLikeCountSuccess() {
    // Arrange
    Long productId = 42L;

    // Act
    productLikeCountSyncerImpl.increaseLikeCount(productId);

    // Assert — verify the exact product ID is delegated
    ArgumentCaptor<Long> captor = ArgumentCaptor.forClass(Long.class);
    verify(productCommandFacade).increaseLikeCount(captor.capture());
    assertThat(captor.getValue()).isEqualTo(productId);
Missing exception-flow coverage can let failure-path regressions slip through.
From an operational standpoint, if the Provider Facade throws (e.g., a DB/Redis failure) and the Syncer swallows the exception or converts it to another type, data-sync failures become hard to detect early.
As a fix, add an exception-propagation test for each of increaseLikeCount() and decreaseLikeCount().
Also include a boundary test that pins down the expected behavior for a null productId (immediate exception, or pass-through delegation).
Example diff for the additional tests
import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
import static org.mockito.BDDMockito.*;
import static org.mockito.Mockito.verify;
@@
class IncreaseLikeCountTest {
@@
void increaseLikeCountSuccess() {
@@
}
+
+ `@Test`
+ `@DisplayName`("[increaseLikeCount()] Provider Facade 예외 발생 시 동일 예외 전파")
+ void increaseLikeCountPropagatesException() {
+ Long productId = 42L;
+ RuntimeException expected = new RuntimeException("facade failure");
+ willThrow(expected).given(productCommandFacade).increaseLikeCount(productId);
+
+ assertThatThrownBy(() -> productLikeCountSyncerImpl.increaseLikeCount(productId))
+ .isSameAs(expected);
+ }
}
@@
class DecreaseLikeCountTest {
@@
void decreaseLikeCountSuccess() {
@@
}
+
+ `@Test`
+ `@DisplayName`("[decreaseLikeCount()] Provider Facade 예외 발생 시 동일 예외 전파")
+ void decreaseLikeCountPropagatesException() {
+ Long productId = 42L;
+ RuntimeException expected = new RuntimeException("facade failure");
+ willThrow(expected).given(productCommandFacade).decreaseLikeCount(productId);
+
+ assertThatThrownBy(() -> productLikeCountSyncerImpl.decreaseLikeCount(productId))
+ .isSameAs(expected);
+ }
}

As per coding guidelines ("**/*Test*.java"): unit tests must cover boundary values, failure cases, and exception flows.
Also applies to: 62-73
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/test/java/com/loopers/engagement/productlike/infrastructure/acl/catalog/ProductLikeCountSyncerImplTest.java`
around lines 40 - 51, Add tests in ProductLikeCountSyncerImplTest to cover
failure and boundary cases: for both
productLikeCountSyncerImpl.increaseLikeCount and .decreaseLikeCount, add tests
that stub productCommandFacade.increaseLikeCount / .decreaseLikeCount to throw a
runtime exception (e.g., new RuntimeException("...")) and assert that the Syncer
propagates that exception (use assertThrows) rather than swallowing or
converting it; also add tests for null productId to explicitly assert the
expected behavior (either that the Syncer throws a NullPointerException or that
it delegates and lets the facade handle it), so tests validate exception
propagation and null-boundary handling.
- round5-docs/00-requirements.md - round5-docs/01-performance-improvement-analysis.md - round5-docs/02-performance-improvement-plan.md - round5-docs/03-as-is-performance-measurement.md - round5-docs/03-as-is-performance-visualization.html Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- ProductEntity, BrandEntity, CartItemEntity 인덱스 추가 - CouponTemplateEntity, IssuedCouponEntity, ProductLikeEntity 인덱스 추가 - OrderEntity, OrderItemEntity 인덱스 추가 - CLAUDE.md 4.10-4.11 인덱스 설계 섹션 추가 - .claude/skills/index-design/SKILL.md 인덱스 설계 스킬 추가 - apps/commerce-api/build.gradle.kts 벤치마크 sourceSet 설정 - ProductIndexPerformanceTest, ProductApiPerformanceTest 벤치마크 추가 - round5-docs/04-to-be-index-measurement.md 측정 결과 - round5-docs/04-to-be-index-visualization.html 시각화 Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove the likeCount field from the Product domain model
- Remove the likeCount increment/decrement methods from ProductCommandRepository
- Add the ProductReadModel domain repository interface
- Add ProductReadModelEntity, JpaRepository, and RepositoryImpl
- Change the ProductEntityMapper reconstruct signature
- Update ProductTest, ProductEntityMapperTest, ProductCommandRepositoryTest
- docs/todo/like-count-read-model-recount-batch.md: follow-up TODO

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Rebase AdminProductDetailOutDto and AdminProductOutDto on the ReadModel
- Rebase ProductDetailOutDto and ProductOutDto on the ReadModel
- Add ReadModel query methods to ProductQueryPort
- Implement ReadModel queries in ProductQueryPortImpl
- Switch ProductQuerydslRepository to single-table ReadModel queries (JOIN removed)
- Convert ProductQueryPortImplTest to ReadModel-based tests

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add ReadModel sync methods to ProductCommandService (create/update/delete/like count changes)
- Call ReadModel sync from BrandCommandFacade on brand update/delete
- Add ReadModel sync tests to ProductCommandServiceTest
- Add ReadModel sync verification to BrandCommandFacadeTest
- Convert ProductLikeCountConcurrencyTest to the ReadModel
- Update ProductStockConcurrencyTest, ProductLikeCountSyncerImplTest, OrderProductReaderImplTest

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
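The sync rule these commits imply (partial update on hit, insert on miss, with counter/timestamp columns preserved) can be sketched as a toy in-memory version. A `HashMap` stands in for the table and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Hypothetical read-model row: like_count/created_at are owned by other flows.
    record ReadModelRow(long id, String name, long likeCount, String createdAt) {}

    static final Map<Long, ReadModelRow> readModel = new HashMap<>();

    static void syncFromProduct(long id, String newName) {
        ReadModelRow existing = readModel.get(id);
        if (existing == null) {
            // Insert on miss: counters start from their defaults.
            readModel.put(id, new ReadModelRow(id, newName, 0L, "now"));
        } else {
            // Partial update on hit: only command-side fields are rewritten;
            // like_count and created_at are carried over, never overwritten.
            readModel.put(id, new ReadModelRow(id, newName,
                    existing.likeCount(), existing.createdAt()));
        }
    }

    public static void main(String[] args) {
        readModel.put(1L, new ReadModelRow(1L, "old", 42L, "2024-01-01"));
        syncFromProduct(1L, "new");
        ReadModelRow row = readModel.get(1L);
        System.out.println(row.name() + "," + row.likeCount() + "," + row.createdAt());
    }
}
```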
- ProductCacheManager: Cache-Aside pattern, try-catch fault isolation, TTL jitter
- ProductCacheConstants: cache key/TTL constants
- ProductCacheDto: cache-only DTO (Redis serialization)
- IdListCacheEntry: ID-list cache entry
- CacheLock, LocalCacheLock, RedisCacheLock: cache stampede defense
- ProductCacheManagerTest: serialization/deserialization, TTL, fault-isolation tests
- CacheStampedeTest: stampede-defense concurrency test
- LocalCacheLockTest: local lock unit tests

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
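The TTL jitter this commit mentions spreads expirations so keys cached at the same moment do not all expire and reload together. A minimal sketch; `withJitter` and the 10% ratio are assumptions, not the project's actual values:

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

public class Main {
    // Returns baseTtl plus a random jitter in [0, baseTtl * jitterRatio].
    static Duration withJitter(Duration baseTtl, double jitterRatio) {
        long baseMillis = baseTtl.toMillis();
        long maxJitter = (long) (baseMillis * jitterRatio);
        long jitter = ThreadLocalRandom.current().nextLong(maxJitter + 1);
        return Duration.ofMillis(baseMillis + jitter);
    }

    public static void main(String[] args) {
        // Detail-cache TTL of 2 minutes with 10% jitter lands in [120s, 132s].
        Duration ttl = withJitter(Duration.ofMinutes(2), 0.1);
        System.out.println(ttl.toMillis() >= 120_000 && ttl.toMillis() <= 132_000);
    }
}
```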
- Apply the product detail cache in ProductQueryFacade
- Apply the product list cache in ProductQueryService
- ProductCommandFacade cache write-through (immediate reload on update/delete)
- Add cache tests to ProductCommandFacadeTest, ProductQueryFacadeTest, ProductQueryServiceTest
- Add a cache-invalidation E2E test to ProductControllerE2ETest
- round5-docs/05-to-be-cache-measurement.md: cache measurement results
- round5-docs/05-to-be-cache-visualization.html: visualization
- round5-docs/06-2layer-cache-implementation-design.md: 2-layer cache design
- round5-docs/07-cache-eviction-analysis.md: cache eviction analysis
- docs/todo/cache-event-driven-refresh.md: follow-up TODO

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
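The 2-layer list read this commit applies (an ID-list lookup, then one bulk fetch of per-product details, with misses loaded from the DB and backfilled) can be sketched with a map standing in for Redis MGET. All names here are illustrative, not the project's actual API:

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

public class Main {
    record ProductCacheDto(long id, String name) {}

    static final Map<String, ProductCacheDto> redis = new HashMap<>();

    static List<ProductCacheDto> assemble(List<Long> ids,
            Function<List<Long>, List<ProductCacheDto>> dbLoader) {
        // Layer 2: one bulk read for all keys; null marks a miss (like MGET).
        List<ProductCacheDto> hits = ids.stream()
                .map(id -> redis.get("product:v1:" + id))
                .toList();
        List<Long> missed = new ArrayList<>();
        for (int i = 0; i < ids.size(); i++)
            if (hits.get(i) == null) missed.add(ids.get(i));
        Map<Long, ProductCacheDto> loaded = missed.isEmpty() ? Map.of()
                : dbLoader.apply(missed).stream()
                        .collect(Collectors.toMap(ProductCacheDto::id, d -> d));
        // Backfill missed details so the next read is a full hit.
        loaded.values().forEach(d -> redis.put("product:v1:" + d.id(), d));
        // Merge while preserving the Layer-1 ID-list order.
        List<ProductCacheDto> out = new ArrayList<>();
        for (int i = 0; i < ids.size(); i++)
            out.add(hits.get(i) != null ? hits.get(i) : loaded.get(ids.get(i)));
        return out;
    }

    public static void main(String[] args) {
        redis.put("product:v1:1", new ProductCacheDto(1, "cached"));
        List<ProductCacheDto> page = assemble(List.of(1L, 2L),
                miss -> miss.stream().map(id -> new ProductCacheDto(id, "db")).toList());
        System.out.println(page.get(0).name() + "," + page.get(1).name());
        System.out.println(redis.containsKey("product:v1:2"));
    }
}
```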
- round5-docs/08-cross-domain-index-and-cache-analysis.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- lock/: CacheLock, LocalCacheLock, RedisCacheLock (stampede-prevention concern)
- dto/: ProductCacheDto, IdListCacheEntry (cache value types)
- Keep ProductCacheManager and ProductCacheConstants at the cache/ root
- Update all import paths (main + test)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed 45fc52a to d31dfd2
📌 Summary
- Removed `products LEFT JOIN brands` → single-table queries against `product_read_model`
- Added 12 composite indexes to the `product_read_model` table (6 use cases × user/admin × with/without-brand combinations)
- Cacheable list pages (`page` 0~1, `size` 20): Layer 1 = ID-list cache, Layer 2 = detail cache, assembled via MGET
- `like_count` and `created_at` are preserved
- Hardened `LocalCacheLock` with a ref-counted ReentrantLock to guarantee same-key miss serialization
- Split the cache package into `lock/` (stampede prevention) + `dto/` (cache value types)

🧭 Context & Decision
Problem definition
- Every query ran `products LEFT JOIN brands`, performing the JOIN on each request

Options and decisions
- A separate denormalized `product_read_model` table
- Composite index `(brand_id, deleted_at, sort_col)` with the high-cardinality `brand_id` leading
- Cache keys `product:v1:{id}` and `products:ids:v1:{...}`, serialized without `@class`
- `LocalCacheLock` as `@Primary`; `RedisCacheLock` on standby for distributed environments

🏗️ Design Overview
Scope of changes
- Introduced the `product_read_model` table, denormalized `brand_name`/`like_count`, and removed likeCount from the Product domain
- Prevented `like_count`/`created_at` overwrites; LocalCacheLock is ref-count based, eliminating the premature holder-removal race
- Split the cache package into `lock/` (stampede prevention) + `dto/` (cache value types) subpackages

Key component responsibilities
- `ProductReadModelEntity`: read-only denormalized table (includes `brandName`, `likeCount`, `description`)
- `ProductCacheManager`: Redis Cache-Aside, try-catch fault isolation, TTL jitter, MGET, write-through
- `cache/dto/ProductCacheDto`: shared PLP+PDP cache DTO (Read Model projection)
- `cache/dto/IdListCacheEntry`: record holding the ID list + `totalElements` cache value
- `cache/lock/CacheLock` (`LocalCacheLock` `@Primary` + `RedisCacheLock` on standby): cache stampede defense (double-checked locking)
- `ProductCommandService`: ReadModel sync + detail-cache write-through (likes/stock/brand name)
- `ProductCommandFacade`: ID-list cache write-through orchestration (create/update/delete, price changes)

🔁 Flow Diagram
Product list query (2-layer cache)
```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant Service as ProductQueryService
    participant Cache as ProductCacheManager
    participant DB
    Client->>Service: searchProducts(brandId, sort, page, size)
    Service->>Service: isCacheable(page, size)
    alt page 0~1 && size 20
        Service->>Cache: get(ID-list cache key)
        alt ID-list cache hit
            Cache-->>Service: IdListCacheEntry (ids, total)
        else Cache miss
            Service->>DB: searchProductIds (QueryDSL)
            DB-->>Service: ids + total
            Service->>Cache: put(ID list, TTL 3 min)
        end
        Service->>Cache: MGET(product:v1:{id1}, ..., product:v1:{id20})
        alt Full hit
            Cache-->>Service: List<ProductCacheDto>
        else Partial miss
            Service->>DB: findProductCacheDtosByIds(missedIds)
            DB-->>Service: List<ProductCacheDto>
            Service->>Cache: put(each detail, TTL 2 min)
        end
    else Non-cached path
        Service->>DB: searchProducts (QueryDSL)
        DB-->>Service: PageResult<ProductOutDto>
    end
    Service-->>Client: ProductPageOutDto
```

Write-through refresh (product update example)
```mermaid
sequenceDiagram
    autonumber
    participant Admin
    participant Facade as ProductCommandFacade
    participant Service as ProductCommandService
    participant ReadModel as ProductReadModelRepository
    participant Cache as ProductCacheManager
    Admin->>Facade: updateProduct(id, dto)
    Facade->>Service: updateProduct + syncReadModel
    Service->>ReadModel: save (partial update on hit / insert on miss)
    Facade->>Service: refreshProductDetailCache(id)
    Service->>Cache: put(product:v1:{id}, reload)
    Facade->>Service: refreshIdListCache(brandId, PRICE_ASC)
    Service->>Cache: put(products:ids:v1:..., reload)
```

📝 Commits
docs, feat, feat, feat, feat, feat, feat, docs, refactor

📊 Performance measurement summary
Index application (Read Model + 12 composite indexes)
- EXPLAIN access type improved from `ALL` (Full Table Scan) to `ref`/`range` (Index Range Scan)

Cache application (2-layer cache + write-through)
Key takeaways
- The `brandId=X + PRICE_ASC` worst case is 3884.57 ms
- Write-through avoids overwriting `like_count` and `created_at`
- Only `page` 0~1 with `size` 20 is cached; all other pages/sizes query the DB directly

🧪 Tests and verification
- `ProductIndexPerformanceTest` (index EXPLAIN + query time) / `ProductApiPerformanceTest` (cold-miss / hot-hit measured separately)
- `ProductCacheManagerTest` (serialization, TTL jitter, getOrLoad, MGET, fault isolation)
- `CacheStampedeTest` (100 concurrent requests → loader called only once), `LocalCacheLockTest` (same-key serialization / different-key parallelism)
- `ProductReadModelRepositoryImplTest` (`like_count`, `created_at` preserved), `ProductCommandServiceTest`, `ProductLikeCountConcurrencyTest`
- `ProductQueryServiceTest` (2-layer hit/miss/partial miss) / `ProductQueryFacadeTest` (detail cache)
- `ProductCommandFacadeTest` (verifies reload calls on update/delete)
- `ProductControllerE2ETest` (data consistency after cache write-through)
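The same-key serialization and cleanup behavior that `LocalCacheLockTest` checks can be sketched with a ref-counted per-key lock. A minimal sketch, assuming made-up class and method names; the entry is removed from the map only when the last user releases it, which avoids the race where an early removal lets two threads lock different instances for the same key:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

public class Main {
    static final class Holder {
        final ReentrantLock lock = new ReentrantLock();
        int refCount;
    }

    private final Map<String, Holder> locks = new ConcurrentHashMap<>();

    <T> T executeWithLock(String key, Supplier<T> action) {
        // Register interest atomically before locking.
        Holder h = locks.compute(key, (k, v) -> {
            if (v == null) v = new Holder();
            v.refCount++;
            return v;
        });
        h.lock.lock();
        try {
            return action.get();
        } finally {
            h.lock.unlock();
            // Remove the holder only when no thread still references it.
            locks.compute(key, (k, v) -> (--v.refCount == 0) ? null : v);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Main locker = new Main();
        int[] counter = {0};
        Runnable task = () -> locker.executeWithLock("product:v1:1", () -> counter[0]++);
        Thread[] threads = new Thread[50];
        for (int i = 0; i < threads.length; i++) threads[i] = new Thread(task);
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        System.out.println(counter[0]);             // same-key calls are serialized
        System.out.println(locker.locks.isEmpty()); // holders are cleaned up
    }
}
```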