[volume-5] Product read performance optimization: indexes, like-count denormalization, Redis caching #207
leeedohyun wants to merge 11 commits into Loopers-dev-lab:leeedohyun
Conversation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

This change refactors product read/update/delete logic into separate read and write concerns. It introduces the ProductReader (read) and ProductWriter (write) domain services, adds a Redis-based dual-cache system (list and detail caches) with a locking mechanism to prevent cache stampedes, and includes k6 load-test scripts.

Changes
Sequence Diagram

sequenceDiagram
participant Client as Client
participant Reader as ProductReader
participant Cache as CacheRepository
participant Service as ProductService
Client->>Reader: readActiveProducts(brandId, sort, page)
Reader->>Cache: get(listKey)
Cache-->>Reader: null
Reader->>Reader: tryLock(listKey)
Reader->>Service: fetch id-page (DB)
Service-->>Reader: id-page
Reader->>Cache: multiGet(detailKeys)
Cache-->>Reader: partial/missing
Reader->>Service: fetch missing details (DB)
Service-->>Reader: Products
Reader->>Cache: multiPut(detailEntries, ttl)
Cache-->>Reader: ok
Reader->>Cache: put(listKey, id-page, ttl)
Cache-->>Reader: ok
Reader-->>Client: Page<Product>
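The read path in the diagram above can be sketched roughly as follows. This is an illustrative cache-aside loop with a per-key lock; CacheAsideSketch and its plain in-memory maps are stand-ins for ProductReader's Redis-backed CacheRepository, not the PR's actual code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Minimal cache-aside sketch: on a miss, only one thread per key rebuilds
// the entry; threads that fail to acquire the lock fall through to the
// loader (DB) instead of piling up behind the rebuild.
public class CacheAsideSketch {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public Object read(String key, Supplier<Object> loader) {
        Object cached = cache.get(key);
        if (cached != null) {
            return cached; // cache hit
        }
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        boolean locked = false;
        try {
            locked = lock.tryLock(100, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (!locked) {
            return loader.get(); // lock not acquired: bypass the cache, hit the DB
        }
        try {
            cached = cache.get(key); // re-check: another thread may have filled it
            if (cached != null) {
                return cached;
            }
            Object value = loader.get();
            cache.put(key, value); // the real implementation would attach a TTL here
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```

The double-check after acquiring the lock is what keeps concurrent missers from each hitting the DB.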
sequenceDiagram
participant Client as Client
participant Writer as ProductWriter
participant Service as ProductService
participant Cache as CacheRepository
Client->>Writer: increaseLikeCount(productId)
Writer->>Service: increaseLikeCount(productId)
Service-->>Writer: ok
Writer->>Service: readActiveProduct(productId)
Service-->>Writer: Product
Writer->>Cache: put(detailKey, Product, ttl)
Cache-->>Writer: ok
Writer-->>Client: ok
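The write path in the diagram above (update the source of truth, re-read, then overwrite the detail cache) can be sketched like this. The plain maps stand in for the DB and Redis, and all names are illustrative rather than the PR's ProductWriter/ProductService code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Write-through sketch for the like-count path: update the source of truth
// first, re-read the fresh state, then overwrite the detail cache entry so
// readers never see a stale count for this key.
public class WriteThroughSketch {
    private final Map<Long, Integer> db = new ConcurrentHashMap<>();      // productId -> likeCount
    private final Map<String, Integer> cache = new ConcurrentHashMap<>(); // detailKey -> likeCount

    public void increaseLikeCount(long productId) {
        db.merge(productId, 1, Integer::sum);            // 1. atomic DB update
        Integer fresh = db.get(productId);               // 2. re-read current state
        cache.put("product:detail:" + productId, fresh); // 3. overwrite cache (TTL omitted)
    }

    public Integer cachedLikeCount(long productId) {
        return cache.get("product:detail:" + productId);
    }
}
```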
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Pull request overview

This PR addresses the bottlenecks in product list/detail queries (full scans with filesort, DB connection pool exhaustion) by adding indexes, sorting on a denormalized like count, Redis caching (a list-of-IDs cache plus a detail cache), and a stampede-prevention lock.

Changes:
- Added six indexes matching the Product sort/filter combinations and adjusted the sort criteria
- Introduced Cache-Aside Redis caching for product reads (an ID list for the list view, a single source of truth for details) with lock- and TTL-jitter-based stampede mitigation
- Added k6 load-test scripts, performance report documents, and test code
Reviewed changes

Copilot reviewed 30 out of 31 changed files in this pull request and generated 11 comments.

Summary per file:
| File | Description |
|---|---|
| modules/redis/src/testFixtures/java/com/loopers/testcontainers/RedisTestContainersConfig.java | Adjusted when the Redis Testcontainers properties are set (moved system-property setup into a static initializer) |
| k6/run.sh | Added a helper script for running k6 and exporting reports |
| k6/product-list-test.js | Added product-list load scenarios (sort/member/brand combinations) |
| k6/product-detail-test.js | Added a product-detail load scenario (long-tail distribution) |
| apps/commerce-api/src/test/java/com/loopers/support/BaseE2ETest.java | Added Redis flush cleanup after each E2E test |
| apps/commerce-api/src/test/java/com/loopers/infrastructure/shared/cache/RedisCacheRepositoryIntegrationTest.java | Added RedisCacheRepository integration tests |
| apps/commerce-api/src/test/java/com/loopers/domain/shared/cache/CacheKeyTest.java | Added CacheKey unit tests |
| apps/commerce-api/src/test/java/com/loopers/domain/product/ProductWriterTest.java | Added ProductWriter unit tests |
| apps/commerce-api/src/test/java/com/loopers/domain/product/ProductReaderTest.java | Added ProductReader unit tests (lock/partial-miss/caching) |
| apps/commerce-api/src/test/java/com/loopers/domain/product/ProductFixture.java | Added fixtures for the ProductReader/Writer tests |
| apps/commerce-api/src/main/java/com/loopers/infrastructure/shared/cache/RedisCacheRepository.java | Added the Redis (JSON) implementation of CacheRepository (plus SCAN-based pattern eviction) |
| apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheType.java | Added a super type token to preserve generic types |
| apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheRepository.java | Added the cache-store port interface |
| apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheKey.java | Added a value object for cache key generation/patterns |
| apps/commerce-api/src/main/java/com/loopers/domain/product/ProductWriter.java | Added a domain service for per-write-event cache invalidation/write-through |
| apps/commerce-api/src/main/java/com/loopers/domain/product/ProductSortType.java | Simplified the sort criteria (to exploit the indexes) |
| apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java | Changed update's return type (void→Product) to support cache overwrite |
| apps/commerce-api/src/main/java/com/loopers/domain/product/ProductReader.java | Introduced layered read caching (ID list + detail) and lock-based stampede prevention |
| apps/commerce-api/src/main/java/com/loopers/domain/product/ProductCacheConstants.java | Added cache key/TTL/jitter/type constants |
| apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java | Added sort/brand-filter indexes and a likeCount adjustment method |
| apps/commerce-api/src/main/java/com/loopers/application/product/UpdateProductUseCase.java | Delegated update to ProductWriter (including cache overwrite) |
| apps/commerce-api/src/main/java/com/loopers/application/product/ReadActiveProductsUseCase.java | Routed list reads through ProductReader |
| apps/commerce-api/src/main/java/com/loopers/application/product/ReadActiveProductDetailUseCase.java | Routed detail reads through ProductReader |
| apps/commerce-api/src/main/java/com/loopers/application/product/ProductDetailAssembler.java | Applied a read-only transaction to the list assembly logic |
| apps/commerce-api/src/main/java/com/loopers/application/product/DeleteProductUseCase.java | Delegated delete to ProductWriter (including list-pattern evict) |
| apps/commerce-api/src/main/java/com/loopers/application/like/UnlikeProductUseCase.java | Decremented likeCount via ProductWriter on unlike, refreshing the cache |
| apps/commerce-api/src/main/java/com/loopers/application/like/LikeProductUseCase.java | Incremented likeCount via ProductWriter on like, refreshing the cache |
| .gitignore | Ignored k6 web-dashboard report files |
| .docs/performance/performance-report-index.md | Added the index-optimization performance report |
| .docs/performance/performance-report-cache.md | Added the cache-strategy/lock/jitter performance report |
| .docs/performance/performance-base.md | Added the performance-test environment/baseline document |
```java
List<String> keys = new ArrayList<>();

try (Cursor<String> cursor = redisTemplate.scan(options)) {
    while (cursor.hasNext()) {
        keys.add(cursor.next());
    }
}

if (!keys.isEmpty()) {
    redisTemplate.delete(keys);
    log.debug("Cache EVICT — pattern={}, deletedKeys={}", keyPattern, keys.size());
}
```
Resolved comments:
- apps/commerce-api/src/main/java/com/loopers/application/product/ReadActiveProductsUseCase.java
- apps/commerce-api/src/main/java/com/loopers/application/product/ProductDetailAssembler.java
```java
private Page<Product> fetchAndCacheProducts(Long brandId, ProductSortType sortType, PageSize pageSize) {
    Page<Product> products = productService.getActiveProducts(brandId, sortType, pageSize);
```

```java
public Page<Product> readActiveProducts(Long brandId, ProductSortType sortType, PageSize pageSize) {
    String listKey = buildListKey(brandId, sortType, pageSize);
    ProductIdPage idPage = cacheRepository.get(listKey, ID_PAGE_TYPE);

    if (Objects.nonNull(idPage)) {
        return resolveProductsFromIdPage(idPage);
    }

    ReentrantLock lock = locks.computeIfAbsent(listKey, k -> new ReentrantLock());
    if (tryLockWithTimeout(lock)) {
        try {
```
Resolved comments:
- apps/commerce-api/src/main/java/com/loopers/domain/product/ProductSortType.java
- ...mmerce-api/src/main/java/com/loopers/application/product/ReadActiveProductDetailUseCase.java
```java
void returnsNull_whenTtlExpired() throws InterruptedException {
    // arrange
    String key = "test:ttl";
    cacheRepository.put(key, "expiring", Duration.ofSeconds(1));

    // act
    Thread.sleep(1500);
    String result = cacheRepository.get(key, STRING_TYPE);
```
```javascript
// Target product ID range (adjust to the IDs that actually exist in the DB)
const MIN_PRODUCT_ID = 1;
const MAX_PRODUCT_ID = __ENV.MAX_PRODUCT_ID ? parseInt(__ENV.MAX_PRODUCT_ID) : 1000;
```
```java
String json = objectMapper.writeValueAsString(value);
connection.stringCommands().setEx(
    key.getBytes(), ttlSupplier.get().getSeconds(), json.getBytes()
);
```
Actionable comments posted: 11
🧹 Nitpick comments (10)
apps/commerce-api/src/main/java/com/loopers/application/product/ProductDetailAssembler.java (1)

47-54: Products are silently dropped when their brand is missing, making production incidents hard to diagnose.

Filtering with brands.containsKey(product.getBrandId()) means a product whose brand was soft-deleted, or whose data integrity is broken, is dropped from the result without any log. When a "product doesn't show up in search" report comes in from production, the root cause is hard to trace.

Suggested fix:
- Emit a WARN-level log when products are dropped
- Add a monitoring metric (e.g. product.missing_brand.count)

Additional tests:
- Verify the log is emitted when a list query includes a product whose brand was deleted
- Verify the product-count difference before and after filtering

♻️ Suggested logging addition

```diff
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;

 @Component
 @RequiredArgsConstructor
 public class ProductDetailAssembler {
+    private static final Logger log = LoggerFactory.getLogger(ProductDetailAssembler.class);
+
     private final BrandService brandService;
     private final LikeService likeService;

     Map<Long, Brand> brands = brandService.getActiveBrandMap(brandIds);
     Set<Long> likedProductIds = likeService.getLikedProductIds(userId, productIds);

+    List<Product> productsWithMissingBrand = products.stream()
+        .filter(product -> !brands.containsKey(product.getBrandId()))
+        .toList();
+    if (!productsWithMissingBrand.isEmpty()) {
+        log.warn("products excluded due to missing brand: count={}, ids={}",
+            productsWithMissingBrand.size(),
+            productsWithMissingBrand.stream().map(Product::getId).toList());
+    }
+
     return products.stream()
         .filter(product -> brands.containsKey(product.getBrandId()))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/product/ProductDetailAssembler.java` around lines 47 - 54, ProductDetailAssembler currently silently filters out products whose brandId is missing by using brands.containsKey(...) which hides data issues; change the assembly logic so that before excluding a product you emit a WARN log (include product.getId() and product.getBrandId()) and increment a monitoring metric (e.g., product.missing_brand.count) for each missing brand, then continue to exclude the product; locate the stream that maps to ProductDetail.from(...) and replace the simple filter with a step that checks brands.containsKey(product.getBrandId()), logs via your app logger at WARN with identifying fields, increments the metric counter, and only calls ProductDetail.from(...) when the brand exists.k6/product-list-test.js (1)
18-20: With page=0 fixed, the three-page cache strategy goes unverified

This PR caches only the first three pages, but when every request reads page=0, each scenario just hammers a single hot key. From an operations standpoint the report then misses page 1–2 reuse, the bypass path beyond page 2, and the re-warming cost after invalidation, so the results can look better than they are. It is safer to spread page over 0–2 with weighted randomness and send a fraction of traffic past page 2 so the uncached path is measured as well. As additional checks, p95 and error rate should be verified separately per page bucket (0, 1, 2, >2).

Also applies to: 79-84
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6/product-list-test.js` around lines 18 - 20, The test currently fixes PAGE = 0 which prevents verifying multi-page caching behavior; replace the fixed PAGE constant with a page selection function (e.g., getPage or choosePage) that returns a weighted-random page: mostly 0–2 (with configurable weights for 0,1,2) and some fraction mapping to >2 (e.g., returns 3+). Use that function wherever PAGE was used (reference PAGE constant in k6/product-list-test.js and the repeated usages around the lines corresponding to 79-84) and attach a pageBucket tag/value to each request (values "0","1","2",">2") so metrics (p95, error rate) can be grouped per bucket; ensure the query param uses the dynamic page and add measurement splits for p95 and error-rate by this pageBucket tag.apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheKey.java (2)
21-23: Passing an empty prefixSegments array produces an empty prefix.

Calling new CacheKey() leaves the prefix as an empty string, so of(123) returns :123. If that is not the intended behavior, add validation that enforces at least one segment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheKey.java` around lines 21 - 23, The CacheKey constructor CacheKey(String... prefixSegments) currently creates an empty prefix when called with no arguments (e.g. new CacheKey()), causing outputs like of(123) -> ":123"; add validation in the CacheKey(String... prefixSegments) constructor to require at least one non-empty segment (and non-null array), throwing an IllegalArgumentException with a clear message if prefixSegments is null or length == 0 (or if all segments are empty), so callers must supply at least one prefix segment; keep building the prefix with String.join(DELIMITER, prefixSegments) only after this validation.
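A minimal CacheKey variant that enforces the constructor validation described above, plus a null guard in of(...) (the concern raised in the next nitpick), might look like this. SafeCacheKey is an illustrative sketch that mirrors the reviewed VO's shape, not its actual code.

```java
import java.util.Objects;

// Defensive cache-key sketch: the constructor rejects an empty prefix so
// of(123) can never produce ":123", and of(...) rejects null segments so
// of(null) can never collide with of("null").
public class SafeCacheKey {
    private static final String DELIMITER = ":";
    private final String prefix;

    public SafeCacheKey(String... prefixSegments) {
        if (prefixSegments == null || prefixSegments.length == 0) {
            throw new IllegalArgumentException("at least one prefix segment is required");
        }
        this.prefix = String.join(DELIMITER, prefixSegments);
    }

    public String of(Object... segments) {
        StringBuilder sb = new StringBuilder(prefix);
        for (Object segment : segments) {
            Objects.requireNonNull(segment, "cache key segment must not be null");
            sb.append(DELIMITER).append(segment);
        }
        return sb.toString();
    }
}
```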
31-37: A null segment is rendered as the string "null", risking key collisions.

of(null) and of("null") generate the same key, which can turn into a cache-collision bug that is hard to debug in production. Defensively validate for null, or at minimum state in the Javadoc whether null is allowed.

♻️ Suggested null validation

```diff
 public String of(Object... segments) {
     StringBuilder sb = new StringBuilder(prefix);
     for (Object segment : segments) {
+        if (segment == null) {
+            throw new IllegalArgumentException("Cache key segment must not be null");
+        }
         sb.append(DELIMITER).append(segment);
     }
     return sb.toString();
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheKey.java` around lines 31 - 37, CacheKey.of currently appends null segments as the literal "null", causing collisions (e.g., of(null) == of("null")). Fix by adding defensive null validation in CacheKey.of: for each segment call Objects.requireNonNull(segment, "CacheKey.of: segment must not be null") (or throw IllegalArgumentException) before appending (use existing DELIMITER and prefix), and update the CacheKey.of Javadoc to state that null segments are not allowed and will cause an exception.apps/commerce-api/src/test/java/com/loopers/support/BaseE2ETest.java (1)
26-32: The Javadoc does not reflect the Redis cleanup.

The Javadoc on line 15 only states that "the database is automatically reset after each test," but Redis is reset as well. When documentation and implementation diverge, maintenance becomes confusing.

Also, RedisCleanUp.truncateAll() uses the FLUSHALL command, so if it ever runs against a shared (non-test) Redis instance it risks wiping all data. Guarding it with a test profile or an environment variable is recommended.

♻️ Suggested Javadoc fix

```diff
 /**
  * E2E 테스트의 공통 설정을 제공한다.
  *
  * <p>RANDOM_PORT 환경에서 {@link TestRestTemplate}을 통해 실제 HTTP 요청을 수행하며,
- * 각 테스트 종료 후 데이터베이스를 자동으로 초기화한다.
+ * 각 테스트 종료 후 데이터베이스와 Redis를 자동으로 초기화한다.
  */
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/support/BaseE2ETest.java` around lines 26 - 32, Update the Javadoc on the BaseE2ETest class to state that both the database and Redis are cleared after each test (reflecting that cleanUp() calls databaseCleanUp.truncateAll() and redisCleanUp.truncateAll()), and add a safety guard so RedisCleanUp.truncateAll() only runs in a test profile or when a specific environment flag is set; either check the active Spring profile (e.g., "test") or an environment variable before invoking RedisCleanUp.truncateAll(), or alter RedisCleanUp to refuse to run FLUSHALL unless the safe test flag/profile is present.apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheRepository.java (1)
18-24: The TTL-less put(key, value) method carries a memory-leak risk.

An entry stored without a TTL is kept until explicitly deleted and occupies Redis memory, which can show up as unexpected memory growth in production. Restrict where this method is used, or at least add a Javadoc warning: "use only when a permanent cache is required."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheRepository.java` around lines 18 - 24, The put(String key, T value) method stores values without TTL which can cause unbounded Redis memory growth; update the Javadoc for CacheRepository.put to include a clear warning that entries are persisted permanently until explicitly deleted, advise restricting its use to cases that truly require permanent caching, and suggest preferring the TTL-based overloads or providing guidance on responsible usage (e.g., call sites should ensure explicit eviction) so reviewers can spot risky uses.apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java (1)
73-75: Clarify the usage scenario and concurrency guarantees of adjustLikeCount.

ProductService performs atomic DB updates via incrementLikeCount/decrementLikeCount, whereas this method mutates the entity field directly. It looks like a cache write-through helper, but under concurrent requests the entity state can diverge from the DB state.

If its purpose is to recompute state after a read for a cache refresh, document in the Javadoc that it is for cache refresh only and that concurrency is the caller's responsibility.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java` around lines 73 - 75, The adjust is directly mutating the entity field and can diverge from the DB atomic updates in ProductService.incrementLikeCount/ProductService.decrementLikeCount; update the Product.adjustLikeCount method's Javadoc to state it is intended only for cache/local entity refresh (not for authoritative DB updates), that it does not provide concurrency or atomicity guarantees, and that callers are responsible for ensuring consistency (or call the DB atomic update methods). If this method should not be public, restrict visibility (e.g., make it package-private/private) and document that change in the Javadoc as well.apps/commerce-api/src/test/java/com/loopers/domain/product/ProductWriterTest.java (2)
95-111: The like-count update tests do not verify call order, so regressions can slip through.

They currently only check that each call happened, so the tests would still pass if the implementation changed and the cache refresh ran before the DB update. The fix is to enforce the increase/decrease -> getActiveProduct -> cache put order with InOrder.

🔧 Suggested diff

```diff
+import org.mockito.InOrder;
+import static org.mockito.Mockito.inOrder;
 ...
 then(productService).should().increaseLikeCount(productId);
 then(productService).should().getActiveProduct(productId);
 then(cacheRepository).should().put(eq(detailKey), eq(product), any(Duration.class));
+InOrder inOrder = inOrder(productService, cacheRepository);
+inOrder.verify(productService).increaseLikeCount(productId);
+inOrder.verify(productService).getActiveProduct(productId);
+inOrder.verify(cacheRepository).put(eq(detailKey), eq(product), any(Duration.class));
```

As additional coverage, split out order-verification tests for increase and decrease that fail whenever the call order is wrong.

As per coding guidelines: **/*Test*.java: unit tests must cover failure cases and exception flows to catch regressions early.

Also applies to: 118-134
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/domain/product/ProductWriterTest.java` around lines 95 - 111, The test currently only verifies that increaseLikeCount, getActiveProduct, and cacheRepository.put were called, but not their order; update ProductWriterTest to assert call order using Mockito's InOrder: create InOrder(increase/decrease test) for productService and cacheRepository and verify that productService.increaseLikeCount(productId) (or decrease), then productService.getActiveProduct(productId), then cacheRepository.put(detailKey, product, any(Duration.class)) are called in that exact sequence; add a separate ordered-verification for the decrease test (the one around lines 118-134) so both increase and decrease flows fail if the implementation orders calls incorrectly.
33-135: Exception-flow tests are missing, making cache-sync regressions hard to catch.

The current cases mostly cover the happy path, so a regression in which the cache is wrongly updated when getActiveProduct or update/delete throws could go unnoticed. The fix is to add exception scenarios that explicitly verify cacheRepository.put/evict is not called and that the exception propagates.

Recommended additional tests:
- Verify cacheRepository.put is not called when productService.getActiveProduct throws inside increaseLikeCount/decreaseLikeCount.
- Verify the cache is not refreshed when productService.update throws inside update.
- Verify the like/cache follow-up actions do not run when productService.delete throws inside delete.

As per coding guidelines: **/*Test*.java: unit tests must include boundary values, failure cases, and exception flows.
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/domain/product/ProductWriterTest.java` around lines 33 - 135, Tests in ProductWriterTest lack negative/exception flows; add unit tests that simulate exceptions from productService and verify cacheRepository is not called and the exception propagates. Specifically: in ProductWriterTest add tests for increaseLikeCount and decreaseLikeCount where given(productService.getActiveProduct(productId)).willThrow(...) and assertThrows for productWriter.increaseLikeCount/decreaseLikeCount and then verify cacheRepository.put is never invoked; add a test for update where given(productService.update(modifyProduct)).willThrow(...) and assertThrows on productWriter.update(modifyProduct) and verify cacheRepository.put is not called; add a test for delete where given(productService.delete(productId)).willThrow(...) and assertThrows on productWriter.delete(productId) and verify no cacheRepository.evictions occur; use the existing Mockito patterns (given(...).willThrow(...), assertThrows, then(cacheRepository).shouldHaveNoInteractions() or should(never()) checks) and reference ProductWriterTest, productWriter, productService, and cacheRepository to locate where to add these tests.apps/commerce-api/src/test/java/com/loopers/infrastructure/shared/cache/RedisCacheRepositoryIntegrationTest.java (1)
96-103: The TTL-expiry check relies on a fixed sleep and is flaky.

The fixed Thread.sleep(1500) on line 102 and line 243 causes intermittent failures depending on environment latency, a pattern that inflates retryable failures in the CI pipeline. The fix is to poll for expiry within a maximum wait time.

🔧 Suggested diff

```diff
-Thread.sleep(1500);
-String result = cacheRepository.get(key, STRING_TYPE);
+String result = null;
+long deadline = System.nanoTime() + Duration.ofSeconds(3).toNanos();
+while (System.nanoTime() < deadline) {
+    result = cacheRepository.get(key, STRING_TYPE);
+    if (result == null) {
+        break;
+    }
+    Thread.sleep(50);
+}
```

As an additional test, run the same TTL scenario repeatedly with @RepeatedTest to confirm the expiry check is stable.

As per coding guidelines: **/*Test*.java: integration tests should be checked for flakiness.

Also applies to: 238-244
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/infrastructure/shared/cache/RedisCacheRepositoryIntegrationTest.java` around lines 96 - 103, The test returnsNull_whenTtlExpired is flaky because it uses a fixed Thread.sleep(1500); replace that wait with a polling loop that repeatedly calls cacheRepository.get(key, STRING_TYPE) until it returns null or a configurable timeout elapses (e.g., 2–3 seconds) to avoid environment-dependent timing; update the assertion to fail only after the timeout, and annotate the test with `@RepeatedTest` (e.g., repeat a few times) to validate stability; apply the same polling approach to the other occurrence around Thread.sleep in this test class.
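The polling approach recommended above can be factored into a tiny helper; PollUntil is a hypothetical name (in a real suite Awaitility provides the same behavior), shown here only to illustrate the deadline-plus-interval loop.

```java
import java.time.Duration;
import java.util.function.Supplier;

// Deadline-based polling: re-check the condition until it holds or the
// timeout passes, instead of a single fixed Thread.sleep.
public class PollUntil {
    public static boolean await(Supplier<Boolean> condition, Duration timeout, Duration interval) {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (!condition.get()) {
            if (System.nanoTime() >= deadline) {
                return false; // condition never held within the timeout
            }
            try {
                Thread.sleep(interval.toMillis());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }
}
```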
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductCacheConstants.java`:
- Around line 47-52: The Javadoc promises a "minimum 1 second" but
applyJitter(Duration base) can return ≤0 seconds; update applyJitter to enforce
a floor of 1 second by computing the jitter offset as now, then clamping the
resulting seconds with Math.max(1, baseSeconds + offset) (or equivalent) before
returning Duration.ofSeconds; ensure you adjust only inside applyJitter so
callers of ProductCacheConstants.applyJitter get at least a 1-second TTL.
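A clamped jitter function along the lines this prompt describes could look like the sketch below; JITTER_SECONDS and the method shape are assumptions for illustration, not the PR's actual constants.

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

// TTL jitter with a 1-second floor: spread TTLs by ±JITTER_SECONDS to avoid
// synchronized expiry, but never return a non-positive duration.
public class TtlJitter {
    private static final long JITTER_SECONDS = 30;

    public static Duration applyJitter(Duration base) {
        long offset = ThreadLocalRandom.current().nextLong(-JITTER_SECONDS, JITTER_SECONDS + 1);
        return Duration.ofSeconds(Math.max(1, base.getSeconds() + offset));
    }
}
```

With a 5-second base the raw result can go as low as -25 seconds, which the Math.max clamp lifts back to the 1-second floor.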
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductReader.java`:
- Around line 202-206: The current releaseLock method (releaseLock) unlocks the
ReentrantLock and then conditionally removes it from the locks map using
lock.hasQueuedThreads(), which can remove the map entry for the same key if
another thread reacquired a new lock instance in between; change this by
stopping immediate removal from the locks map (do not call locks.remove(key,
lock) in releaseLock) or replace the map value with a reference-counted lock
holder or an expiry-based keyed-lock implementation so entries are removed
safely; update releaseLock/ReentrantLock usage accordingly and add a concurrency
test that races three threads on the same key to assert productService is
invoked exactly once and no extra lock entry is created during the race.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java`:
- Around line 112-118: ProductWriter.update() is performing
cacheRepository.put() outside the transaction started in
ProductService.update(), risking DB/ cache inconsistency; fix by making the
cache write participate in the same transaction (add `@Transactional` on
ProductWriter.update() or ensure it is invoked on a transactional proxy) or, if
keeping it non-transactional, add explicit error handling: catch cache write
failures in ProductWriter.update(), log detailed error and rethrow a runtime
exception so the outer transaction can roll back, or implement a compensating
retry/alert mechanism; reference ProductService.update, ProductWriter.update,
cacheRepository.put and the `@Transactional` annotation when making the change.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductSortType.java`:
- Around line 16-17: The current ProductSortType enum entries PRICE_ASC and
LIKE_COUNT_DESC use single-field Sorts causing unstable pagination when values
tie; update PRICE_ASC to append a secondary Sort by id (or createdAt) ascending
and update LIKE_COUNT_DESC to append a secondary Sort by id (ascending) so ties
are deterministic (e.g., use Sort.by(...).and(Sort.by(Sort.Direction.ASC, "id"))
for PRICE_ASC and Sort.by(...).and(Sort.by(Sort.Direction.ASC, "id")) for
LIKE_COUNT_DESC); also add a pagination test that creates multiple products with
identical price/likeCount and verifies no duplicates or omissions across pages.
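The tie-breaking idea can be checked without Spring by expressing the same ordering as a plain Comparator; Row and priceAscStable are illustrative names. In Spring Data this corresponds to something like Sort.by("price").ascending().and(Sort.by("id").ascending()).

```java
import java.util.Comparator;
import java.util.List;

// Deterministic pagination sketch: when the primary key ties (same price or
// same likeCount), a unique secondary key such as id keeps the total order
// stable, so page boundaries cannot shuffle between requests.
public class StableSort {
    public record Row(long id, int price) {}

    public static List<Row> priceAscStable(List<Row> rows) {
        return rows.stream()
                .sorted(Comparator.comparingInt(Row::price)
                        .thenComparingLong(Row::id)) // unique tie-breaker
                .toList();
    }
}
```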
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductWriter.java`:
- Around line 58-70: Both increaseLikeCount(Long) and decreaseLikeCount(Long)
currently only call refreshCache(productId) which updates the product detail
cache but leaves the popularity-sorted list cache stale; modify these methods
(increaseLikeCount and decreaseLikeCount in ProductWriter) to also evict the
list cache by calling the cache eviction for the popularity list (e.g., evict
LIST_KEY.pattern() or the method that clears list keys) after updating counts,
so the popularity list is invalidated on like changes; update or add a
unit/integration test that preloads the popularity list cache, calls
increaseLikeCount/decreaseLikeCount, asserts the list cache was evicted, and
that a subsequent list fetch reflects the updated ordering.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/shared/cache/RedisCacheRepository.java`:
- Around line 57-154: Redis I/O exceptions are not currently handled so Redis
failures can propagate to 500s; wrap all Redis interactions in put,
put(String,Duration), get, multiGet, multiPut and evict in a broad
try/catch(RuntimeException) around the redisTemplate/connection calls (in
methods put, put(..., ttl), get, multiGet, multiPut and evict), log the
exception with context, and degrade to no-op semantics: for get return null, for
multiGet return a List of nulls preserving input key order, and for
put/multiPut/evict just log and continue (do not rethrow); keep existing
JsonProcessingException handling but add the RuntimeException catch around the
redisTemplate/connection calls (including the pipelined lambda using
ttlSupplier.get()) so cache failures do not break DB fallback.
In
`@apps/commerce-api/src/test/java/com/loopers/domain/product/ProductReaderTest.java`:
- Around line 143-167: The test
ProductReaderTest::returnsCacheAfterLockWait_whenCachePopulatedByOtherThread is
overly coupled to implementation by stubbing sequential returns from
cacheRepository.get and checking lockCount(); replace it with a real concurrent
scenario: spawn two threads (ExecutorService/CountDownLatch) that call
productReader.readActiveProducts with the same key, make the first thread
simulate a slow cache miss that triggers productService to load and populate
cache (ProductIdPage), and ensure the second thread blocks until the cache is
populated and then returns the cached result; assert productService was called
exactly once and both threads receive the same product list, and remove
assertions that rely on lockCount() or get() call ordering.
In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/shared/cache/RedisCacheRepositoryIntegrationTest.java`:
- Around line 33-36: redisCleanUp.truncateAll() currently calls Redis FLUSHALL
and can delete other tests' data; change the cleanup to a DB-scoped operation
and update the test teardown: add a new RedisCleanUp.truncateCurrentDb() that
uses connection.serverCommands().flushDb() (or implement a scoped SCAN + DEL by
key prefix) and replace calls to RedisCleanUp.truncateAll() in
RedisCacheRepositoryIntegrationTest@tearDown with truncateCurrentDb(); also add
an isolation test that runs two test classes using different key prefixes and
asserts that after teardown the other class's keys remain intact.
In `@k6/product-detail-test.js`:
- Around line 6-8: Remove the hard-coded defaults for LOGIN_ID and LOGIN_PW (do
not fall back to 'loopers' values) and implement a pre-test authentication +
validation step that uses the provided seeded credentials to: 1) authenticate
(using the same auth flow your test uses) and 2) call the product-detail
endpoint for a known product to assert the seeded account returns liked=true; if
either auth or the liked check fails, abort the test run immediately. Also add a
small smoke check that attempts authentication with an intentionally-bad
credential and confirms the service does NOT return a 200 success (to guard
against silent guest passthrough). Reference the existing LOGIN_ID and LOGIN_PW
variables in k6/product-detail-test.js and perform these checks in the test
setup/init path before any load scenarios execute.
In `@k6/product-list-test.js`:
- Around line 24-33: The member scenarios are silently falling back to guest
behavior because AuthInterceptor (class AuthInterceptor, the optional-path auth
branch in authenticate/handleRequest) ignores authentication failures; update
AuthInterceptor so that requests matching member scenario paths (or bearing
Authorization header) perform strict pre-auth validation and return a 401/abort
when credentials are invalid instead of allowing processing as unauthenticated;
then update the k6 scenario definitions (SCENARIO_WEIGHTS and the
listBy*AsMember scenarios in k6/product-list-test.js) to include a negative test
case that supplies invalid credentials and asserts that those requests do not
count as member successes (e.g., expect non-200 or explicit auth failure),
ensuring member-weighted traffic cannot be silently converted to guest traffic.
In `@k6/run.sh`:
- Around line 3-8: The script currently trusts the SCRIPT value used in the path
(variable SCRIPT and the k6 run invocation), which allows typos and path
traversal; change it to only accept explicit names (use a case statement to
allow exactly "product-list" or "product-detail"), reject anything else by
printing the usage and exiting 1, then build the target path from that validated
name and check the file exists (test "k6/${SCRIPT}-test.js") before running k6;
if the file is missing also print usage/error and exit 1 so invalid args or
path-escaping inputs are caught early.
---
Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductDetailAssembler.java`:
- Around line 47-54: ProductDetailAssembler currently silently filters out
products whose brandId is missing by using brands.containsKey(...) which hides
data issues; change the assembly logic so that before excluding a product you
emit a WARN log (include product.getId() and product.getBrandId()) and increment
a monitoring metric (e.g., product.missing_brand.count) for each missing brand,
then continue to exclude the product; locate the stream that maps to
ProductDetail.from(...) and replace the simple filter with a step that checks
brands.containsKey(product.getBrandId()), logs via your app logger at WARN with
identifying fields, increments the metric counter, and only calls
ProductDetail.from(...) when the brand exists.
In `@apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java`:
- Around line 73-75: The adjust is directly mutating the entity field and can
diverge from the DB atomic updates in
ProductService.incrementLikeCount/ProductService.decrementLikeCount; update the
Product.adjustLikeCount method's Javadoc to state it is intended only for
cache/local entity refresh (not for authoritative DB updates), that it does not
provide concurrency or atomicity guarantees, and that callers are responsible
for ensuring consistency (or call the DB atomic update methods). If this method
should not be public, restrict visibility (e.g., make it
package-private/private) and document that change in the Javadoc as well.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheKey.java`:
- Around line 21-23: The CacheKey constructor CacheKey(String... prefixSegments)
currently creates an empty prefix when called with no arguments (e.g. new
CacheKey()), causing outputs like of(123) -> ":123"; add validation in the
CacheKey(String... prefixSegments) constructor to require at least one non-empty
segment (and non-null array), throwing an IllegalArgumentException with a clear
message if prefixSegments is null or length == 0 (or if all segments are empty),
so callers must supply at least one prefix segment; keep building the prefix
with String.join(DELIMITER, prefixSegments) only after this validation.
- Around line 31-37: CacheKey.of currently appends null segments as the literal
"null", causing collisions (e.g., of(null) == of("null")). Fix by adding
defensive null validation in CacheKey.of: for each segment call
Objects.requireNonNull(segment, "CacheKey.of: segment must not be null") (or
throw IllegalArgumentException) before appending (use existing DELIMITER and
prefix), and update the CacheKey.of Javadoc to state that null segments are not
allowed and will cause an exception.
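A minimal sketch of the suggested validation. The class below is a hypothetical reconstruction of `CacheKey` based only on this review's description (`DELIMITER`, `prefix`, and the `of` signature are assumptions, not the project's actual code):

```java
import java.util.Objects;

// Hypothetical reconstruction of CacheKey with the two validations the review asks for.
final class CacheKey {
    private static final String DELIMITER = ":";
    private final String prefix;

    CacheKey(String... prefixSegments) {
        // Reject null/empty prefixes so of(123) can never produce ":123".
        if (prefixSegments == null || prefixSegments.length == 0) {
            throw new IllegalArgumentException("CacheKey requires at least one prefix segment");
        }
        this.prefix = String.join(DELIMITER, prefixSegments);
    }

    /** Builds a key like "product:detail:123". Null segments are rejected. */
    String of(Object... segments) {
        StringBuilder key = new StringBuilder(prefix);
        for (Object segment : segments) {
            Objects.requireNonNull(segment, "CacheKey.of: segment must not be null");
            key.append(DELIMITER).append(segment);
        }
        return key.toString();
    }
}
```

With this shape, `new CacheKey()` and `of(null)` both fail fast instead of silently producing colliding keys.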
In
`@apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheRepository.java`:
- Around line 18-24: The put(String key, T value) method stores values without
TTL which can cause unbounded Redis memory growth; update the Javadoc for
CacheRepository.put to include a clear warning that entries are persisted
permanently until explicitly deleted, advise restricting its use to cases that
truly require permanent caching, and suggest preferring the TTL-based overloads
or providing guidance on responsible usage (e.g., call sites should ensure
explicit eviction) so reviewers can spot risky uses.
In
`@apps/commerce-api/src/test/java/com/loopers/domain/product/ProductWriterTest.java`:
- Around line 95-111: The test currently only verifies that increaseLikeCount,
getActiveProduct, and cacheRepository.put were called, but not their order;
update ProductWriterTest to assert call order using Mockito's InOrder: create
InOrder(increase/decrease test) for productService and cacheRepository and
verify that productService.increaseLikeCount(productId) (or decrease), then
productService.getActiveProduct(productId), then cacheRepository.put(detailKey,
product, any(Duration.class)) are called in that exact sequence; add a separate
ordered-verification for the decrease test (the one around lines 118-134) so
both increase and decrease flows fail if the implementation orders calls
incorrectly.
- Around line 33-135: Tests in ProductWriterTest lack negative/exception flows;
add unit tests that simulate exceptions from productService and verify
cacheRepository is not called and the exception propagates. Specifically: in
ProductWriterTest add tests for increaseLikeCount and decreaseLikeCount where
given(productService.getActiveProduct(productId)).willThrow(...) and
assertThrows for productWriter.increaseLikeCount/decreaseLikeCount and then
verify cacheRepository.put is never invoked; add a test for update where
given(productService.update(modifyProduct)).willThrow(...) and assertThrows on
productWriter.update(modifyProduct) and verify cacheRepository.put is not
called; add a test for delete where
given(productService.delete(productId)).willThrow(...) and assertThrows on
productWriter.delete(productId) and verify no cacheRepository.evictions occur;
use the existing Mockito patterns (given(...).willThrow(...), assertThrows,
then(cacheRepository).shouldHaveNoInteractions() or should(never()) checks) and
reference ProductWriterTest, productWriter, productService, and cacheRepository
to locate where to add these tests.
In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/shared/cache/RedisCacheRepositoryIntegrationTest.java`:
- Around line 96-103: The test returnsNull_whenTtlExpired is flaky because it
uses a fixed Thread.sleep(1500); replace that wait with a polling loop that
repeatedly calls cacheRepository.get(key, STRING_TYPE) until it returns null or
a configurable timeout elapses (e.g., 2–3 seconds) to avoid
environment-dependent timing; update the assertion to fail only after the
timeout, and annotate the test with `@RepeatedTest` (e.g., repeat a few times) to
validate stability; apply the same polling approach to the other occurrence
around Thread.sleep in this test class.
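The polling idea can be sketched with plain JDK utilities. The supplier below stands in for the `cacheRepository.get(key, STRING_TYPE)` call, so the helper name and signature are illustrative assumptions, not the project's API:

```java
import java.time.Duration;
import java.util.function.Supplier;

// Polls a read until it returns null (entry expired) or the timeout elapses.
final class CachePolling {
    static boolean awaitNull(Supplier<Object> read, Duration timeout, Duration interval)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (System.nanoTime() < deadline) {
            if (read.get() == null) {
                return true;                // expired within the window
            }
            Thread.sleep(interval.toMillis());
        }
        return read.get() == null;          // one final check before failing
    }
}
```

The test then asserts on the boolean, failing only after the configurable timeout rather than on a fixed `Thread.sleep(1500)`.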
In `@apps/commerce-api/src/test/java/com/loopers/support/BaseE2ETest.java`:
- Around line 26-32: Update the Javadoc on the BaseE2ETest class to state that
both the database and Redis are cleared after each test (reflecting that
cleanUp() calls databaseCleanUp.truncateAll() and redisCleanUp.truncateAll()),
and add a safety guard so RedisCleanUp.truncateAll() only runs in a test profile
or when a specific environment flag is set; either check the active Spring
profile (e.g., "test") or an environment variable before invoking
RedisCleanUp.truncateAll(), or alter RedisCleanUp to refuse to run FLUSHALL
unless the safe test flag/profile is present.
In `@k6/product-list-test.js`:
- Around line 18-20: The test currently fixes PAGE = 0 which prevents verifying
multi-page caching behavior; replace the fixed PAGE constant with a page
selection function (e.g., getPage or choosePage) that returns a weighted-random
page: mostly 0–2 (with configurable weights for 0,1,2) and some fraction mapping
to >2 (e.g., returns 3+). Use that function wherever PAGE was used (reference
PAGE constant in k6/product-list-test.js and the repeated usages around the
lines corresponding to 79-84) and attach a pageBucket tag/value to each request
(values "0","1","2",">2") so metrics (p95, error rate) can be grouped per
bucket; ensure the query param uses the dynamic page and add measurement splits
for p95 and error-rate by this pageBucket tag.
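Although the k6 script itself is JavaScript, the weighted page-selection logic can be sketched in Java; the weights and page range below are illustrative assumptions, not values from the PR:

```java
import java.util.concurrent.ThreadLocalRandom;

// Weighted page chooser mirroring the suggested buckets: mostly pages 0-2, some deeper pages.
final class PageChooser {
    // Assumed weights: 50% page 0, 20% page 1, 15% page 2, 15% pages 3+.
    static int choosePage() {
        double r = ThreadLocalRandom.current().nextDouble();
        if (r < 0.50) return 0;
        if (r < 0.70) return 1;
        if (r < 0.85) return 2;
        return 3 + ThreadLocalRandom.current().nextInt(8);   // pages 3-10
    }

    // Tag value for grouping p95 / error-rate metrics per bucket.
    static String pageBucket(int page) {
        return page <= 2 ? String.valueOf(page) : ">2";
    }
}
```

In the k6 script the same two functions would feed the `page` query parameter and a `pageBucket` request tag.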
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b93b0554-7ac0-4fe1-956b-68808ef9d570
⛔ Files ignored due to path filters (3)
.docs/performance/performance-base.md is excluded by !**/*.md and included by **
.docs/performance/performance-report-cache.md is excluded by !**/*.md and included by **
.docs/performance/performance-report-index.md is excluded by !**/*.md and included by **
📒 Files selected for processing (28)
.gitignore
apps/commerce-api/src/main/java/com/loopers/application/like/LikeProductUseCase.java
apps/commerce-api/src/main/java/com/loopers/application/like/UnlikeProductUseCase.java
apps/commerce-api/src/main/java/com/loopers/application/product/DeleteProductUseCase.java
apps/commerce-api/src/main/java/com/loopers/application/product/ProductDetailAssembler.java
apps/commerce-api/src/main/java/com/loopers/application/product/ReadActiveProductDetailUseCase.java
apps/commerce-api/src/main/java/com/loopers/application/product/ReadActiveProductsUseCase.java
apps/commerce-api/src/main/java/com/loopers/application/product/UpdateProductUseCase.java
apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductCacheConstants.java
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductReader.java
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductSortType.java
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductWriter.java
apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheKey.java
apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheRepository.java
apps/commerce-api/src/main/java/com/loopers/domain/shared/cache/CacheType.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/shared/cache/RedisCacheRepository.java
apps/commerce-api/src/test/java/com/loopers/domain/product/ProductFixture.java
apps/commerce-api/src/test/java/com/loopers/domain/product/ProductReaderTest.java
apps/commerce-api/src/test/java/com/loopers/domain/product/ProductWriterTest.java
apps/commerce-api/src/test/java/com/loopers/domain/shared/cache/CacheKeyTest.java
apps/commerce-api/src/test/java/com/loopers/infrastructure/shared/cache/RedisCacheRepositoryIntegrationTest.java
apps/commerce-api/src/test/java/com/loopers/support/BaseE2ETest.java
k6/product-detail-test.js
k6/product-list-test.js
k6/run.sh
modules/redis/src/testFixtures/java/com/loopers/testcontainers/RedisTestContainersConfig.java
💤 Files with no reviewable changes (1)
- modules/redis/src/testFixtures/java/com/loopers/testcontainers/RedisTestContainersConfig.java
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductCacheConstants.java
| private void releaseLock(String key, ReentrantLock lock) { | ||
| lock.unlock(); | ||
| if (!lock.hasQueuedThreads()) { | ||
| locks.remove(key, lock); | ||
| } |
There was a problem hiding this comment.
The lock-removal logic allows different lock instances for the same key.
Operational impact: if locks.remove(key, lock) runs after unlock() based only on hasQueuedThreads(), the map entry can be deleted even when another thread has already reacquired the existing lock in between. A third thread then creates a new lock and enters the cache-rebuild section for the same key concurrently, so stampede prevention breaks and duplicate DB reads occur.
Fix: stop the current immediate removal and replace it with a reference-counted lock holder or an expiring keyed lock. As a minimal interim measure, not removing the entry in releaseLock() is safer than the current implementation.
Additional test: race three threads on the same key and verify that productService is called exactly once and that no new lock is created in the lock map.
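The reference-counted holder idea can be sketched with JDK primitives alone; names below are illustrative, and per-key atomicity comes from `ConcurrentHashMap.compute`:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Reference-counted keyed lock: the map entry is removed only when no thread references it.
final class KeyedLockRegistry {
    private static final class Holder {
        final ReentrantLock lock = new ReentrantLock();
        int refs;   // mutated only inside compute(), which is atomic per key
    }

    private final ConcurrentHashMap<String, Holder> locks = new ConcurrentHashMap<>();

    ReentrantLock acquire(String key) {
        Holder holder = locks.compute(key, (k, h) -> {
            if (h == null) h = new Holder();
            h.refs++;                        // register interest before blocking
            return h;
        });
        holder.lock.lock();
        return holder.lock;
    }

    void release(String key) {
        locks.compute(key, (k, h) -> {
            h.lock.unlock();
            h.refs--;
            return h.refs == 0 ? null : h;   // remove only when nobody references it
        });
    }
}
```

Because the reference count is incremented before a thread ever blocks on the lock, the entry can never be removed while another thread still holds or waits for it, which closes the race described above.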
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductReader.java`
around lines 202 - 206, The current releaseLock method (releaseLock) unlocks the
ReentrantLock and then conditionally removes it from the locks map using
lock.hasQueuedThreads(), which can remove the map entry for the same key if
another thread reacquired a new lock instance in between; change this by
stopping immediate removal from the locks map (do not call locks.remove(key,
lock) in releaseLock) or replace the map value with a reference-counted lock
holder or an expiry-based keyed-lock implementation so entries are removed
safely; update releaseLock/ReentrantLock usage accordingly and add a concurrency
test that races three threads on the same key to assert productService is
invoked exactly once and no extra lock entry is created during the race.
| @Transactional | ||
| public void update(ModifyProduct product) { | ||
| public Product update(ModifyProduct product) { | ||
| Product entity = productRepository.findById(product.productId()) | ||
| .orElseThrow(() -> new CoreException(ErrorType.PRODUCT_NOT_FOUND)); | ||
| entity.update(product); | ||
| return entity; | ||
| } |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check the transaction annotations on ProductWriter
ast-grep --pattern $'class ProductWriter {
$$$
update($$$) {
$$$
}
$$$
}'
rg -n "@Transactional" --glob "*ProductWriter*" -B5 -A10
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 67
🏁 Script executed:
# Find the ProductWriter file
fd -t f "ProductWriter" --glob "**/*.java"
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 181
🏁 Script executed:
# Inspect the full ProductService file to see how ProductWriter is used
find . -name "ProductService.java" -type f | head -1 | xargs wc -l
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 169
🏁 Script executed:
# Search for code that imports or references ProductWriter
rg "ProductWriter" --glob "**/*.java" -B3 -A3
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 9165
🏁 Script executed:
cat -n apps/commerce-api/src/main/java/com/loopers/domain/product/ProductWriter.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2825
🏁 Script executed:
cat -n apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 7367
ProductWriter.update() runs without @Transactional, so the cache write happens outside the transaction boundary.
ProductService.update() is annotated with @Transactional, so the transaction commits when that method returns. cacheRepository.put() then runs in ProductWriter.update(), outside the transaction boundary. If the cache write fails, the DB change is already committed and cannot be rolled back, which can leave the DB and cache inconsistent.
Add @Transactional to ProductWriter.update(), or add explicit error handling for cache-write failures so the condition is detectable in production.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java`
around lines 112 - 118, ProductWriter.update() is performing
cacheRepository.put() outside the transaction started in
ProductService.update(), risking DB/ cache inconsistency; fix by making the
cache write participate in the same transaction (add `@Transactional` on
ProductWriter.update() or ensure it is invoked on a transactional proxy) or, if
keeping it non-transactional, add explicit error handling: catch cache write
failures in ProductWriter.update(), log detailed error and rethrow a runtime
exception so the outer transaction can roll back, or implement a compensating
retry/alert mechanism; reference ProductService.update, ProductWriter.update,
cacheRepository.put and the `@Transactional` annotation when making the change.
| PRICE_ASC(Sort.by(Sort.Direction.ASC, "price.amount")), | ||
| LIKE_COUNT_DESC(Sort.by(Sort.Direction.DESC, "likeCount")); |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check ProductRepository's pagination queries
rg -n "findAllActiveProducts|Slice<Product>" --glob "*ProductRepository*" -A10
# Check whether tests exist to verify the actual generated queries
rg -n "ProductSortType" --glob "*Test*.java" -B2 -A5
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 12820
🏁 Script executed:
# Inspect the full contents of ProductSortType.java
cat -n apps/commerce-api/src/main/java/com/loopers/domain/product/ProductSortType.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1206
🏁 Script executed:
# Check the index definitions on the Product entity
rg -n "@Index|@Table" apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java -A5
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 773
🏁 Script executed:
# Check the pre-change code (git diff)
git diff HEAD~1 apps/commerce-api/src/main/java/com/loopers/domain/product/ProductSortType.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 273
🏁 Script executed:
# Check whether tests cover identical price / like-count cases
rg -n "PRICE_ASC|LIKE_COUNT_DESC" apps/commerce-api/src/test -B2 -A8
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 6362
Pagination results become non-deterministic when products share the same value, allowing duplicates or omissions.
Pagination must guarantee a stable (consistent) ordering of the sorted result. PRICE_ASC and LIKE_COUNT_DESC currently sort by a single key, so the relative order of products with the same price or the same like count is left to DB internals. The same page request can therefore return different results each time, and paging forward can repeat some products or skip others.
Solution: add id (or createdAt) as a secondary sort key to each sort type to guarantee a stable order across the whole result set.
PRICE_ASC(Sort.by(Sort.Direction.ASC, "price.amount").and(Sort.by(Sort.Direction.ASC, "id"))),
LIKE_COUNT_DESC(Sort.by(Sort.Direction.DESC, "likeCount").and(Sort.by(Sort.Direction.ASC, "id")))
Add a test scenario that pages through multiple products with identical values and confirms no data is duplicated or missing across pages.
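The tie-breaking effect can be illustrated with plain comparators; the record below is a stand-in for the entity, not the project's actual class:

```java
import java.util.Comparator;
import java.util.List;

// Stand-in for the entity: only the fields relevant to sorting.
record Row(long id, long price) {}

final class StableSortDemo {
    // Price ascending with id ascending as a deterministic tie-breaker,
    // mirroring Sort.by("price.amount").and(Sort.by("id")).
    static List<Row> sortByPriceThenId(List<Row> rows) {
        return rows.stream()
                .sorted(Comparator.comparingLong(Row::price).thenComparingLong(Row::id))
                .toList();
    }
}
```

With the secondary key, two rows with the same price always appear in the same relative order, so page boundaries are stable across requests.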
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductSortType.java`
around lines 16 - 17, The current ProductSortType enum entries PRICE_ASC and
LIKE_COUNT_DESC use single-field Sorts causing unstable pagination when values
tie; update PRICE_ASC to append a secondary Sort by id (or createdAt) ascending
and update LIKE_COUNT_DESC to append a secondary Sort by id (ascending) so ties
are deterministic (e.g., use Sort.by(...).and(Sort.by(Sort.Direction.ASC, "id"))
for PRICE_ASC and Sort.by(...).and(Sort.by(Sort.Direction.ASC, "id")) for
LIKE_COUNT_DESC); also add a pagination test that creates multiple products with
identical price/likeCount and verifies no duplicates or omissions across pages.
| public void increaseLikeCount(Long productId) { | ||
| productService.increaseLikeCount(productId); | ||
| refreshCache(productId); | ||
| } | ||
| /** | ||
| * 상품의 좋아요 수를 1 감소시키고, 상세 캐시에 Write-Through한다. | ||
| * | ||
| * @param productId 상품 ID | ||
| */ | ||
| public void decreaseLikeCount(Long productId) { | ||
| productService.decreaseLikeCount(productId); | ||
| refreshCache(productId); |
The list cache must also be invalidated after like-count changes.
Operational impact: only the detail cache is currently refreshed, so the ID pages of the like_count-sorted list stay stale for the TTL. The detail like count and the list ordering then diverge, and the popularity API returns a response that differs from the actual ranking.
Fix: increaseLikeCount() and decreaseLikeCount() should also evict at least the popularity list cache, or, if implementation simplicity is preferred, the whole LIST_KEY.pattern().
Additional test: preload the popularity list cache, call like increase/decrease, and verify the list cache is evicted and a subsequent read reflects the changed ordering.
Example fix
public void increaseLikeCount(Long productId) {
productService.increaseLikeCount(productId);
+ cacheRepository.evict(LIST_KEY.pattern());
refreshCache(productId);
}
public void decreaseLikeCount(Long productId) {
productService.decreaseLikeCount(productId);
+ cacheRepository.evict(LIST_KEY.pattern());
refreshCache(productId);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductWriter.java`
around lines 58 - 70, Both increaseLikeCount(Long) and decreaseLikeCount(Long)
currently only call refreshCache(productId) which updates the product detail
cache but leaves the popularity-sorted list cache stale; modify these methods
(increaseLikeCount and decreaseLikeCount in ProductWriter) to also evict the
list cache by calling the cache eviction for the popularity list (e.g., evict
LIST_KEY.pattern() or the method that clears list keys) after updating counts,
so the popularity list is invalidated on like changes; update or add a
unit/integration test that preloads the popularity list cache, calls
increaseLikeCount/decreaseLikeCount, asserts the list cache was evicted, and
that a subsequent list fetch reflects the updated ordering.
| @DisplayName("목록 캐시 MISS + 락 대기 후 캐시 HIT이면, DB를 호출하지 않고 캐시에서 반환한다.") | ||
| @Test | ||
| void returnsCacheAfterLockWait_whenCachePopulatedByOtherThread() { | ||
| // arrange | ||
| var pageSize = new PageSize(0, 20); | ||
| var product1 = ProductFixture.createProduct(1L); | ||
| var idPage = new ProductReader.ProductIdPage(List.of(1L), false); | ||
| given(cacheRepository.get(anyString(), any(CacheType.class))) | ||
| .willReturn(null) | ||
| .willReturn(idPage); | ||
| given(cacheRepository.multiGet(anyList(), any(CacheType.class))) | ||
| .willReturn(List.of(product1)); | ||
| // act | ||
| var result = productReader.readActiveProducts(null, ProductSortType.DEFAULT, pageSize); | ||
| // assert | ||
| assertAll( | ||
| () -> assertThat(result.content()).containsExactly(product1), | ||
| () -> assertThat(result.hasNext()).isFalse() | ||
| ); | ||
| then(productService).shouldHaveNoInteractions(); | ||
| then(cacheRepository).should(times(2)).get(anyString(), any(CacheType.class)); | ||
| } |
The lock-related tests are overly coupled to implementation details and do not reproduce real contention.
Operational impact: the current tests only vary get() return values sequentially within a single thread, or check lockCount()==0, so they cannot verify waiting, re-checking, and duplicate-load prevention under actual concurrent requests. In this state a lock-cleanup regression would still pass, while a safer keyed-lock implementation could be blocked during an otherwise valid refactoring.
Fix: use a CountDownLatch or ExecutorService so that while the first call occupies the critical section, subsequent calls for the same key run on separate threads, and verify productService invocation counts and response consistency instead of lockCount().
Additional test: run 2-3 concurrent requests for the same key and verify productService is called exactly once and the later calls resolve to the cached value.
Also applies to: 190-246, 288-307
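The suggested race can be sketched with JDK concurrency primitives alone. The probe below stands in for the productService/cache path (all names are illustrative), using `computeIfAbsent` to model the single-flight behavior the real keyed lock aims for:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Simulates N concurrent readers of the same key: the first miss loads, the rest reuse the cache.
final class StampedeProbe {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger dbCalls = new AtomicInteger();

    String read(String key) {
        // computeIfAbsent runs the loader at most once per key under contention.
        return cache.computeIfAbsent(key, k -> {
            dbCalls.incrementAndGet();      // stands in for the productService DB call
            return "page-for-" + k;
        });
    }
}
```

A real test would start several threads behind a latch, call the reader with the same key, and then assert the loader ran exactly once and every thread got the same result.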
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/test/java/com/loopers/domain/product/ProductReaderTest.java`
around lines 143 - 167, The test
ProductReaderTest::returnsCacheAfterLockWait_whenCachePopulatedByOtherThread is
overly coupled to implementation by stubbing sequential returns from
cacheRepository.get and checking lockCount(); replace it with a real concurrent
scenario: spawn two threads (ExecutorService/CountDownLatch) that call
productReader.readActiveProducts with the same key, make the first thread
simulate a slow cache miss that triggers productService to load and populate
cache (ProductIdPage), and ensure the second thread blocks until the cache is
populated and then returns the cached result; assert productService was called
exactly once and both threads receive the same product list, and remove
assertions that rely on lockCount() or get() call ordering.
| @AfterEach | ||
| void tearDown() { | ||
| redisCleanUp.truncateAll(); | ||
| } |
flushAll-based cleanup breaks test isolation.
redisCleanUp.truncateAll() on line 35 calls Redis flushAll() in modules/redis/src/testFixtures/java/com/loopers/utils/RedisCleanUp.java, so it can also delete data from other tests sharing the same Redis instance, causing flaky failures in parallel CI runs.
Fix: either use a dedicated test DB and flushDb(), or limit the cleanup scope with a test-key-prefix-based SCAN + DEL.
🔧 Suggested diff
`@AfterEach`
void tearDown() {
- redisCleanUp.truncateAll();
+ redisCleanUp.truncateCurrentDb();
}
// modules/redis/src/testFixtures/java/com/loopers/utils/RedisCleanUp.java
public void truncateCurrentDb() {
try (RedisConnection connection = redisConnectionFactory.getConnection()) {
connection.serverCommands().flushDb();
}
}
As an additional test, add an isolation check verifying that when two test classes using different key prefixes share the same Redis, each class's keys survive the other's teardown.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/shared/cache/RedisCacheRepositoryIntegrationTest.java`
around lines 33 - 36, redisCleanUp.truncateAll() currently calls Redis FLUSHALL
and can delete other tests' data; change the cleanup to a DB-scoped operation
and update the test teardown: add a new RedisCleanUp.truncateCurrentDb() that
uses connection.serverCommands().flushDb() (or implement a scoped SCAN + DEL by
key prefix) and replace calls to RedisCleanUp.truncateAll() in
RedisCacheRepositoryIntegrationTest@tearDown with truncateCurrentDb(); also add
an isolation test that runs two test classes using different key prefixes and
asserts that after teardown the other class's keys remain intact.
| const LOGIN_ID = __ENV.LOGIN_ID || 'loopers'; | ||
| const LOGIN_PW = __ENV.LOGIN_PW || 'loopersloopers'; | ||
|
|
Authentication failures are counted as guest successes, skewing the detail load profile
apps/commerce-api/src/main/java/com/loopers/interfaces/api/auth/AuthInterceptor.java:85-105 passes login failures on the optional path through as guest without raising an exception. This script supplies default credentials but only checks for 200, so a missing seeded account or a password mismatch silently switches the entire detail load to the guest path while still being counted as success. Operationally this yields optimistic numbers that omit the per-member like-lookup cost. Remove the default credentials, verify the seeded member account and a member-only condition before starting the load, and abort immediately on failure; pinning an account that must return liked=true for a specific product is a safe way to do this. As an additional test, add a smoke check that the script does not proceed as a 200 success when given invalid credentials.
Also applies to: 39-45, 62-65
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@k6/product-detail-test.js` around lines 6 - 8, Remove the hard-coded defaults
for LOGIN_ID and LOGIN_PW (do not fall back to 'loopers' values) and implement a
pre-test authentication + validation step that uses the provided seeded
credentials to: 1) authenticate (using the same auth flow your test uses) and 2)
call the product-detail endpoint for a known product to assert the seeded
account returns liked=true; if either auth or the liked check fails, abort the
test run immediately. Also add a small smoke check that attempts authentication
with an intentionally-bad credential and confirms the service does NOT return a
200 success (to guard against silent guest passthrough). Reference the existing
LOGIN_ID and LOGIN_PW variables in k6/product-detail-test.js and perform these
checks in the test setup/init path before any load scenarios execute.
| const SCENARIO_WEIGHTS = { | ||
| listByLikesAsGuest: 0.20, // 좋아요순 · 비회원 | ||
| listByLikesAsMember: 0.15, // 좋아요순 · 회원 | ||
| listByLatestAsGuest: 0.15, // 최신순 · 비회원 | ||
| listByLatestAsMember: 0.10, // 최신순 · 회원 | ||
| listByPriceAsGuest: 0.10, // 가격순 · 비회원 | ||
| listByPriceAsMember: 0.05, // 가격순 · 회원 | ||
| listByBrandLikesAsGuest: 0.15, // 브랜드+좋아요순 · 비회원 | ||
| listByBrandLikesAsMember: 0.10, // 브랜드+좋아요순 · 회원 | ||
| }; |
40% of member scenarios can silently turn into guest successes
apps/commerce-api/src/main/java/com/loopers/interfaces/api/auth/AuthInterceptor.java:85-105 ignores authentication failures on the optional path. The member scenarios only add a header and check for 200, so with wrong credentials 40% of the weighted traffic is quietly replaced by guest list reads. Operationally this distorts the member/guest cache-access mix and the cost of computing liked flags. Instead of relying on default credentials, member scenarios should abort immediately when pre-auth validation fails. As an additional test, verify that member scenarios are not counted as successes when given invalid login values.
Also applies to: 62-68, 79-84
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@k6/product-list-test.js` around lines 24 - 33, The member scenarios are
silently falling back to guest behavior because AuthInterceptor (class
AuthInterceptor, the optional-path auth branch in authenticate/handleRequest)
ignores authentication failures; update AuthInterceptor so that requests
matching member scenario paths (or bearing Authorization header) perform strict
pre-auth validation and return a 401/abort when credentials are invalid instead
of allowing processing as unauthenticated; then update the k6 scenario
definitions (SCENARIO_WEIGHTS and the listBy*AsMember scenarios in
k6/product-list-test.js) to include a negative test case that supplies invalid
credentials and asserts that those requests do not count as member successes
(e.g., expect non-200 or explicit auth failure), ensuring member-weighted
traffic cannot be silently converted to guest traffic.
| SCRIPT="${1:?Usage: ./k6/run.sh <product-list|product-detail> [k6 options...]}" | ||
| shift | ||
|
|
||
| K6_WEB_DASHBOARD=true \ | ||
| K6_WEB_DASHBOARD_EXPORT="k6/${SCRIPT}-report.html" \ | ||
| k6 run "k6/${SCRIPT}-test.js" "$@" |
Accepting only an explicit allowlist of script names is safer
SCRIPT is interpolated into the path as-is, so typos surface late as file-not-found, and values like ../... can target files outside the intended k6/ directory. A production runner should reject bad input early: allow only product-list|product-detail via a case statement and check that the target file exists first. As an additional test, verify that an invalid argument and a path-escaping argument each exit 1 with the usage message.
Example patch
SCRIPT="${1:?Usage: ./k6/run.sh <product-list|product-detail> [k6 options...]}"
+case "$SCRIPT" in
+ product-list|product-detail) ;;
+ *)
+ echo "Usage: ./k6/run.sh <product-list|product-detail> [k6 options...]" >&2
+ exit 1
+ ;;
+esac
+[ -f "k6/${SCRIPT}-test.js" ] || {
+ echo "Missing test script: k6/${SCRIPT}-test.js" >&2
+ exit 1
+}
shift
K6_WEB_DASHBOARD=true \
K6_WEB_DASHBOARD_EXPORT="k6/${SCRIPT}-report.html" \
k6 run "k6/${SCRIPT}-test.js" "$@"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| SCRIPT="${1:?Usage: ./k6/run.sh <product-list|product-detail> [k6 options...]}" | |
| shift | |
| K6_WEB_DASHBOARD=true \ | |
| K6_WEB_DASHBOARD_EXPORT="k6/${SCRIPT}-report.html" \ | |
| k6 run "k6/${SCRIPT}-test.js" "$@" | |
| SCRIPT="${1:?Usage: ./k6/run.sh <product-list|product-detail> [k6 options...]}" | |
| case "$SCRIPT" in | |
| product-list|product-detail) ;; | |
| *) | |
| echo "Usage: ./k6/run.sh <product-list|product-detail> [k6 options...]" >&2 | |
| exit 1 | |
| ;; | |
| esac | |
| [ -f "k6/${SCRIPT}-test.js" ] || { | |
| echo "Missing test script: k6/${SCRIPT}-test.js" >&2 | |
| exit 1 | |
| } | |
| shift | |
| K6_WEB_DASHBOARD=true \ | |
| K6_WEB_DASHBOARD_EXPORT="k6/${SCRIPT}-report.html" \ | |
| k6 run "k6/${SCRIPT}-test.js" "$@" |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@k6/run.sh` around lines 3 - 8, The script currently trusts the SCRIPT value
used in the path (variable SCRIPT and the k6 run invocation), which allows typos
and path traversal; change it to only accept explicit names (use a case
statement to allow exactly "product-list" or "product-detail"), reject anything
else by printing the usage and exiting 1, then build the target path from that
validated name and check the file exists (test "k6/${SCRIPT}-test.js") before
running k6; if the file is missing also print usage/error and exit 1 so invalid
args or path-escaping inputs are caught early.
— Specify UTF-8 encoding, batch-delete on SCAN evict Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
— Resolve to DEFAULT at the entry point Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
— Query the DB directly via early return Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
♻️ Duplicate comments (3)
apps/commerce-api/src/test/java/com/loopers/infrastructure/shared/cache/RedisCacheRepositoryIntegrationTest.java (1)
34-37: ⚠️ Potential issue | 🟠 Major
truncateAll() cleanup can break test isolation, so its scope should be narrowed. Operationally, a global flush during parallel CI runs deletes unrelated test data and causes flaky failures.
Fix: limit the teardown to DB-scoped cleanup (flushDb) or a test-key-prefix-based SCAN + DEL.
Additional test: with two tests using different prefixes sharing the same Redis, verify that after one test's teardown the other's keys remain.
🔧 Suggested diff
@AfterEach
void tearDown() {
-    redisCleanUp.truncateAll();
+    redisCleanUp.truncateCurrentDb();
}
#!/bin/bash
set -euo pipefail
TEST_FILE="$(fd RedisCacheRepositoryIntegrationTest.java | head -n 1)"
CLEANUP_FILE="$(fd RedisCleanUp.java | head -n 1)"
echo "[target] $TEST_FILE"
echo "[target] $CLEANUP_FILE"
echo "[1] Check which method the teardown calls"
rg -n "truncateAll|truncateCurrentDb" "$TEST_FILE" -C2
echo "[2] Check the flush scope in the RedisCleanUp implementation"
rg -n "flushAll|flushDb|SCAN|DEL|truncateAll|truncateCurrentDb" "$CLEANUP_FILE" -C3
echo "[expected]"
echo "- The test should call DB/prefix-scoped deletion, not a global flush."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/infrastructure/shared/cache/RedisCacheRepositoryIntegrationTest.java` around lines 34 - 37, The test teardown in RedisCacheRepositoryIntegrationTest currently calls redisCleanUp.truncateAll(), which performs a global flush and can break test isolation in parallel CI; change the teardown to call a scoped cleanup method (e.g., redisCleanUp.flushDb() or redisCleanUp.truncateWithPrefix(testPrefix)) and update RedisCleanUp to implement DB-scoped flush (FLUSHDB) or key-scoped removal using SCAN + DEL for the given test key prefix; also add/adjust a test that uses two different prefixes to assert isolation (one test's teardown does not remove the other's keys).apps/commerce-api/src/main/java/com/loopers/domain/product/ProductReader.java (1)
207-211: ⚠️ Potential issue | 🔴 Critical
Removing the map entry right after releasing the lock allows duplicate locks to be created for the same key.
Operationally, removing via hasQueuedThreads() after unlock() can drop the map entry at the very moment another thread has already acquired the existing lock, so a new lock gets created for the same key and the cache-rebuild section runs more than once. This is a critical race condition that defeats the stampede prevention.
The fix is to stop the immediate removal (the minimal measure) and, if cleanup is needed, do it safely with reference counting or an expiry-based keyed lock.
An additional test should race three threads on the same key and verify that productService is invoked exactly once and that no new lock instance is created in the lock map.
🔧 Minimal safe-measure diff
```diff
 private void releaseLock(String key, ReentrantLock lock) {
     lock.unlock();
-    if (!lock.hasQueuedThreads()) {
-        locks.remove(key, lock);
-    }
 }
```

```bash
#!/bin/bash
set -euo pipefail
READER_FILE="$(fd ProductReader.java | head -n 1)"
TEST_FILE="$(fd ProductReaderTest.java | head -n 1)"
echo "[target] $READER_FILE"
rg -n "releaseLock|hasQueuedThreads|locks.remove|tryLock\\(" "$READER_FILE" -C3
echo "[target] $TEST_FILE"
rg -n "CountDownLatch|ExecutorService|concurrent|same key|lockCount|readActiveProduct|readActiveProducts" "$TEST_FILE" -C3 || true
echo "[expected]"
echo "- releaseLock should contain no immediate remove logic."
echo "- A same-key concurrency contention test should exist."
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductReader.java` around lines 207 - 211, In ProductReader, stop removing the ReentrantLock from locks immediately after lock.unlock() in releaseLock (remove the locks.remove(key, lock) call as a minimum); instead implement safe cleanup (preferably a reference-counted wrapper or scheduled/TTL eviction for keyed locks) so that once a thread unlocks the same lock instance cannot be spuriously removed while another thread may already hold it; update releaseLock to decrement a reference count or skip immediate removal and only remove when refcount==0 or via expiration. Add a concurrency test in ProductReaderTest that races three threads against the same key (use CountDownLatch and ExecutorService) and assert productService is invoked exactly once and that the locks map does not create a new Lock instance for the same key during the contention window to verify stampede prevention.
apps/commerce-api/src/main/java/com/loopers/infrastructure/shared/cache/RedisCacheRepository.java (1)
57-77: ⚠️ Potential issue | 🔴 Critical
Redis I/O exceptions should be downgraded to cache misses / no-ops.
Operationally, when a Redis connection error or timeout occurs, the current implementation lets the RuntimeException propagate upward, so read/write APIs can fail with 500s. A secondary store's failure propagates into a core-path failure.
The fix is to wrap the Redis call sections of put/get/multiGet/multiPut/evict in catch (RuntimeException e): get returns null, multiGet returns a list of nulls preserving the input keys' order and length, and put/multiPut/evict log and no-op.
An additional test should verify that when RedisTemplate throws a RuntimeException in each method, reads fall back to the DB and writes still complete as business successes.
🔧 Suggested diff
```diff
@@ public <T> void put(String key, T value) {
     try {
         String json = objectMapper.writeValueAsString(value);
         redisTemplate.opsForValue().set(key, json);
         log.debug("Cache PUT — key={}", key);
     } catch (JsonProcessingException e) {
         log.warn("cache serialization failed, key={}", key, e);
+    } catch (RuntimeException e) {
+        log.warn("cache put failed (downgraded), key={}", key, e);
     }
 }
@@ public <T> T get(String key, CacheType<T> type) {
-    String json = redisTemplate.opsForValue().get(key);
+    final String json;
+    try {
+        json = redisTemplate.opsForValue().get(key);
+    } catch (RuntimeException e) {
+        log.warn("cache get failed (downgraded), key={}", key, e);
+        return null;
+    }
@@
     } catch (JsonProcessingException e) {
         log.warn("cache deserialization failed, key={}", key, e);
         redisTemplate.delete(key);
         return null;
+    } catch (RuntimeException e) {
+        log.warn("cache get failed (downgraded), key={}", key, e);
+        return null;
     }
 }
@@ public <T> List<T> multiGet(List<String> keys, CacheType<T> type) {
@@
-    List<String> jsonList = redisTemplate.opsForValue().multiGet(keys);
+    final List<String> jsonList;
+    try {
+        jsonList = redisTemplate.opsForValue().multiGet(keys);
+    } catch (RuntimeException e) {
+        log.warn("cache multiGet failed (downgraded), keys={}", keys.size(), e);
+        return new ArrayList<>(Collections.nCopies(keys.size(), null));
+    }
@@ public <T> void multiPut(Map<String, T> entries, Supplier<Duration> ttlSupplier) {
@@
-    redisTemplate.executePipelined((RedisConnection connection) -> {
+    try {
+        redisTemplate.executePipelined((RedisConnection connection) -> {
@@
-        return null;
-    });
+            return null;
+        });
+    } catch (RuntimeException e) {
+        log.warn("cache multiPut failed (downgraded), keys={}", entries.size(), e);
+        return;
+    }
@@ public void evict(String keyPattern) {
+    try {
@@
-    if (totalDeleted > 0) {
-        log.debug("Cache EVICT — pattern={}, deletedKeys={}", keyPattern, totalDeleted);
-    }
+        if (totalDeleted > 0) {
+            log.debug("Cache EVICT — pattern={}, deletedKeys={}", keyPattern, totalDeleted);
+        }
+    } catch (RuntimeException e) {
+        log.warn("cache evict failed (downgraded), pattern={}", keyPattern, e);
+    }
 }
```

```bash
#!/bin/bash
set -euo pipefail
FILE="$(fd RedisCacheRepository.java | head -n 1)"
echo "[target] $FILE"
echo "[1] check Redis call sites"
rg -n "opsForValue\\(\\)\\.set|opsForValue\\(\\)\\.get|multiGet\\(|executePipelined\\(|scan\\(|delete\\(" "$FILE" -C2
echo "[2] check RuntimeException handling"
rg -n "catch \\(RuntimeException" "$FILE" -C2 || true
echo "[expected]"
echo "- RuntimeException downgrade handling should appear around each Redis call method."
```
Also applies to: 80-111, 114-165
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/shared/cache/RedisCacheRepository.java` around lines 57 - 77, The Redis calls in RedisCacheRepository (notably put(String,T) and put(String,T,Duration) and similarly get/multiGet/multiPut/evict methods) must not let RuntimeException from RedisTemplate bubble up; wrap the redisTemplate.opsForValue().set / .get and other RedisTemplate calls in catch (RuntimeException e) blocks, on put/multiPut/evict log the error and no-op (return void), on get return null, on multiGet return a List of nulls preserving input key order/length, and ensure existing JsonProcessingException handling remains; add unit tests that mock RedisTemplate to throw RuntimeException and assert reads fall back (nulls) and writes are no-ops.
🧹 Nitpick comments (1)
apps/commerce-api/build.gradle.kts (1)
26-28: Centralize the Awaitility version declaration in gradle.properties and check for transitive vulnerabilities.
Currently awaitility:4.2.2 is hardcoded, while springdoc, archunit, etc. in the same file are managed centrally through gradle.properties. The inconsistent version management raises the risk of missing this dependency in future upgrades. Following the existing gradle.properties pattern, add awaitilityVersion=4.2.2 and reference only the property from this file.
Additionally, Awaitility 4.2.2's transitive dependency assertj-core:3.21.0 is affected by CVE-2026-24400 (XXE). Since gradle.properties does not manage the AssertJ version explicitly, it is recommended to either upgrade AssertJ explicitly to 3.27.7 or later, or avoid isXmlEqualTo in XML-parsing logic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/build.gradle.kts` around lines 26 - 28, Replace the hardcoded Awaitility version with a centralized property and mitigate the transitive AssertJ CVE: add awaitilityVersion=4.2.2 to gradle.properties and change the dependency in apps/commerce-api/build.gradle.kts to reference that property (replace the literal "org.awaitility:awaitility:4.2.2" used in the testImplementation line with the property), then add assertjVersion=3.27.7 to gradle.properties and either add an explicit testImplementation for "org.assertj:assertj-core:$assertjVersion" or add a dependency constraint/forced version to override the transitive assertj-core to >=3.27.7; alternatively, if upgrading is not possible, avoid using AssertJ XML APIs like isXmlEqualTo in tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/shared/cache/RedisCacheRepository.java`:
- Around line 108-110: multiGet currently maps jsonList via stream and leaves
corrupted JSON in Redis causing repeated deserialization failures; change
multiGet to an index-based loop over keys/jsonList, call
this.<T>deserializeOrNull(json, javaType) for each item, and if deserialization
returns null (or throws), invoke redisTemplate.delete(keys.get(i)) immediately
and skip/record null for that position, collecting the successful
deserializations into the result list; also apply the same change to the other
multi-get block referenced (lines ~167-176) and add a unit/integration test that
injects a corrupted JSON key, calls multiGet twice, and asserts the key is
deleted after the first call so the second call produces no deserialization
errors.
---
Duplicate comments:
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductReader.java`:
- Around line 207-211: In ProductReader, stop removing the ReentrantLock from
locks immediately after lock.unlock() in releaseLock (remove the
locks.remove(key, lock) call as a minimum); instead implement safe cleanup
(preferably a reference-counted wrapper or scheduled/TTL eviction for keyed
locks) so that once a thread unlocks the same lock instance cannot be spuriously
removed while another thread may already hold it; update releaseLock to
decrement a reference count or skip immediate removal and only remove when
refcount==0 or via expiration. Add a concurrency test in ProductReaderTest that
races three threads against the same key (use CountDownLatch and
ExecutorService) and assert productService is invoked exactly once and that the
locks map does not create a new Lock instance for the same key during the
contention window to verify stampede prevention.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/shared/cache/RedisCacheRepository.java`:
- Around line 57-77: The Redis calls in RedisCacheRepository (notably
put(String,T) and put(String,T,Duration) and similarly
get/multiGet/multiPut/evict methods) must not let RuntimeException from
RedisTemplate bubble up; wrap the redisTemplate.opsForValue().set / .get and
other RedisTemplate calls in catch (RuntimeException e) blocks, on
put/multiPut/evict log the error and no-op (return void), on get return null, on
multiGet return a List of nulls preserving input key order/length, and ensure
existing JsonProcessingException handling remains; add unit tests that mock
RedisTemplate to throw RuntimeException and assert reads fall back (nulls) and
writes are no-ops.
In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/shared/cache/RedisCacheRepositoryIntegrationTest.java`:
- Around line 34-37: The test teardown in RedisCacheRepositoryIntegrationTest
currently calls redisCleanUp.truncateAll(), which performs a global flush and
can break test isolation in parallel CI; change the teardown to call a scoped
cleanup method (e.g., redisCleanUp.flushDb() or
redisCleanUp.truncateWithPrefix(testPrefix)) and update RedisCleanUp to
implement DB-scoped flush (FLUSHDB) or key-scoped removal using SCAN + DEL for
the given test key prefix; also add/adjust a test that uses two different
prefixes to assert isolation (one test's teardown does not remove the other's
keys).
---
Nitpick comments:
In `@apps/commerce-api/build.gradle.kts`:
- Around line 26-28: Replace the hardcoded Awaitility version with a centralized
property and mitigate the transitive AssertJ CVE: add awaitilityVersion=4.2.2 to
gradle.properties and change the dependency in
apps/commerce-api/build.gradle.kts to reference that property (replace the
literal "org.awaitility:awaitility:4.2.2" used in the testImplementation line
with the property), then add assertjVersion=3.27.7 to gradle.properties and
either add an explicit testImplementation for
"org.assertj:assertj-core:$assertjVersion" or add a dependency constraint/forced
version to override the transitive assertj-core to >=3.27.7; alternatively, if
upgrading is not possible, avoid using AssertJ XML APIs like isXmlEqualTo in
tests.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 94230bc4-4591-4f80-9ff5-5319a087d997
📒 Files selected for processing (6)
- apps/commerce-api/build.gradle.kts
- apps/commerce-api/src/main/java/com/loopers/domain/product/ProductCacheConstants.java
- apps/commerce-api/src/main/java/com/loopers/domain/product/ProductReader.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/shared/cache/RedisCacheRepository.java
- apps/commerce-api/src/test/java/com/loopers/domain/product/ProductReaderTest.java
- apps/commerce-api/src/test/java/com/loopers/infrastructure/shared/cache/RedisCacheRepositoryIntegrationTest.java
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/commerce-api/src/test/java/com/loopers/domain/product/ProductReaderTest.java
```java
return jsonList.stream()
    .map(json -> this.<T>deserializeOrNull(json, javaType))
    .toList();
```
multiGet should clean up keys that fail deserialization immediately, so cache pollution does not repeat.
Operationally, if corrupted JSON stays visible to multiGet, the same key keeps triggering deserialization warnings and DB fallbacks until the TTL expires, accumulating log noise and unnecessary load.
The fix is to turn multiGet into an index-based loop and, on deserialization failure, delete the offending key immediately with redisTemplate.delete(keys.get(i)).
An additional test should inject a corrupted JSON key and call multiGet twice, verifying the key is removed after the first call so corrupted data is not parsed again on the second.
Also applies to: 167-176
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/shared/cache/RedisCacheRepository.java`
around lines 108 - 110, multiGet currently maps jsonList via stream and leaves
corrupted JSON in Redis causing repeated deserialization failures; change
multiGet to an index-based loop over keys/jsonList, call
this.<T>deserializeOrNull(json, javaType) for each item, and if deserialization
returns null (or throws), invoke redisTemplate.delete(keys.get(i)) immediately
and skip/record null for that position, collecting the successful
deserializations into the result list; also apply the same change to the other
multi-get block referenced (lines ~167-176) and add a unit/integration test that
injects a corrupted JSON key, calls multiGet twice, and asserts the key is
deleted after the first call so the second call produces no deserialization
errors.
📌 Summary
🧭 Context & Decision
1. Index design — six indexes per sort condition
Problem definition
Options and decision
The six indexes added:
Why declare DESC indexes explicitly?
Running ORDER BY col DESC against an ASC index on MySQL 8.0+ causes a Backward Index Scan. On a single thread the performance difference is negligible, but under high concurrency the asymmetry of B-Tree page latching (backward scans contend harder on latches) can reduce throughput by up to 44%. Since like-count and newest sorts are almost always DESC, DESC indexes were declared explicitly to induce forward scans.
Why add the three brand composite indexes?
Performance improvement
Single query:
Load test (200 VU):
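As a sketch of what the six index declarations could look like at the entity level — index names and exact column sets here are illustrative assumptions, not the PR's actual migration — JPA 2.1+ allows the sort direction inside `columnList`:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Index;
import jakarta.persistence.Table;

// Hypothetical mapping sketch. The PR adds like_count DESC, created_at DESC,
// price ASC and brand_id composites; the names below are made up for illustration.
@Entity
@Table(name = "product", indexes = {
    // single-column sort indexes — DESC declared explicitly to force forward scans on MySQL 8.0+
    @Index(name = "idx_product_like_count_desc", columnList = "like_count DESC"),
    @Index(name = "idx_product_created_at_desc", columnList = "created_at DESC"),
    @Index(name = "idx_product_price_asc", columnList = "price ASC"),
    // brand-scoped composites for brand-filtered listing
    @Index(name = "idx_product_brand_like_count", columnList = "brand_id, like_count DESC"),
    @Index(name = "idx_product_brand_created_at", columnList = "brand_id, created_at DESC"),
    @Index(name = "idx_product_brand_price", columnList = "brand_id, price ASC")
})
class Product { /* fields omitted */ }
```

On large tables these are better created via an online DDL migration than via `ddl-auto`, since index builds on 500K+ rows take time and can lock.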
2. Redis cache strategy — ID-list caching + stampede prevention
Problem definition
Options and decision
2-1. Generic cache store abstraction — applying DIP
To avoid depending on any specific cache implementation, a CacheRepository interface is defined in the domain layer and the infrastructure layer provides the Redis implementation.
Since the domain (ProductReader/ProductWriter) depends only on the CacheRepository interface, Redis can be swapped for another implementation without changing domain code.
2-2. Full-response caching vs ID-list caching
We compared full-response caching, which stores the entire list API response as a single cache entry, with ID-list caching, which stores only the ID list in the list cache and reads product data from individual detail caches.
In a mixed load test (80% list + 20% detail), the ID-list approach came out 19% ahead thanks to detail-cache reuse:
In the real usage pattern (list → click through to detail), detail-cache reuse is the key win, and the stampede weakness can be solved with a lock, so ID-list caching was adopted.
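The CacheRepository port from §2-1 might look roughly like the sketch below. Signatures are simplified from the review excerpts (the PR's CacheType and TTL-supplier plumbing is omitted), and the in-memory adapter is a hypothetical stand-in for RedisCacheRepository, included only to make the sketch runnable:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Domain-layer port: callers depend only on this interface (DIP).
interface CacheRepository {
    <T> void put(String key, T value, Duration ttl);
    <T> T get(String key, Class<T> type);
    <T> List<T> multiGet(List<String> keys, Class<T> type); // nulls preserve key order
    void evict(String keyPrefix);                           // prefix stands in for pattern evict
}

// Infrastructure-layer adapter (here: in-memory; in the PR: Redis). TTL ignored for brevity.
final class InMemoryCacheRepository implements CacheRepository {
    private final Map<String, Object> store = new ConcurrentHashMap<>();

    @Override public <T> void put(String key, T value, Duration ttl) { store.put(key, value); }
    @Override public <T> T get(String key, Class<T> type) { return type.cast(store.get(key)); }
    @Override public <T> List<T> multiGet(List<String> keys, Class<T> type) {
        List<T> out = new ArrayList<>(keys.size());
        for (String k : keys) out.add(type.cast(store.get(k))); // missing keys map to null
        return out;
    }
    @Override public void evict(String keyPrefix) {
        store.keySet().removeIf(k -> k.startsWith(keyPrefix));
    }
}
```

Swapping Redis for another store then means writing a new adapter; ProductReader/ProductWriter stay untouched.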
2-3. Cache key design and TTL policy
2-4. Stampede (thundering herd) prevention — ReentrantLock + TTL jitter
With the ID-list approach, a cache MISS issues 21+ Redis commands (1 GET for the ID list + 1 MGET for details + N SETs for details + 1 SET for the list). When a TTL expired and 200 VUs missed at once, Redis ops/sec spiked to 969 and performance dropped sharply.
Fix 1: double-check locking with ReentrantLock
A lock is taken per cache key so that only one thread performs the DB query and cache write. The remaining threads re-check the cache after acquiring the lock and return immediately if the value has already been stored.
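The per-key double-check locking just described can be sketched as follows. Class and method names are illustrative, not the PR's actual code; the timeout mirrors LOCK_TIMEOUT_SECONDS = 3, and lock-map entries are deliberately kept after unlock (the review found that eager removal races):

```java
import java.util.Map;
import java.util.List;
import java.util.ArrayList;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Minimal sketch of cache-stampede prevention via per-key double-check locking.
final class KeyedLoader {
    static final long LOCK_TIMEOUT_SECONDS = 3;

    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    <T> T loadThrough(String key, Supplier<T> dbLoader) throws InterruptedException {
        T cached = (T) cache.get(key);
        if (cached != null) return cached;                  // fast path: cache HIT

        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        if (!lock.tryLock(LOCK_TIMEOUT_SECONDS, TimeUnit.SECONDS)) {
            return dbLoader.get();                          // timed out: fall back to the DB
        }
        try {
            cached = (T) cache.get(key);                    // double-check under the lock
            if (cached != null) return cached;
            T loaded = dbLoader.get();                      // only one thread reaches the DB
            cache.put(key, loaded);
            return loaded;
        } finally {
            lock.unlock();                                  // keep the map entry (no eager removal)
        }
    }
}
```

Under contention, N-1 threads block briefly instead of all N hitting the DB, which is where the reported 85% drop in Redis ops/sec comes from.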
After applying the lock, Redis ops/sec fell by 85% and RPS improved by 42%:
Fix 2: TTL jitter (±10%)
If keys cached at the same moment also expire at the same moment, the stampede gets bigger. A random ±10% offset is applied to each TTL to spread out expiration times.
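A minimal sketch of the ±10% jitter (the helper name is an assumption, not the PR's actual code):

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

// Spreads expirations by scaling the base TTL by a uniform factor in [1-ratio, 1+ratio).
final class TtlJitter {
    static Duration withJitter(Duration base, double ratio) {
        double factor = 1.0 + ThreadLocalRandom.current().nextDouble(-ratio, ratio);
        return Duration.ofMillis((long) (base.toMillis() * factor));
    }
}
```

For the 5-minute detail TTL, this yields expirations anywhere in roughly [4m30s, 5m30s), so keys written by the same MISS do not all expire in the same second.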
2-5. Cache invalidation strategy — ProductWriter
On product CUD and like-count changes, the caches are handled per event type. Since the list cache stores only IDs, product-data changes (updates, likes) only need to refresh the detail cache.
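The event-based routing can be sketched over an in-memory stand-in for Redis. Names are illustrative; in the PR, ProductWriter performs the DB write first and then refreshes (write-through) or evicts:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of per-event cache handling: updates/likes refresh the detail entry only,
// while deletes also evict every cached list page (list membership changed).
final class ProductCacheSync {
    final Map<String, String> cache = new ConcurrentHashMap<>();

    void onLikeOrUpdate(long productId, String freshDetailJson) {
        // list cache holds only IDs, so it stays valid — refresh the detail entry
        cache.put("product:detail:v1:" + productId, freshDetailJson);
    }

    void onDelete(long productId) {
        cache.remove("product:detail:v1:" + productId);
        // deletion changes which IDs appear in lists, so evict all list pages
        cache.keySet().removeIf(k -> k.startsWith("product:list:v1:"));
    }
}
```

Against real Redis the list eviction would iterate keys with an incremental SCAN cursor rather than a blocking KEYS call, as the section below describes.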
When evicting the list cache by pattern (product:list:*) on deletion, SCAN is used instead of the KEYS command. KEYS blocks Redis and is dangerous in production, so SCAN finds and deletes the keys incrementally.
3. ProductReader/ProductWriter separation — isolating the cache concern
Problem definition
Options and decision
Read and write were split into dedicated domain services so that the use case does not need to know the cache exists.
Before: the UseCase manages the cache directly (hypothetical)
Problems:
After: ProductReader encapsulates reads
Trade-offs
Classified as a domain service (@DomainService), so there is room for debate over whether the cache should be treated as a domain concern
🏗️ Design Overview
Scope of changes
Newly added
- ProductReader
- ProductWriter
- ProductCacheConstants
- ProductDetailAssembler
- CacheRepository
- CacheKey / CacheType
- RedisCacheRepository
Removed / replaced
- Cache-related logic in ProductService → split out into ProductReader / ProductWriter
Key component responsibilities
- ProductReader
- ProductWriter
- ProductService
- ProductDetailAssembler
- CacheRepository
- RedisCacheRepository
🔁 Flow Diagram
Product list lookup (cache + stampede prevention)
```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant UseCase as ReadActiveProductsUseCase
    participant Reader as ProductReader
    participant Redis
    participant Lock as ReentrantLock
    participant Service as ProductService
    participant DB
    Client->>UseCase: GET /v1/products?sort=LIKE_COUNT_DESC
    UseCase->>Reader: readActiveProducts(brandId, sortType, pageSize)
    Reader->>Redis: GET product:list:v1:all:LIKE_COUNT_DESC:0:20
    alt Cache HIT (ID list)
        Redis-->>Reader: [id1, id2, ..., id20] + hasNext
        Reader->>Redis: MGET product:detail:v1:{id1..id20}
        Redis-->>Reader: [product1, product2, ...]
    else Cache MISS
        Redis-->>Reader: null
        Reader->>Lock: tryLock(listKey, 3s)
        Lock-->>Reader: acquired
        Reader->>Redis: GET (double-check)
        Redis-->>Reader: null
        Reader->>Service: getActiveProducts()
        Service->>DB: SELECT ... ORDER BY like_count DESC LIMIT 20
        DB-->>Service: products
        Reader->>Redis: MSET product:detail:v1:{id} × 20 (TTL 5min ± jitter)
        Reader->>Redis: SET product:list:v1:... (TTL 1min ± jitter)
    end
    Reader-->>UseCase: Page<Product>
    UseCase-->>Client: 200 OK (brand + liked-status assembled)
```

Like registration (DB sync + write-through cache)
```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant UseCase as LikeProductUseCase
    participant LikeService
    participant Writer as ProductWriter
    participant Service as ProductService
    participant DB
    participant Redis
    Client->>UseCase: POST /v1/products/{id}/likes
    UseCase->>Service: validateActiveProductExists(productId)
    UseCase->>LikeService: like(userId, productId)
    LikeService->>DB: EXISTS (duplicate check)
    LikeService->>DB: INSERT INTO likes
    alt newly created (true)
        UseCase->>Writer: increaseLikeCount(productId)
        Writer->>Service: increaseLikeCount(productId)
        Service->>DB: UPDATE SET like_count = like_count + 1
        Writer->>Service: getActiveProduct(productId)
        Service->>DB: SELECT (latest data)
        Writer->>Redis: SET product:detail:v1:{id} (Write-Through)
    end
    UseCase-->>Client: 200 OK
```

✅ Checklist
🔖 Index
❤️ Structure
⚡ Cache
🤖 Generated with Claude Code
Purpose and context: with 500K product rows under 200 VU load, the list/detail APIs showed poor RPS and high error rates; query performance was restored in three stages — index additions → like-count denormalization (atomic domain update) → Redis caching (mixed-load RPS 682/s and 0% error rate reported).
Key changes: six indexes added to the Product entity, including like_count DESC, created_at DESC, price ASC and brand_id composites; ProductReader (reads: ID list + detail cache, per-key ReentrantLock double-check) and ProductWriter (refreshes/invalidates the detail cache after DB writes on update/delete/like changes) introduced; CacheRepository abstraction and RedisCacheRepository implementation added.
Cache/policy summary: lists cache the ID list (+hasNext) for the first three pages only (MAX_CACHEABLE_PAGE = 2 → page indexes 0..2) with a default 1-minute TTL; details use per-key entries with a default 5-minute TTL; stampedes are prevented with ±10% TTL jitter and per-key ReentrantLock (LOCK_TIMEOUT_SECONDS=3s); on a list-cache MISS, details are bulk multiGet, missing ones fetched from the DB, then multiPut.
Risks and caveats: the lock is currently a JVM-local ReentrantLock, so a distributed lock should be evaluated for multi-instance deployments (needs confirmation); adding indexes on large tables carries migration-time and locking impact; since only the front pages of lists are cached, later pages continue to hit the DB.
Testing and verification: ProductReaderTest/ProductWriterTest/RedisCacheRepositoryIntegrationTest/CacheKeyTest verify cache HIT/MISS, lock contention, and invalidation; k6 scripts (k6/product-list-test.js, product-detail-test.js) run 8 scenarios with a 4-stage ramp profile to validate load behavior and TTL/evict operation.
Open questions: in a multi-server environment, is there a plan to replace the lock with a distributed lock (e.g. RedLock), and is there an operational guide for the timing of the index deployment?