
[volume-5] Week 5: Index design and read optimization, normalization strategy, cache introduction - Kim Jihye #214

Open
zeexzeex wants to merge 20 commits into Loopers-dev-lab:zeexzeex from zeexzeex:volume-5

Conversation


@zeexzeex zeexzeex commented Mar 13, 2026

📌 Summary

  • Background: Measurements against realistic traffic scenarios (product list/detail; like/cart/order reads), recorded in docs/performance/phase-benchmark-results.md and docs/performance-optimization-roadmap.md, showed that some read queries suffered from full scans, excessive joins, and missing caching, causing response-time delays and increased CPU usage.
  • Goals:
    • (1) Cut the mean/99th-percentile response times of the main read APIs via read-pattern-driven index design and query rewriting,
    • (2) rebalance read and write costs by revisiting the table/entity normalization-denormalization strategy, and
    • (3) offload read traffic from the DB with a Redis-based cache (ProductCacheService, etc.).
  • Results: Index additions/changes and query optimization cleaned up the execution plans of the core reads (product list/detail, like counts, order history, etc.), and the revised normalization strategy reduced duplicated data and unnecessary joins. A cache was also introduced for read-heavy domains, improving TPS and response times per workload; before/after benchmarks are recorded in docs/performance/phase-benchmark-results.md, docs/performance/benchmark-100k.md, docs/performance/benchmark-200k.md, docs/performance/benchmark-500k.md, and docs/performance/benchmark-1m.md.

🧭 Context & Decision

Problem definition

  • Current behavior/constraints:
    • Some read queries have no suitable index, causing full scans and filesorts on their WHERE/ORDER BY columns. (Evidence: benchmark/execution-plan analysis in docs/performance/phase-benchmark-results.md.)
    • Under-normalized areas inflate data duplication and join costs, while over-normalized areas need multi-step joins even for simple reads, worsening response times.
    • Read-heavy endpoints without caching all hit the DB directly, so analysis showed the DB becomes the bottleneck as concurrency rises.
  • Success criteria:
    • Lower P95 response times and higher QPS for the main read APIs (product list/detail including like counts, cart/order summaries, etc.), measured with the benchmark script docs/load-test/run-all-k6-phase.sh.
    • Execution plans free of unnecessary full scans, with a higher share of index range scans / index-only scans.
    • After the cache is introduced, the consistency/invalidation strategy matches the design docs (docs/performance-optimization-roadmap.md, docs/performance/performance-risk-inspection.md), with a safe fallback to DB reads on cache failure.

Options and decision – index, normalization, and cache strategy

Index design

  • Changed code (PLP likes_desc query rewrite)

    -- AS-IS: JOIN + GROUP BY + ORDER BY COUNT
    SELECT p.*
    FROM product p
    LEFT JOIN likes l ON p.id = l.product_id
    WHERE p.deleted_at IS NULL AND p.brand_id = ?
    GROUP BY p.id
    ORDER BY COUNT(l.id) DESC
    LIMIT ? OFFSET ?;
    
    -- TO-BE: sort by the read model column (product_stats.like_count)
    SELECT p.*
    FROM product p
    JOIN product_stats ps ON p.id = ps.product_id
    WHERE p.deleted_at IS NULL AND p.brand_id = ?
    ORDER BY ps.like_count DESC, p.id DESC
    LIMIT ? OFFSET ?;
  • Alternatives

    • A: Keep only the PK/single-column indexes; minimal query tuning
    • B: Add composite (deleted_at, brand_id, created_at/price) indexes for the PLP latest/price sorts
    • C: Optimize range scans for the order list with a (user_id, status, ordered_at DESC) index
    • D: Structural changes that alter the API spec, such as keyset pagination
  • Decision & rationale: Only B + C were applied for this assignment. A alone cannot resolve the PLP deep-paging and order-list OFFSET bottlenecks surfaced in phase-benchmark-results.md, and D changes the API spec itself, which exceeds the week-5 scope. The B/C combination raises the share of range/index scans in the execution plans while preserving the API spec, so it offers the best cost-to-benefit ratio.
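For reference, alternative D (keyset pagination) would replace OFFSET with a predicate on the last row of the previous page. The sketch below shows the "strictly after the cursor" filter for the (like_count DESC, id DESC) ordering in plain Java; the names (KeysetPage, nextPage) are hypothetical and not part of this PR.

```java
import java.util.List;

public class KeysetPage {
    public record Product(long id, long likeCount) {}

    // Keyset (cursor) pagination over a list already sorted by
    // (likeCount DESC, id DESC): keep only rows strictly after the cursor.
    // The equivalent SQL predicate would be
    //   WHERE (like_count, id) < (:lastLike, :lastId)
    //   ORDER BY like_count DESC, id DESC LIMIT :size
    public static List<Product> nextPage(List<Product> sorted, long lastLike, long lastId, int size) {
        return sorted.stream()
                .filter(p -> p.likeCount() < lastLike
                        || (p.likeCount() == lastLike && p.id() < lastId))
                .limit(size)
                .toList();
    }
}
```

Unlike OFFSET, the DB never reads and discards the skipped rows, which is what makes deep paging cheap; the cost is that clients must carry the cursor instead of a page number.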

Normalization / denormalization

  • Changed code (atomic update of product_stats.like_count)

    // LikeService.addLike (gist)
    if (likeRepository.existsByUserIdAndProductId(userId, productId)) {
        return;
    }
    likeRepository.save(new LikeModel(userId, productId));
    productStatsRepository.incrementLikeCount(productId); // like_count = like_count + 1
  • Alternatives

    • A: Normalize everything, including likes and counters, and keep JOIN + GROUP BY reads
    • B: Denormalize into a single product.like_count column, concentrating reads and writes on one table
    • C: Gather high-QPS counters such as like_count into a vertically split product_stats table, joined from PLP likes_desc / PDP
  • Decision & rationale: C (vertical split + denormalization) was chosen to reduce both the like-write row-lock contention (§2.1) and the PLP likes_desc JOIN+GROUP BY bottleneck (§3.2.1) at the same time. A leaves the read-query structure untouched, so its performance gain is small; B piles both read and write locks onto the single product table, a serious "hot-product write bottleneck" risk. C separates the write bottleneck into product_stats while letting PLP/PDP reuse the same like_count, so reads and writes are optimized together, which made it the best balance.
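The deadlock retry mentioned in the decision (and in the commit notes: exponential backoff on deadlock/lock timeout) can be sketched generically. The helper below is an illustration under assumed names (RetryOnConflict, withBackoff), not the PR's LikeService; it omits the deadlock/lock-timeout classification and the per-attempt REQUIRES_NEW transaction the real code uses.

```java
import java.util.concurrent.Callable;

public class RetryOnConflict {
    // Runs the task, retrying up to maxAttempts with exponential backoff
    // (baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ...). In the real service
    // each attempt would run in its own REQUIRES_NEW transaction, and only
    // deadlock/lock-timeout errors would be retried.
    public static <T> T withBackoff(Callable<T> task, int maxAttempts, long baseDelayMs) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) throw e; // give up: propagate the last failure
                Thread.sleep(baseDelayMs << (attempt - 1)); // note: blocks the request thread
            }
        }
    }
}
```

Because the sleep blocks the caller's thread (a point the review also raises for CouponService), the base delay should stay small relative to the request budget.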

Cache introduction

  • Changed code (PDP cache applied)

    // ProductFacade.getProductDetail (gist)
    public ProductDetailInfo getProductDetail(Long productId) {
        return productCacheService.getProductDetail(productId, () -> {
            ProductModel product = productService.getById(productId);
            BrandModel brand = brandService.getById(product.getBrandId());
            long likeCount = likeService.getLikeCount(productId);
            return ProductDetailInfo.from(product, brand, likeCount);
        });
    }
  • Alternatives

    • A: Handle every read with indexes only, no cache
    • B: Cache only PLP page 1 (per condition combination) in Redis
    • C: A PDP read model + Redis cache to spread hot-product detail load
    • D: Cache broadly, including order lists and per-user data
  • Decision & rationale: Following the principle that "the cache is a load shield, not the source of truth (SOT)", B (PLP page-1 cache) and C (PDP cache) were chosen. With A alone, all hot PLP/PDP traffic goes to the DB, keeping the SOT-overload risk flagged in performance-optimization-roadmap.md. Conversely, caching as broadly as D contradicts the evidence that the cache hit rate (Pareto 10:90 traffic, per-user data characteristics) is low relative to the consistency/invalidation complexity. B/C is the compromise of "thinly caching only the top-traffic, read-centric endpoints", cutting DB load substantially at a small complexity cost, so it fit the week-5 scope best.
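The chosen approach is essentially the cache-aside pattern with a DB fallback. A minimal sketch under assumed names (CacheAside, CacheStore, getOrLoad; the PR's actual class is ProductCacheService, whose API may differ), including the TTL+jitter the change list mentions:

```java
import java.time.Duration;
import java.util.Optional;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;

// Cache-aside with DB fallback: a cache failure must never fail the read.
public class CacheAside<T> {
    public interface CacheStore<V> {
        Optional<V> get(String key) throws Exception;
        void put(String key, V value, Duration ttl) throws Exception;
    }

    private final CacheStore<T> store;
    private final Duration baseTtl;

    public CacheAside(CacheStore<T> store, Duration baseTtl) {
        this.store = store;
        this.baseTtl = baseTtl;
    }

    public T getOrLoad(String key, Supplier<T> loader) {
        try {
            Optional<T> hit = store.get(key);
            if (hit.isPresent()) return hit.get(); // cache hit
        } catch (Exception ignored) {
            // Cache (Redis) failure: fall through and serve from the DB.
        }
        T value = loader.get(); // cache miss or failure: load from the SOT
        try {
            store.put(key, value, ttlWithJitter()); // best-effort store
        } catch (Exception ignored) {
            // A failed put only costs a future miss, never an error to the caller.
        }
        return value;
    }

    // Base TTL plus up to 10% random jitter so hot keys don't expire together.
    private Duration ttlWithJitter() {
        long jitterMs = (long) (baseTtl.toMillis() * 0.1 * ThreadLocalRandom.current().nextDouble());
        return baseTtl.plusMillis(jitterMs);
    }
}
```

This mirrors the supplier-based getProductDetail call shown above: the facade passes the DB read as the loader, and the cache layer decides hit, miss, or fallback.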


🏗️ Design Overview

  • Scope of change
    • Affected modules/domains: product/brand/like/order read queries and the related services/repositories, performance-measurement and load-test scripts, performance docs (docs/performance/*, docs/performance-optimization-roadmap.md).
    • New/major changes:
      • Index additions/modifications reflecting real usage patterns, plus some query rewriting.
      • Revised normalization/denormalization strategy to simplify read paths, with domain-model and service-level adjustments.
      • Redis-based caching wired into read-only services such as ProductCacheService, with cache hit/miss and invalidation strategies implemented.
      • Before/after benchmark figures and execution-plan comparisons recorded in docs/performance/phase-benchmark-results.md.
    • Tests/verification:
      • k6 load tests (docs/load-test/run-all-k6-phase.sh) comparing response times and error rates before and after the index/cache changes.
      • Unit/integration tests covering cache hit/miss, invalidation scenarios, and the DB fallback path.

🙋 Review points

  • Q1. Index design criteria: In production, what level of full-scan ratio or query latency (P95/P99) do you use as the threshold for adding or changing an index? Any operational guidelines would be appreciated.
  • Q2. Normalization vs denormalization trade-off: This PR allows denormalized/snapshot fields in some read-heavy domains. In practice, when read-performance demands are strong, how much duplication do you allow while keeping normalization principles (e.g., the acceptable duplication/drift range per domain)?
  • Q3. Cache consistency strategy: The PR currently mixes TTL with explicit invalidation. Depending on traffic and data characteristics, how do you balance cache consistency against performance (e.g., criteria for choosing write-through, write-behind, or cache-aside, and operational experience)?

Purpose of change

To mitigate the P95/P99 latency and DB/CPU load caused by the full scans, excessive joins, and missing caching observed in product list/detail, like, and order reads, this PR applies index changes and query rewriting, splits out a statistics table, introduces a Redis cache, and improves transactions and concurrency.

Key changes

A product_stats entity/repository vertically splits the like count and handles it with atomic increment queries; the PLP likes_desc sort is rewritten to use product_stats.like_count, and related composite indexes (product, orders, etc.) were added. A Redis-based ProductCacheService was added for PDP (per item) and PLP (page 0) with TTL+jitter, JSON serialization, evictDetail/evictList, and a fallback to the DB on Redis failure, and transaction consistency in like/coupon handling was hardened with REQUIRES_NEW separation, retries, and afterCommit invalidation. k6 load-test scripts and a large-volume seeder (ProductDataSeeder) are also included.

Risks / caveats

The migration strategy for the existing likes aggregates into product_stats (bulk vs online incremental) and the initial data-consistency procedure are not clearly covered in code/docs, which is an operational risk. Watch for the temporary consistency/performance impact of the PLP cache-invalidation scope (global evict of all keys vs targeting specific sort/brand keys), contention when creating/deleting product_stats rows (especially createIfAbsent/insert-ignore conflicts), and behavior on Redis failure (invalidation timing, afterCommit guarantees).

Test / verification method

Seed large data with ProductDataSeeder, run k6 (PLP/PDP/likes-write/orders) for before/after benchmarks (p95/p99/avg/max, error rate) and EXPLAIN comparisons (full scans removed, index scans increased), and verify cache hit rate, invalidation (afterCommit behavior), the Redis-failure fallback path, and product_stats vs likes table consistency (by sampling). Follow-up questions: Is there a migration plan (procedure, timing, rollback strategy) for existing likes → product_stats? In operation, will PLP cache invalidation evict all keys (evictList) or be limited per sort/brand, and how is that decided?

zeexzeex added 13 commits March 9, 2026 20:44
- Remove the retry loop and self-injection from OrderFacade; simplify placeOrder to a single @Transactional
- CouponService.validateAndUse: single retry with backoff; split out validateAndUseInNewTransaction (REQUIRES_NEW)
- Change OrderFacadeTest to orchestration verification; add CouponServiceRetryTest
- Add ProductCacheService, ProductListCacheDto
- ProductFacade: PDP/PLP cache lookup; on miss, read the DB then store in the cache
- LikeFacade: evict the PDP cache on addLike/removeLike
- OrderModel: (user_id, ordered_at) composite index
- ProductModel: (deleted_at, brand_id, created_at), (deleted_at, brand_id, price)
- ProductStatsModel: (like_count, product_id)
- Split product_stats creation into ensureProductStatsExists (REQUIRES_NEW)
- doAddLike performs only the likes INSERT + incrementLikeCount
- Exponential-backoff retry on deadlock/lock timeout; verification allows CONFLICT only
- Add run-all-k6-phase.sh (phase1-3 PLP/PDP/likes_write)
- k6-orders-list.js: ALLOW_NON_200, tidy comments/usage
- phase-benchmark-results.md (Phase 1-4 results per data scale)
- performance-optimization-roadmap.md, performance-risk-inspection.md
- benchmark-100k|200k|500k|1m.md, like-concurrency-deadlock-alternatives.md
@zeexzeex zeexzeex self-assigned this Mar 13, 2026

coderabbitai bot commented Mar 13, 2026

📝 Walkthrough

Introduces product caching (PLP/PDP) and product_stats-based like aggregation, adds REQUIRES_NEW retry logic to coupon usage, and removes the order-side coupon retry logic. Also includes DB index additions, a large-volume data seeder, and k6 load-test scripts.

Changes

Cohort / File(s) | Summary
Version control
.gitignore
Removed the docs/ entry so the docs directory is now under version control.
Product cache implementation
apps/commerce-api/src/main/java/com/loopers/application/product/ProductCacheService.java, apps/commerce-api/src/main/java/com/loopers/application/product/ProductListCacheDto.java
Adds Redis-based PLP/PDP cache classes and a DTO (key design, TTL+jitter, serialization/deserialization, evict methods).
Application cache wiring
apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java, apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
Injects ProductCacheService, adds cache lookup/store logic, and schedules post-commit cache invalidation (runAfterCommit) on product changes/likes.
product_stats domain & repositories
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductStatsModel.java, apps/commerce-api/src/main/java/com/loopers/domain/product/ProductStatsRepository.java, apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductStatsJpaRepository.java, apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductStatsRepositoryImpl.java
Adds the product_stats entity, a JPA repository (including atomic increment queries), and the domain repository implementation, providing atomic like_count updates and reads.
Like service refactor & concurrency
apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java, apps/commerce-api/src/test/.../LikeServiceTest.java, apps/commerce-api/src/test/.../LikeConcurrencyIntegrationTest.java
Wires in product_stats (guaranteed row creation), splits REQUIRES_NEW transactions (doAddLike/ensureProductStatsExists), adds deadlock/lock-timeout detection with exponential-backoff retry, and strengthens tests.
Order flow simplification
apps/commerce-api/src/main/java/com/loopers/application/order/OrderFacade.java, apps/commerce-api/src/test/.../OrderFacadeTest.java
Removes the OptimisticLock-based retry logic and self-referencing proxy from coupon orders, adds a coupon-less overload, and simplifies tests.
Coupon service retry (transaction propagation)
apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponService.java, apps/commerce-api/src/test/.../CouponServiceRetryTest.java, apps/commerce-api/src/test/.../CouponPropagationIntegrationTest.java
Implements retry (up to 2 attempts) with REQUIRES_NEW in validateAndUse, with unit/integration tests verifying optimistic-lock recovery.
Product service integration changes
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java
Calls product_stats.createIfAbsent on product registration and product_stats cleanup on deletion.
Query/sort adjustments
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java
Replaces the Like join with a ProductStats join, switches likes_desc sorting to product_stats.like_count, and simplifies the aggregation logic.
DB index additions
apps/commerce-api/src/main/java/com/loopers/domain/order/OrderModel.java, apps/commerce-api/src/main/java/com/loopers/domain/product/ProductModel.java
Adds the orders(user_id, ordered_at) index and PLP-related composite indexes on products to improve read performance.
Large-volume seeder
apps/commerce-api/src/main/java/com/loopers/support/perf/ProductDataSeeder.java
Adds a seeder for bulk-inserting brands/products (batching/transaction control, commit units, price/creation-date distributions, etc.).
Load-test tooling
docs/load-test/*
Adds k6 scripts (PDP/PLP/likes-write/orders) plus local config and orchestration scripts for load/performance testing.
Cache tests
apps/commerce-api/src/test/java/com/loopers/application/product/ProductCacheServiceTest.java
Mocks the Redis failure path and verifies that cache failures do not propagate exceptions and return an empty Optional.
Test adjustments (many)
apps/commerce-api/src/test/...
Updates or adds Like/Order/Product/Coupon tests to match the product_stats, cache, and transaction changes.

Sequence Diagram(s)

sequenceDiagram
    actor Client
    participant LikeFacade as "LikeFacade\n(runs after commit)"
    participant LikeService as "LikeService\n(doAddLike/ensureProductStatsExists)"
    participant ProductStatsRepo as "ProductStatsRepository"
    participant ProductCache as "ProductCacheService"

    Client->>LikeFacade: addLike(userId, productId)
    LikeFacade->>LikeService: ensureProductStatsExists(productId)
    LikeService->>ProductStatsRepo: createIfAbsent(productId) [REQUIRES_NEW]
    ProductStatsRepo-->>LikeService: OK
    LikeService->>LikeService: doAddLike(userId, productId) [REQUIRES_NEW, retry on deadlock]
    LikeService->>ProductStatsRepo: incrementLikeCount(productId)
    ProductStatsRepo-->>LikeService: updated
    Note right of LikeFacade: scheduled to run after the transaction commits
    LikeFacade->>ProductCache: evictDetail(productId) / evictList()
    ProductCache-->>LikeFacade: evicted
    LikeFacade-->>Client: 201 Created
sequenceDiagram
    actor Client
    participant ProductFacade as "ProductFacade"
    participant ProductCache as "ProductCacheService"
    participant ProductService as "ProductService"
    participant ProductStatsRepo as "ProductStatsRepository"

    Client->>ProductFacade: getProductDetail(productId)
    ProductFacade->>ProductCache: getDetail(productId)
    alt cache hit
        ProductCache-->>ProductFacade: cached detail
    else cache miss
        ProductFacade->>ProductService: getProductById(productId)
        ProductService->>ProductStatsRepo: getLikeCountFromStats(productId)
        ProductStatsRepo-->>ProductService: likeCount
        ProductService-->>ProductFacade: ProductDetailInfo
        ProductFacade->>ProductCache: putDetail(productId, info)
        ProductCache-->>ProductFacade: stored
    end
    ProductFacade-->>Client: ProductDetailInfo

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~55 minutes

Possibly related PRs

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 18.75%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name | Status | Explanation
Title check | ✅ Passed | The PR title clearly reflects the major changes: index design, read optimization, normalization strategy, and cache introduction are all included, and the scope of change is summarized appropriately.
Description check | ✅ Passed | The PR description fills in most sections of the required template: background, goals, and results are clearly stated, and problem definition, options and decision, design overview, and review points are detailed, but the Flow Diagram (Mermaid) section is missing.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 19

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/commerce-api/src/test/java/com/loopers/domain/like/LikeConcurrencyIntegrationTest.java (1)

117-126: 🛠️ Refactor suggestion | 🟠 Major

Inconsistent exception handling in the removeLike test

The first test (concurrency_multipleUsersLikeSameProduct) introduced CoreException-based exception handling and tracking of unexpected exceptions, but the concurrency_multipleUsersRemoveLike test still ignores generic Exception. As a result:

  1. When unexpected exceptions such as deadlocks/timeouts occur, the test passes as a false positive.
  2. The mismatch in verification level between the two tests will cause confusion during future maintenance.

Apply the same exception-handling pattern to both tests to ensure test reliability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/like/LikeConcurrencyIntegrationTest.java`
around lines 117 - 126, The concurrency_multipleUsersRemoveLike test is
swallowing all Exceptions in the thread, unlike
concurrency_multipleUsersLikeSameProduct which distinguishes CoreException and
tracks unexpected errors; update the Runnable around likeService.removeLike(uid,
productId) to catch CoreException (or the domain-specific checked exception used
by likeService) and increment/removeCount on success, but capture any other
unexpected Throwable into a shared atomic/collection (e.g., AtomicInteger
unexpectedErrorCount or AtomicReference<Throwable> unexpectedError) so the test
can assert none occurred, and ensure done.countDown() is still called in
finally; reference likeService.removeLike, concurrency_multipleUsersRemoveLike,
removeCount, start and done when locating the change.
🧹 Nitpick comments (14)
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductModel.java (1)

15-24: Clarify whether likes_desc sorting is supported alongside the index documentation.

The PR goals mention "switch PLP likes_desc to product_stats.like_count-based sorting". ProductModel currently defines only the latest and price sort indexes, so if likes_desc sorting depends on indexes on the product_stats table, that table's index design should be reviewed together.

Adding a JavaDoc note such as "likes_desc references the product_stats table" would make the index-management scope clear in operation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductModel.java`
around lines 15 - 24, Update the ProductModel JavaDoc to explicitly state that
the likes_desc PLP sorting uses product_stats.like_count (i.e., sorting is
performed from the product_stats table) and therefore index management must
include product_stats; reference the ProductModel class and its existing PLP
index comments and add a note like "likes_desc: relies on
product_stats.like_count — ensure appropriate index exists on product_stats" and
also add a reviewer TODO to verify/update the product_stats table indexes (e.g.,
on product_id + like_count) so maintainers know to manage indexes across both
tables.
apps/commerce-api/src/test/java/com/loopers/domain/coupon/CouponServiceRetryTest.java (2)

57-59: Remove the redundant assertion.

After assertThrows(CoreException.class, ...), the isInstanceOf(CoreException.class) check on line 58 is redundant.

Suggested fix
         assertThat(calls.get()).isEqualTo(2);
-        assertThat(ex).isInstanceOf(CoreException.class);
         assertThat(ex.getMessage()).isEqualTo("잠시 후 다시 시도해 주세요.");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/coupon/CouponServiceRetryTest.java`
around lines 57 - 59, Remove the redundant type assertion: since you already use
assertThrows(CoreException.class, ...) to obtain ex, delete the line asserting
assertThat(ex).isInstanceOf(CoreException.class); keep the calls.get() count
assertion and the message assertion (assertThat(ex.getMessage()).isEqualTo(...))
so only one type check (via assertThrows) remains; locate the assertions around
the variable ex and the calls.get() check in CouponServiceRetryTest.

48-60: Missing test for the InterruptedException path.

The path in validateAndUse (lines 81-84) where Thread.sleep raises InterruptedException and INTERNAL_ERROR is thrown is untested. Adding it would improve edge-case coverage.

Example test case
`@Test`
`@DisplayName`("validateAndUse는 백오프 대기 중 인터럽트 시 INTERNAL_ERROR를 던진다.")
void validateAndUse_whenInterruptedDuringBackoff_thenThrowsInternalError() {
    AtomicInteger calls = new AtomicInteger();
    CouponService sut = new TestableCouponService(
            couponTemplateRepository, issuedCouponRepository, calls, null) {
        `@Override`
        CouponDiscount validateAndUseInNewTransaction(Long issuedCouponId,
                                                      Long userId,
                                                      BigDecimal orderAmountBeforeDiscount) {
            calls.getAndIncrement();
            Thread.currentThread().interrupt(); // set the interrupt flag
            throw new OptimisticLockException("version conflict");
        }
    };

    CoreException ex = assertThrows(CoreException.class,
            () -> sut.validateAndUse(1L, 1L, BigDecimal.valueOf(10000)));

    assertThat(ex.getMessage()).isEqualTo("재시도 대기 중 중단되었습니다.");
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/coupon/CouponServiceRetryTest.java`
around lines 48 - 60, Add a unit test for validateAndUse to cover the
InterruptedException path: create a TestableCouponService and override
validateAndUseInNewTransaction(...) to increment the AtomicInteger, call
Thread.currentThread().interrupt() and then throw an OptimisticLockException so
the outer validateAndUse's retry/backoff hits Thread.sleep and receives an
InterruptedException; assert that validateAndUse throws a CoreException with
message "재시도 대기 중 중단되었습니다." and that the expected call count was made.
apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponService.java (1)

79-84: Document that Thread.sleep blocks the request thread.

The 50ms backoff is short enough to be acceptable, but it blocks the request-processing thread. Under high load this risks thread-pool exhaustion, so the trade-off should be noted in the Javadoc.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponService.java`
around lines 79 - 84, The try/catch that calls Thread.sleep(backoffMs) in
CouponService blocks the request thread (even if backoffMs is small), so add a
clear Javadoc comment on the enclosing method (or on the CouponService
class/method that contains the Thread.sleep and InterruptedException handling)
that documents the blocking behavior, the chosen default backoffMs (50ms) and
the tradeoff/risk of thread-pool exhaustion under high load; mention that
callers should avoid long-running synchronous retries or switch to
non-blocking/async retry if needed.
apps/commerce-api/src/test/java/com/loopers/application/order/OrderFacadeTest.java (1)

64-66: The nested class name no longer matches the current behavior.

PlaceOrderWithCouponRetry keeps its "Retry" suffix even though the retry logic was removed. Renaming it to PlaceOrderWithCoupon to reflect the current single-pass flow would be clearer.

Suggested fix
-    `@DisplayName`("placeOrder (쿠폰 적용) 시")
+    `@DisplayName`("placeOrder (쿠폰 적용) 시")
     `@Nested`
-    class PlaceOrderWithCouponRetry {
+    class PlaceOrderWithCoupon {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/application/order/OrderFacadeTest.java`
around lines 64 - 66, Rename the nested test class PlaceOrderWithCouponRetry to
PlaceOrderWithCoupon to reflect current behavior; update the class declaration
(the nested class annotated with `@Nested` and `@DisplayName`("placeOrder (쿠폰 적용)
시")) and any internal or external references (test class name usages, imports,
or inner-class instantiations) to the new identifier so the test compiles and
the DisplayName still matches expected description.
apps/commerce-api/src/main/java/com/loopers/application/product/ProductListCacheDto.java (1)

8-12: Consider a defensive copy for the record's List field

The cache DTO's content field holds the mutable List passed in by the caller as-is, so mutating that list can corrupt cached data. In practice Page.getContent() does not always return an unmodifiable list, so changing the original list after calling putList can cause cache inconsistency.

♻️ Defensive-copy example
 public record ProductListCacheDto(
         long totalElements,
         int number,
         int size,
         List<ProductListItemInfo> content) {
+    public ProductListCacheDto {
+        content = content != null ? List.copyOf(content) : List.of();
+    }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductListCacheDto.java`
around lines 8 - 12, ProductListCacheDto currently stores the incoming mutable
List<ProductListItemInfo> directly in the record field `content`, risking cache
corruption if the caller mutates the list; update the record to defensively
copy/unwrap the list in its canonical constructor (or provide a static factory)
so `content` holds an immutable copy (e.g., use List.copyOf(...) or new
ArrayList<>(...) wrapped as unmodifiable) to prevent external mutations from
affecting the cached DTO.
apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java (1)

27-37: Possible cache inconsistency on transaction rollback

The addLike() and removeLike() methods perform cache invalidation (evictDetail) inside @Transactional. If the transaction rolls back, the DB state is restored while the cache entry is already deleted; the next read repopulates the cache from the DB, so the inconsistency heals itself.

However, a Redis exception during cache eviction can roll back the business transaction. A cache-invalidation failure must not turn into a like-processing failure.

Recommended improvements:

  1. Perform eviction after commit via TransactionSynchronizationManager.registerSynchronization()
  2. Or wrap the evictDetail call in try-catch to isolate Redis exceptions
♻️ Post-commit invalidation example
`@Transactional`
public LikeInfo addLike(Long userId, Long productId) {
    LikeModel like = likeService.addLike(userId, productId);
    TransactionSynchronizationManager.registerSynchronization(
        new TransactionSynchronization() {
            `@Override`
            public void afterCommit() {
                productCacheService.evictDetail(productId);
            }
        }
    );
    return LikeInfo.from(like);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`
around lines 27 - 37, addLike(...) and removeLike(...) currently call
productCacheService.evictDetail(productId) inside the `@Transactional` boundary
which can cause the business transaction to fail if the cache eviction (Redis)
throws; change eviction to run after successful commit or isolate cache
failures: register a TransactionSynchronization via
TransactionSynchronizationManager.registerSynchronization(...) to perform
evictDetail(productId) in afterCommit(), or wrap evictDetail(...) in a try-catch
that logs Redis errors and swallows them so cache exceptions do not rollback the
likeService.addLike/removeLike transaction.
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductStatsModel.java (2)

45-47: @Getter and the manual getLikeCount() method are duplicated.

The class declares @Getter, so Lombok generates getLikeCount() automatically; the manual definition can be removed.

Suggested fix
-    public long getLikeCount() {
-        return likeCount;
-    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductStatsModel.java`
around lines 45 - 47, Remove the manually defined getLikeCount() method from the
ProductStatsModel class because the `@Getter` Lombok annotation already generates
a getter for the likeCount field; locate the getLikeCount() method in
ProductStatsModel and delete it (also scan for any other manually defined
getters for fields already covered by `@Getter` to avoid duplicates).

18-24: equals/hashCode are not implemented.

Because the JPA entity uses productId as its identifier, identity comparisons in collections (Set, Map, etc.) can misbehave. Current usage is mostly Optional or single lookups, so there is no immediate problem, but identifier-based equals/hashCode is worth considering as usage grows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductStatsModel.java`
around lines 18 - 24, ProductStatsModel currently lacks equals/hashCode which
can break identity in collections; implement equals(Object) and hashCode() on
ProductStatsModel using the identifier field productId (compare by productId
only, null-safe) so entity equality is based on the JPA identifier; ensure
equals checks instance/type and uses Objects.equals(productId, other.productId)
and hashCode returns Objects.hashCode(productId) to keep consistency with JPA
usage.
apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java (2)

141-142: deleteProduct has the same pre-commit invalidation issue.

As with updateProduct, cache invalidation runs before the transaction commits. For consistency, consider consolidating the invalidation logic and running it after commit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`
around lines 141 - 142, The deleteProduct method in ProductFacade currently
calls productCacheService.evictDetail(id) and productCacheService.evictList()
before the transaction commits (same as updateProduct), causing stale cache
issues; change the cache eviction to occur after successful commit by
centralizing the eviction logic (e.g., extract evictDetail/evictList invocations
into a shared method and invoke it from a post-commit hook) using
TransactionSynchronizationManager.registerSynchronization or an
`@TransactionalEventListener` so both updateProduct and deleteProduct perform
eviction only after commit.

133-134: Cache invalidation runs before the transaction commits, so a rollback can cause inconsistency.

evictDetail and evictList are called inside @Transactional, so on rollback the DB is unchanged while the cache has already been invalidated. Subsequent requests re-read the DB and repopulate the cache, so data consistency is preserved, but unnecessary cache misses occur.

Consider invalidating after commit via TransactionSynchronizationManager.registerSynchronization, or using @TransactionalEventListener(phase = AFTER_COMMIT).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`
around lines 133 - 134, ProductFacade currently calls
productCacheService.evictDetail(id) and productCacheService.evictList() inside a
transactional flow which can cause cache invalidation before commit; change this
so invalidation runs only after successful commit by registering a post-commit
callback (e.g., use TransactionSynchronizationManager.registerSynchronization
and perform evictDetail/evictList in afterCommit) or emit an event handled by a
method annotated with `@TransactionalEventListener`(phase = AFTER_COMMIT) that
calls productCacheService.evictDetail and evictList; modify the code paths that
currently call those methods directly so they instead register the
synchronization or publish the event.
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductStatsJpaRepository.java (1)

15-17: Consider first-level cache synchronization after the @Modifying query.

A @Modifying query bypasses the persistence context and updates the DB directly. Re-reading the entity within the same transaction can then return stale data from the first-level cache.

In the current usage there appears to be no ProductStatsModel read within the same transaction after incrementLikeCount, so there is no immediate problem, but adding clearAutomatically = true is worth considering.

Suggested fix
-    `@Modifying`
+    `@Modifying`(clearAutomatically = true)
     `@Query`(value = "UPDATE product_stats SET like_count = like_count + 1 WHERE product_id = :productId", nativeQuery = true)
     void incrementLikeCount(`@Param`("productId") Long productId);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductStatsJpaRepository.java`
around lines 15 - 17, The `@Modifying` query in ProductStatsJpaRepository (method
incrementLikeCount) bypasses the persistence context and can leave stale
entities in the first-level cache; update the `@Modifying` annotation to include
clearAutomatically = true so the persistence context is cleared after the native
update (or alternatively call EntityManager.clear() after the update) to ensure
subsequent within-transaction reads of ProductStatsModel reflect DB changes.
apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java (2)

137-146: removeLike has no retry logic, making it asymmetric with addLike.

addLike retries up to 10 times on deadlock/lock timeout, while removeLike fails in a single transaction without retry. The same concurrency issues can occur, so a consistent retry strategy should be considered.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java`
around lines 137 - 146, removeLike in LikeService lacks the retry loop addLike
uses, causing asymmetry for deadlock/lock-timeout scenarios; modify removeLike
to mirror addLike's behavior by retrying the transactional delete and
productStatsRepository.decrementLikeCount up to the same max attempts (e.g., 10)
with small backoff, catching the same DB concurrency exceptions used in addLike
(e.g., Deadlock/Pessimistic/LockTimeout or DataAccessException variants) and
rethrowing a CoreException only after exhausting retries; ensure each retry
executes within a new transaction boundary (reuse the existing `@Transactional`
method or extract a private transactional helper invoked per attempt) and
preserve the current validation, not-found handling, and final exception
semantics.

112-132: String-based exception detection is brittle.

Checking whether the message contains keywords such as "deadlock" or "lock" risks false positives/negatives. For example, an "unlock" message or a "lock" string from another context could match.

Using SQLException's getErrorCode() or getSQLState() allows more accurate detection.

Suggested fix
 private static boolean isDeadlockOrLockTimeout(DataAccessException e) {
+    Throwable cause = e.getCause();
+    while (cause != null) {
+        if (cause instanceof java.sql.SQLException sqlEx) {
+            int errorCode = sqlEx.getErrorCode();
+            // MySQL: 1213=Deadlock, 1205=Lock wait timeout
+            if (errorCode == 1213 || errorCode == 1205) {
+                return true;
+            }
+            String sqlState = sqlEx.getSQLState();
+            // SQLSTATE 40001 = Serialization failure (deadlock)
+            if ("40001".equals(sqlState)) {
+                return true;
+            }
+        }
+        cause = cause.getCause();
+    }
-    String msg = e.getMessage();
-    if (msg != null) {
-        String lower = msg.toLowerCase();
-        if (lower.contains("deadlock") || lower.contains("1213") || lower.contains("1205") || lower.contains("40001") || lower.contains("lock")) {
-            return true;
-        }
-    }
-    // ... rest of the method
     return false;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java`
around lines 112 - 132, isDeadlockOrLockTimeout currently relies on string
matching which is brittle; change it to unwrap the DataAccessException causes to
find any SQLException (or subclasses) and check SQLException.getErrorCode() and
SQLException.getSQLState() against known deadlock/lock-timeout codes (e.g.,
MySQL 1213, 1205 and SQLState "40001"/other DB-specific codes) instead of
matching generic substrings like "lock"; only fall back to message inspection if
no SQLException is found, and when doing so avoid naive "lock" matches (e.g.,
require word boundaries or negative checks like "unlock"). Ensure you update the
logic in isDeadlockOrLockTimeout to traverse causes, inspect SQLException
instances, and return true based on errorCode/SQLState matches.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/order/OrderFacade.java`:
- Around line 33-35: The Javadoc on OrderFacade claiming "single transaction"
conflicts with CouponService.validateAndUseInNewTransaction using REQUIRES_NEW;
update the Javadoc on OrderFacade to reflect the actual transaction boundaries
(e.g., note that coupon validation/usage runs in a separate REQUIRES_NEW
transaction) or, if a true single transaction is required, change the
propagation on CouponService.validateAndUseInNewTransaction to support the outer
transaction (e.g., REQUIRED) and adjust related logic; reference OrderFacade and
CouponService.validateAndUseInNewTransaction when making the change.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductCacheService.java`:
- Around line 56-58: In ProductCacheService, don't silently swallow
JsonProcessingException; update each catch block (the ones handling
JsonProcessingException in methods such as getList and the other deserialization
points in ProductCacheService) to log a WARN-level message via the class logger
including context (cache key or method name) and the exception (e.g.,
logger.warn("Failed to deserialize cached product for key {}", key, e)) before
returning Optional.empty(), so callers retain the same behavior but operators
can diagnose serialization/deserialization failures.
- Around line 75-86: Catch Redis connection/data access exceptions around all
Redis operations in ProductCacheService so Redis failures don't bubble up: wrap
redisTemplate.opsForValue().get(key) in getDetail() (and similarly in
getList()), redisTemplate.delete(...) in evictList()/evictDetail(), and the
write calls in putList()/putDetail() with a try/catch for
RedisConnectionFailureException/DataAccessException and log the error, returning
Optional.empty() for getters and silently returning (void) for evict/put so DB
fallback proceeds; also preserve existing JsonProcessingException handling for
objectMapper calls. Add integration tests simulating Redis down to assert
getters return empty and evict/put do not throw.
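The "cache failure must not break the read path" pattern asked for here can be shown generically. This is a sketch under stated assumptions: CacheReads is an illustrative name, and the real service would wrap RedisTemplate calls and catch RedisConnectionFailureException/DataAccessException rather than a blanket RuntimeException:

```java
import java.util.Optional;
import java.util.function.Supplier;

// Illustrative wrapper: any cache-layer failure degrades to a miss so the
// caller proceeds with the DB fallback instead of propagating the error.
final class CacheReads {
    static <T> Optional<T> getOrEmpty(Supplier<T> cacheRead) {
        try {
            return Optional.ofNullable(cacheRead.get());
        } catch (RuntimeException e) {
            // In the real service: log.warn("cache read failed", e);
            return Optional.empty();
        }
    }
}
```

Writes and evictions get the symmetric treatment: catch, log, and return, so a Redis outage never fails the business operation.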

In
`@apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponService.java`:
- Around line 68-88: validateAndUse currently calls
validateAndUseInNewTransaction within the same CouponService instance so Spring
AOP proxying (and REQUIRES_NEW propagation) is bypassed; fix by invoking the
proxied method instead — either add self-injection (inject `@Lazy` CouponService
self and call self.validateAndUseInNewTransaction(...)) or move the
`@Transactional`(propagation = Propagation.REQUIRES_NEW) logic into a separate
helper service (e.g., CouponUsageHelper.useInNewTransaction(...)) and call that
from validateAndUse; also update tests (CouponServiceRetryTest) to
integration-style tests that exercise OrderFacade transaction boundaries so you
can assert parent transaction is not rolled back on OptimisticLockException.

In `@apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java`:
- Around line 56-60: Add a DB UNIQUE constraint on likes (UNIQUE(user_id,
product_id)) and update the service to handle constraint violations: ensure
addLike/doAddLike catch DataIntegrityViolationException (or root cause
SQLIntegrityConstraintViolationException) and translate it to a CoreException
with ErrorType.CONFLICT ("이미 좋아요한 상품입니다.") so concurrent inserts return
CONFLICT; also consider adding SQL error code 1062 handling to
isDeadlockOrLockTimeout if you intend to special-case UNIQUE violations for
retry logic, and keep existing productService.findByIdAndNotDeleted and
likeRepository.existsByUserIdAndProductId checks but rely on the DB constraint
plus the exception translation to prevent race-condition duplicates.
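The exception-translation step suggested above can be sketched in plain Java. DuplicateLikeException stands in for CoreException(ErrorType.CONFLICT), and the cause walk mirrors how a Spring DataIntegrityViolationException carries the underlying SQL exception; MySQL reports duplicate keys as vendor error code 1062:

```java
import java.sql.SQLIntegrityConstraintViolationException;

// Illustrative translator: unwraps the cause chain and converts a MySQL
// duplicate-key violation (1062) into a domain conflict exception.
final class LikeConflictTranslator {
    static class DuplicateLikeException extends RuntimeException {
        DuplicateLikeException(String msg) { super(msg); }
    }

    static RuntimeException translate(RuntimeException e) {
        for (Throwable c = e; c != null; c = c.getCause()) {
            if (c instanceof SQLIntegrityConstraintViolationException sqlEx
                    && sqlEx.getErrorCode() == 1062) {
                return new DuplicateLikeException("이미 좋아요한 상품입니다.");
            }
        }
        return e; // not a duplicate-key violation; rethrow as-is
    }
}
```

With the UNIQUE(user_id, product_id) constraint in place, the existsByUserIdAndProductId pre-check becomes a fast path while the constraint plus this translation closes the race window.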

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java`:
- Around line 77-92: The method findNotDeletedOrderByLikesDesc currently orders
by productStatsModel.likeCount.coalesce(0L).desc() then productModel.id.asc(),
which forces a filesort because product_id ordering doesn't match the
product_stats index; either remove the secondary sort if UX allows or update DB
indexing to a composite index matching the ORDER BY (like_count DESC, product_id
ASC) to avoid filesort, and ensure the COUNT query used for total (the
queryFactory.select(productModel.count()) block) uses the same join/where shape
as the fetch query (include the LEFT JOIN on productStatsModel or otherwise
align conditions) so totals stay consistent; after changes, add a
unit/integration test for findNotDeletedOrderByLikesDesc that verifies paging
behavior and that query plans/no-filesort expectation hold.

In
`@apps/commerce-api/src/main/java/com/loopers/support/perf/ProductDataSeeder.java`:
- Around line 66-67: The seeder currently hardcodes brand_id ranges which can
break referential integrity; change seedBrands() to return the list/set of
created Brand IDs (or fetch existing created IDs) and modify seedProducts(int
productCount, List<Long> brandIds) / seedProducts(...) to pick brand_id values
by cycling/randomly from that returned list instead of using fixed 1..500;
update calls where seedProducts is invoked to pass the returned brandIds from
seedBrands(); finally add an integration test that seeds into a DB with
pre-existing data and asserts every persisted product.brand_id is contained in
the set returned by seedBrands() to validate the seeding integrity.
- Around line 76-85: The current getProductCount(ApplicationArguments args)
silently falls back to DEFAULT_PRODUCT_COUNT on parse errors — change it to
validate and fail fast: parse the option (or system property) inside
getProductCount, and if Integer.parseInt throws NumberFormatException rethrow an
IllegalArgumentException (or a domain-specific exception) that preserves the
original exception as the cause; also validate the parsed value is > 0 and throw
similarly for zero or negative inputs instead of returning
DEFAULT_PRODUCT_COUNT; update callers/tests to expect exceptions for inputs
"abc", "-1", "0" and a normal result for "1000"; keep logging separate from the
thrown exception (log contextual info where appropriate, but do not swallow the
cause).
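The fail-fast parsing asked for here looks like the following. SeederArgs and its string-based signature are a plain-Java stand-in for the ApplicationArguments-based method in ProductDataSeeder, and the default value is illustrative:

```java
// Illustrative fail-fast option parsing: an absent option uses the default,
// but a malformed or non-positive value throws instead of silently falling back.
final class SeederArgs {
    static final int DEFAULT_PRODUCT_COUNT = 10_000; // assumed default

    static int getProductCount(String raw) {
        if (raw == null) {
            return DEFAULT_PRODUCT_COUNT; // option not provided: default is fine
        }
        final int parsed;
        try {
            parsed = Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            // preserve the original exception as the cause
            throw new IllegalArgumentException("productCount is not a number: " + raw, e);
        }
        if (parsed <= 0) {
            throw new IllegalArgumentException("productCount must be > 0: " + raw);
        }
        return parsed;
    }
}
```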

In
`@apps/commerce-api/src/test/java/com/loopers/application/product/ProductFacadeTest.java`:
- Around line 48-49: The current tests stub productCacheService.getDetail()/getList()
only as Optional.empty(), so only the cache-miss path is verified; add cache-hit
path tests: stub productCacheService.getDetail(PRODUCT_ID) and getList(request
params) as Optional.of(cachedValue) and confirm that
productFacade.getProductDetail(...) and getProductList(...) return the cached
data as-is (assert present/equal), add verify(productService,
never()).findByIdAndNotDeleted(...) and verify(brandService,
never()).findByIdAndNotDeleted(...) to confirm no DB calls are made, and
(optionally) also verify that putDetail/putList are invoked on a cache miss.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/like/LikeConcurrencyIntegrationTest.java`:
- Around line 87-88: The assertion calls assertThat(...) then isEqualTo(...)
before as(...), so the description won't display on failure; change the call
order on the assertion using assertThat(successCount.get() +
conflictCount.get()).as("모든 요청은 성공 또는 CONFLICT로 처리되어야 함").isEqualTo(userCount)
so as() is invoked before isEqualTo() (referencing assertThat, as(),
isEqualTo(), successCount, conflictCount, userCount).
- Around line 92-95: In LikeConcurrencyIntegrationTest, the AssertJ descriptions
are attached after the assertion (assertThat(...).isEqualTo(...).as(...)), so
the messages won't be applied; update both assertions comparing likeCount with
likeCountFromStats and likeCount with successCount.get() to call .as("...") (or
.describedAs) immediately after assertThat(...) and before .isEqualTo(...),
ensuring the descriptive message is part of the assertion chain.

In `@docs/load-test/k6-likes-write.js`:
- Around line 50-52: The check currently treating HTTP 400 as success hides
client-error regressions; update the response checks so the POST check labeled
"like POST 201 or 409" only returns success for r.status === 201 || r.status ===
409 (remove 400), and update the corresponding DELETE check to accept only 204
|| 404. Additionally add a negative test case that sends an intentionally
malformed payload to assert that 400 responses are counted as failures (so 400s
are detected separately). Locate and modify the check callbacks named "like POST
201 or 409" and the DELETE check (lines shown in the diff) and add a distinct
test that verifies malformed payload -> 400 is recorded as failure.

In `@docs/load-test/k6-order-detail.js`:
- Around line 20-21: Validate the parsed MIN_ORDER_ID and MAX_ORDER_ID
immediately after parseInt: check that neither is NaN, both are integers (use
Number.isInteger on the parsed values), and that MIN_ORDER_ID <= MAX_ORDER_ID;
if any check fails, log a clear error including the raw env input and abort the
k6 run (use k6's fail()/exit mechanism) so the load test doesn't generate
invalid 4xx traffic. Update the same validation for the other env-derived
constants referenced around lines 33-35, and add unit/manual checks for
MIN_ORDER_ID=abc and MIN_ORDER_ID=100 with MAX_ORDER_ID=10 to confirm the script
fails fast.

In `@docs/load-test/k6-orders-list.js`:
- Around line 30-33: Validate and sanitize the load-test env vars: ensure PAGE
and SIZE are parsed as integers (use parseInt) and reject or clamp
non-numeric/negative values (e.g., treat NaN or negative as default 0/20 or fail
the scenario), and URL-encode time strings START and END before appending to the
request URL; update the k6 setup that reads PAGE, SIZE, START, END to perform
these checks and encoding and add explicit test cases for PAGE=abc, SIZE=-1,
START=2026-01-01T00:00:00+09:00 to assert the constructed request URL and
expected 4xx/2xx status codes.

In `@docs/load-test/k6-product-pdp.js`:
- Around line 17-18: Normalize or validate the MIN_ID/MAX_ID inputs after
parsing: detect when parseInt(__ENV.MIN_PRODUCT_ID...) yields a value greater
than parseInt(__ENV.MAX_PRODUCT_ID...) and either swap them (normalize) or throw
a fatal error to stop execution; update the logic around MIN_ID and MAX_ID (the
constants produced by parseInt) to perform this check and handle the
reversed-range case, and add a test/step to ensure running with
MIN_PRODUCT_ID=100 and MAX_PRODUCT_ID=1 causes the script to either exit with an
error or proceed with the normalized range depending on chosen behavior.

In `@docs/load-test/k6-product-plp.js`:
- Around line 20-27: The current constants SORT and PAGE are computed once at
load time causing fixed traffic; move their evaluation into the per-iteration
entry point (the exported default function) so each VU/iteration recalculates
them. Inside the default function, compute local variables (e.g., const sort =
__ENV.SORT && SORTS.includes(__ENV.SORT) ? __ENV.SORT :
SORTS[Math.floor(Math.random()*SORTS.length)]; and const page = __ENV.PAGE !==
undefined ? parseInt(__ENV.PAGE,10) : Math.floor(Math.random()*3);) and replace
usages of SORT and PAGE with these local variables; after the change run the
scenario and inspect k6 output or server access logs to confirm varied sort/page
combinations over iterations.

In `@docs/load-test/local-auth-setup.sh`:
- Around line 22-29: The script currently uses a fixed email which can
misclassify POST results; update the user creation request (the curl POST that
sets RES) to use a unique email derived from LOGIN_ID (e.g.,
"${LOGIN_ID}@example.com") instead of a constant, and change the response
handling around RES: on CONFLICT (or "이미 존재") do not treat it as a blind
success—call the user-existence endpoint (using BASE_URL and LOGIN_ID, e.g., a
GET to /api/v1/users or a user-specific GET) to confirm the account for LOGIN_ID
actually exists and only proceed if that verification succeeds; if verification
fails, fail the script (non-zero exit) and surface the full RES payload in the
error log so failed creation is visible.

In `@docs/load-test/run-all-k6-for-scale.sh`:
- Around line 30-35: The current parsing for err and duration tokens (variables
err, avg, p95, p99, max) is fragile: change the err extraction to accept
variable dots/spaces (e.g., use a sed capture like
's/.*http_req_failed[.[:space:]]*: ([0-9.]+)%.*/\1/') and update the duration
parsing to capture both value and unit for each token (avg, p(95), p(99), max),
then normalize units by converting µs to ms (divide numeric µs values by 1000
and keep ms as-is) before assigning to avg/p95/p99/max so all outputs use
consistent ms units; verify with the provided sample k6 outputs to ensure both
variable-dot err lines and mixed µs/ms durations are parsed and normalized
correctly.

In `@docs/load-test/run-all-k6-phase.sh`:
- Around line 25-28: The script currently swallows k6 exit codes by using "||
true" when capturing output into the out variable (e.g. out=$(k6 run ... ) ||
true), which lets failed runs still append results; change this so the k6 exit
code is preserved: run k6 without "|| true", capture its exit status ($?), and
if non‑zero immediately print an error to stderr, remove temporary files (e.g.
RESULTS_TMP and MD_BLOCK) and exit non‑zero to prevent appending to
phase-benchmark-results.md; apply the same pattern to the other k6 invocation
sites (the blocks referenced around lines 41-44, 57-60, 95) and ensure any run_*
wrapper checks and propagates the k6 exit code before continuing.

---

Outside diff comments:
In
`@apps/commerce-api/src/test/java/com/loopers/domain/like/LikeConcurrencyIntegrationTest.java`:
- Around line 117-126: The concurrency_multipleUsersRemoveLike test is
swallowing all Exceptions in the thread, unlike
concurrency_multipleUsersLikeSameProduct which distinguishes CoreException and
tracks unexpected errors; update the Runnable around likeService.removeLike(uid,
productId) to catch CoreException (or the domain-specific checked exception used
by likeService) and increment/removeCount on success, but capture any other
unexpected Throwable into a shared atomic/collection (e.g., AtomicInteger
unexpectedErrorCount or AtomicReference<Throwable> unexpectedError) so the test
can assert none occurred, and ensure done.countDown() is still called in
finally; reference likeService.removeLike, concurrency_multipleUsersRemoveLike,
removeCount, start and done when locating the change.

---

Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`:
- Around line 27-37: addLike(...) and removeLike(...) currently call
productCacheService.evictDetail(productId) inside the `@Transactional` boundary
which can cause the business transaction to fail if the cache eviction (Redis)
throws; change eviction to run after successful commit or isolate cache
failures: register a TransactionSynchronization via
TransactionSynchronizationManager.registerSynchronization(...) to perform
evictDetail(productId) in afterCommit(), or wrap evictDetail(...) in a try-catch
that logs Redis errors and swallows them so cache exceptions do not rollback the
likeService.addLike/removeLike transaction.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`:
- Around line 141-142: The deleteProduct method in ProductFacade currently calls
productCacheService.evictDetail(id) and productCacheService.evictList() before
the transaction commits (same as updateProduct), causing stale cache issues;
change the cache eviction to occur after successful commit by centralizing the
eviction logic (e.g., extract evictDetail/evictList invocations into a shared
method and invoke it from a post-commit hook) using
TransactionSynchronizationManager.registerSynchronization or an
`@TransactionalEventListener` so both updateProduct and deleteProduct perform
eviction only after commit.
- Around line 133-134: ProductFacade currently calls
productCacheService.evictDetail(id) and productCacheService.evictList() inside a
transactional flow which can cause cache invalidation before commit; change this
so invalidation runs only after successful commit by registering a post-commit
callback (e.g., use TransactionSynchronizationManager.registerSynchronization
and perform evictDetail/evictList in afterCommit) or emit an event handled by a
method annotated with `@TransactionalEventListener`(phase = AFTER_COMMIT) that
calls productCacheService.evictDetail and evictList; modify the code paths that
currently call those methods directly so they instead register the
synchronization or publish the event.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductListCacheDto.java`:
- Around line 8-12: ProductListCacheDto currently stores the incoming mutable
List<ProductListItemInfo> directly in the record field `content`, risking cache
corruption if the caller mutates the list; update the record to defensively
copy/unwrap the list in its canonical constructor (or provide a static factory)
so `content` holds an immutable copy (e.g., use List.copyOf(...) or new
ArrayList<>(...) wrapped as unmodifiable) to prevent external mutations from
affecting the cached DTO.

In
`@apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponService.java`:
- Around line 79-84: The try/catch that calls Thread.sleep(backoffMs) in
CouponService blocks the request thread (even if backoffMs is small), so add a
clear Javadoc comment on the enclosing method (or on the CouponService
class/method that contains the Thread.sleep and InterruptedException handling)
that documents the blocking behavior, the chosen default backoffMs (50ms) and
the tradeoff/risk of thread-pool exhaustion under high load; mention that
callers should avoid long-running synchronous retries or switch to
non-blocking/async retry if needed.

In `@apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java`:
- Around line 137-146: removeLike in LikeService lacks the retry loop addLike
uses, causing asymmetry for deadlock/lock-timeout scenarios; modify removeLike
to mirror addLike's behavior by retrying the transactional delete and
productStatsRepository.decrementLikeCount up to the same max attempts (e.g., 10)
with small backoff, catching the same DB concurrency exceptions used in addLike
(e.g., Deadlock/Pessimistic/LockTimeout or DataAccessException variants) and
rethrowing a CoreException only after exhausting retries; ensure each retry
executes within a new transaction boundary (reuse the existing `@Transactional`
method or extract a private transactional helper invoked per attempt) and
preserve the current validation, not-found handling, and final exception
semantics.
- Around line 112-132: isDeadlockOrLockTimeout currently relies on string
matching which is brittle; change it to unwrap the DataAccessException causes to
find any SQLException (or subclasses) and check SQLException.getErrorCode() and
SQLException.getSQLState() against known deadlock/lock-timeout codes (e.g.,
MySQL 1213, 1205 and SQLState "40001"/other DB-specific codes) instead of
matching generic substrings like "lock"; only fall back to message inspection if
no SQLException is found, and when doing so avoid naive "lock" matches (e.g.,
require word boundaries or negative checks like "unlock"). Ensure you update the
logic in isDeadlockOrLockTimeout to traverse causes, inspect SQLException
instances, and return true based on errorCode/SQLState matches.

In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductModel.java`:
- Around line 15-24: Update the ProductModel JavaDoc to explicitly state that
the likes_desc PLP sorting uses product_stats.like_count (i.e., sorting is
performed from the product_stats table) and therefore index management must
include product_stats; reference the ProductModel class and its existing PLP
index comments and add a note like "likes_desc: relies on
product_stats.like_count — ensure appropriate index exists on product_stats" and
also add a reviewer TODO to verify/update the product_stats table indexes (e.g.,
on product_id + like_count) so maintainers know to manage indexes across both
tables.

In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductStatsModel.java`:
- Around line 45-47: Remove the manually defined getLikeCount() method from the
ProductStatsModel class because the `@Getter` Lombok annotation already generates
a getter for the likeCount field; locate the getLikeCount() method in
ProductStatsModel and delete it (also scan for any other manually defined
getters for fields already covered by `@Getter` to avoid duplicates).
- Around line 18-24: ProductStatsModel currently lacks equals/hashCode which can
break identity in collections; implement equals(Object) and hashCode() on
ProductStatsModel using the identifier field productId (compare by productId
only, null-safe) so entity equality is based on the JPA identifier; ensure
equals checks instance/type and uses Objects.equals(productId, other.productId)
and hashCode returns Objects.hashCode(productId) to keep consistency with JPA
usage.
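The identifier-based equality described above can be sketched as follows. ProductStats is an illustrative stand-in for the ProductStatsModel entity, with all fields other than productId omitted:

```java
import java.util.Objects;

// Illustrative JPA-style entity equality: identity is based solely on the
// identifier field, null-safe via Objects.equals / Objects.hashCode.
class ProductStats {
    private final Long productId;

    ProductStats(Long productId) { this.productId = productId; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ProductStats other)) return false;
        return Objects.equals(productId, other.productId);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(productId);
    }
}
```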

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductStatsJpaRepository.java`:
- Around line 15-17: The `@Modifying` query in ProductStatsJpaRepository (method
incrementLikeCount) bypasses the persistence context and can leave stale
entities in the first-level cache; update the `@Modifying` annotation to include
clearAutomatically = true so the persistence context is cleared after the native
update (or alternatively call EntityManager.clear() after the update) to ensure
subsequent within-transaction reads of ProductStatsModel reflect DB changes.

In
`@apps/commerce-api/src/test/java/com/loopers/application/order/OrderFacadeTest.java`:
- Around line 64-66: Rename the nested test class PlaceOrderWithCouponRetry to
PlaceOrderWithCoupon to reflect current behavior; update the class declaration
(the nested class annotated with `@Nested` and `@DisplayName`("placeOrder (쿠폰 적용)
시")) and any internal or external references (test class name usages, imports,
or inner-class instantiations) to the new identifier so the test compiles and
the DisplayName still matches expected description.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/coupon/CouponServiceRetryTest.java`:
- Around line 57-59: Remove the redundant type assertion: since you already use
assertThrows(CoreException.class, ...) to obtain ex, delete the line asserting
assertThat(ex).isInstanceOf(CoreException.class); keep the calls.get() count
assertion and the message assertion (assertThat(ex.getMessage()).isEqualTo(...))
so only one type check (via assertThrows) remains; locate the assertions around
the variable ex and the calls.get() check in CouponServiceRetryTest.
- Around line 48-60: Add a unit test for validateAndUse to cover the
InterruptedException path: create a TestableCouponService and override
validateAndUseInNewTransaction(...) to increment the AtomicInteger, call
Thread.currentThread().interrupt() and then throw an OptimisticLockException so
the outer validateAndUse's retry/backoff hits Thread.sleep and receives an
InterruptedException; assert that validateAndUse throws a CoreException with
message "재시도 대기 중 중단되었습니다." and that the expected call count was made.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ad138158-65b3-47e7-a6ff-01d9449879e6

📥 Commits

Reviewing files that changed from the base of the PR and between deec7bc and 90be076.

⛔ Files ignored due to path filters (9)
  • .docs/design/05-transaction-query.md is excluded by !**/*.md and included by **
  • docs/performance-improvement-as-is-to-be.md is excluded by !**/*.md and included by **
  • docs/performance-optimization-roadmap.md is excluded by !**/*.md and included by **
  • docs/performance-risk-inspection.md is excluded by !**/*.md and included by **
  • docs/performance/README.md is excluded by !**/*.md and included by **
  • docs/performance/benchmark-100k.md is excluded by !**/*.md and included by **
  • docs/performance/benchmark-1m.md is excluded by !**/*.md and included by **
  • docs/performance/benchmark-200k.md is excluded by !**/*.md and included by **
  • docs/performance/benchmark-500k.md is excluded by !**/*.md and included by **
📒 Files selected for processing (31)
  • .gitignore
  • apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
  • apps/commerce-api/src/main/java/com/loopers/application/order/OrderFacade.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductCacheService.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductListCacheDto.java
  • apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponService.java
  • apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java
  • apps/commerce-api/src/main/java/com/loopers/domain/order/OrderModel.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/ProductModel.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/ProductStatsModel.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/ProductStatsRepository.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductStatsJpaRepository.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductStatsRepositoryImpl.java
  • apps/commerce-api/src/main/java/com/loopers/support/perf/ProductDataSeeder.java
  • apps/commerce-api/src/test/java/com/loopers/application/order/OrderFacadeTest.java
  • apps/commerce-api/src/test/java/com/loopers/application/product/ProductFacadeTest.java
  • apps/commerce-api/src/test/java/com/loopers/domain/coupon/CouponServiceRetryTest.java
  • apps/commerce-api/src/test/java/com/loopers/domain/like/LikeConcurrencyIntegrationTest.java
  • apps/commerce-api/src/test/java/com/loopers/domain/like/LikeServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/domain/product/ProductServiceTest.java
  • docs/load-test/k6-likes-write.js
  • docs/load-test/k6-order-detail.js
  • docs/load-test/k6-orders-list.js
  • docs/load-test/k6-product-pdp.js
  • docs/load-test/k6-product-plp.js
  • docs/load-test/local-auth-setup.sh
  • docs/load-test/run-all-k6-for-scale.sh
  • docs/load-test/run-all-k6-phase.sh
💤 Files with no reviewable changes (1)
  • .gitignore

Comment on lines +33 to +35
/**
* 주문 접수. 단일 트랜잭션: 재고 락 선점 → 검증·스냅샷·차감 → (쿠폰 시) 검증·사용 → 주문 생성.
*/

⚠️ Potential issue | 🟡 Minor

Javadoc and the intended transaction design are inconsistent.

The Javadoc states "단일 트랜잭션" (single transaction), but CouponService.validateAndUseInNewTransaction intends a separate transaction via REQUIRES_NEW. Once the REQUIRES_NEW issue in CouponService is fixed, this documentation becomes inaccurate.

Either revise the Javadoc to match the design intent, or, if a single transaction really is the intent, revisit the propagation strategy on the CouponService side.

Suggested Javadoc fix (if REQUIRES_NEW is the intent)
-    /** 주문 유스케이스 조율. 트랜잭션 경계, 도메인 → OrderInfo 변환. */
+    /** 
+     * 주문 유스케이스 조율. 
+     * 재고 락 선점/검증/차감은 주 트랜잭션에서, 쿠폰 사용은 별도 트랜잭션(REQUIRES_NEW)에서 처리.
+     * 도메인 → OrderInfo 변환.
+     */
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/order/OrderFacade.java`
around lines 33 - 35, The Javadoc on OrderFacade claiming "single transaction"
conflicts with CouponService.validateAndUseInNewTransaction using REQUIRES_NEW;
update the Javadoc on OrderFacade to reflect the actual transaction boundaries
(e.g., note that coupon validation/usage runs in a separate REQUIRES_NEW
transaction) or, if a true single transaction is required, change the
propagation on CouponService.validateAndUseInNewTransaction to support the outer
transaction (e.g., REQUIRED) and adjust related logic; reference OrderFacade and
CouponService.validateAndUseInNewTransaction when making the change.

Comment on lines +56 to +58
} catch (JsonProcessingException e) {
return Optional.empty();
}

⚠️ Potential issue | 🟡 Minor

No logging when JSON processing exceptions are swallowed

When JsonProcessingException occurs, the exception is ignored, so the cause of cache serialization/deserialization failures cannot be determined. This makes debugging difficult during incidents caused by DTO structure changes or ObjectMapper configuration mismatches in production.

At minimum, add WARN-level logging so failure causes can be traced.

🔍 Logging example (getList)
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;

 `@Service`
 public class ProductCacheService {
+    private static final Logger log = LoggerFactory.getLogger(ProductCacheService.class);

     // ... 생략 ...

         } catch (JsonProcessingException e) {
+            log.warn("PLP 캐시 역직렬화 실패: key={}", key, e);
             return Optional.empty();
         }

Also applies to: 71-72, 83-84, 93-94

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductCacheService.java`
around lines 56 - 58, In ProductCacheService, don't silently swallow
JsonProcessingException; update each catch block (the ones handling
JsonProcessingException in methods such as getList and the other deserialization
points in ProductCacheService) to log a WARN-level message via the class logger
including context (cache key or method name) and the exception (e.g.,
logger.warn("Failed to deserialize cached product for key {}", key, e)) before
returning Optional.empty(), so callers retain the same behavior but operators
can diagnose serialization/deserialization failures.

Comment on lines 56 to 60
productService.findByIdAndNotDeleted(productId)
.orElseThrow(() -> new CoreException(ErrorType.NOT_FOUND, "상품을 찾을 수 없습니다: " + productId));
if (likeRepository.existsByUserIdAndProductId(userId, productId)) {
throw new CoreException(ErrorType.CONFLICT, "이미 좋아요한 상품입니다.");
}
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check UNIQUE constraints on the likes table
rg -n "unique|UNIQUE" --type=java -g '**/Like*.java' -g '**/schema*.sql' -A 2 -B 2

# Check the LikeModel entity definition
ast-grep --pattern 'class LikeModel {
  $$$
}'

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


🏁 Script executed:

#!/bin/bash
# Map the file layout and find Like-related files
find . -type f -name "*Like*.java" | head -20
find . -type f -name "*.sql" | grep -i like | head -10

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1509


🏁 Script executed:

#!/bin/bash
# Dump the full contents of LikeService.java
echo "=== LikeService.java ==="
cat -n ./apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java

echo -e "\n=== LikeModel.java ==="
cat -n ./apps/commerce-api/src/main/java/com/loopers/domain/like/LikeModel.java

echo -e "\n=== LikeRepository.java ==="
cat -n ./apps/commerce-api/src/main/java/com/loopers/domain/like/LikeRepository.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 11825


🏁 Script executed:

#!/bin/bash
# Check UNIQUE constraints in LikeJpaRepository
echo "=== LikeJpaRepository.java ==="
cat -n ./apps/commerce-api/src/main/java/com/loopers/infrastructure/like/LikeJpaRepository.java

# Look for DB schema files
echo -e "\n=== Database Schema Files ==="
find . -type f -name "*.sql" | grep -i like

# Check for UNIQUE in migration files
find . -type f -name "*.sql" | head -20 | xargs grep -l "likes" 2>/dev/null

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1265


🏁 Script executed:

#!/bin/bash
# Find schema/migration files
find . -type f \( -name "*.sql" -o -name "*.xml" -o -name "schema*" \) 2>/dev/null | grep -i -E "(flyway|migration|schema|db)" | head -20

# Or inspect the resources directory structure
find ./apps/commerce-api/src -type d -name resources

# Check LikeRepositoryImpl
echo "=== LikeRepositoryImpl.java ==="
cat -n ./apps/commerce-api/src/main/java/com/loopers/infrastructure/like/LikeRepositoryImpl.java

# Check the concurrency test
echo -e "\n=== LikeConcurrencyIntegrationTest.java ==="
wc -l ./apps/commerce-api/src/test/java/com/loopers/domain/like/LikeConcurrencyIntegrationTest.java
head -100 ./apps/commerce-api/src/test/java/com/loopers/domain/like/LikeConcurrencyIntegrationTest.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 6866


🏁 Script executed:

#!/bin/bash
# Find schema/migration files under the resources folder
find ./apps/commerce-api/src/main/resources -type f \( -name "*.sql" -o -name "*.xml" \) 2>/dev/null | head -20

# Find db-related folders
find ./apps/commerce-api -type d \( -name db -o -name migration -o -name flyway \) 2>/dev/null

# Check BaseEntity
echo "=== BaseEntity.java ==="
find . -type f -name "BaseEntity.java" | head -1 | xargs cat -n

# Full resources folder structure
echo -e "\n=== Resources Structure ==="
find ./apps/commerce-api/src/main/resources -type f | head -30

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 2399


🏁 Script executed:

#!/bin/bash
# Check DB settings in application.yml
echo "=== application.yml ==="
cat ./apps/commerce-api/src/main/resources/application.yml

# Search the whole codebase for UNIQUE-constraint-related code
echo -e "\n=== Searching for UNIQUE constraint ==="
rg -i "unique|duplicate|1062|integrity" --type=java ./apps/commerce-api/src/main/java/com/loopers/domain/like/ -A 2 -B 2

# Re-inspect the save call and exception handling in doAddLike
echo -e "\n=== Checking Exception handling in doAddLike ==="
sed -n '103,110p' ./apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java

# UNIQUE-constraint-related content in the tests
echo -e "\n=== Checking test for duplicate like scenario ==="
grep -n "duplicate\|이미\|CONFLICT" ./apps/commerce-api/src/test/java/com/loopers/domain/like/LikeConcurrencyIntegrationTest.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1831


🏁 Script executed:

#!/bin/bash
# Find DB migration or initialization scripts
find . -type f \( -name "*.sql" -o -name "V*.sql" -o -name "*.properties" \) 2>/dev/null | grep -E "(db|migration|schema|flyway)" | head -20

# Find SQL files across the whole tree (all locations)
find . -type f -name "*.sql" 2>/dev/null | head -20

# Also check test resources
find ./apps/commerce-api/src/test/resources -type f 2>/dev/null

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


🏁 Script executed:

#!/bin/bash
# Check JPA auto schema generation settings
grep -r "jpa\|hibernate\|ddl-auto" ./apps/commerce-api/src/main/resources/ 2>/dev/null

# Check H2-related settings in the test config
cat ./apps/commerce-api/src/test/resources/application-test.yml 2>/dev/null || echo "file not found"
find ./apps/commerce-api/src/test/resources -name "*.yml" -o -name "*.yaml"

# Check setUp or other initialization code in the tests
grep -r "@Sql\|@BeforeEach\|schema" ./apps/commerce-api/src/test/java/com/loopers/domain/like/ 2>/dev/null | head -20

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 347


🏁 Script executed:

#!/bin/bash
# Check jpa.yml - UNIQUE constraint auto-generation settings
cat ./apps/commerce-api/src/main/resources/jpa.yml

# Re-check whether LikeModel has `@UniqueConstraint` (table level)
echo -e "\n=== LikeModel `@Table` annotation ==="
grep -A 5 "@Table" ./apps/commerce-api/src/main/java/com/loopers/domain/like/LikeModel.java

# Re-check the full table definition
echo -e "\n=== LikeModel Full Content ==="
cat ./apps/commerce-api/src/main/java/com/loopers/domain/like/LikeModel.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1696


🏁 Script executed:

#!/bin/bash
# Broader search for jpa.yml
find . -name "jpa.yml" -o -name "jpa.yaml" 2>/dev/null

# Check the application profile folder structure
ls -la ./apps/commerce-api/src/main/resources/ 2>/dev/null | grep -E "application|yml"

# Or config files in other locations
find ./apps/commerce-api -name "*.yml" -o -name "*.yaml" 2>/dev/null | head -20

# Check whether Hibernate DDL auto-generation is configured via properties
grep -r "hibernate\|ddl" . --include="*.properties" --include="*.yml" --include="*.yaml" 2>/dev/null | head -10

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 726


🏁 Script executed:

#!/bin/bash
# Full jpa.yml
echo "=== jpa.yml ==="
cat ./modules/jpa/src/main/resources/jpa.yml

# One more check that LikeModel really has no UNIQUE constraint (whole file)
echo -e "\n=== Checking for `@UniqueConstraint` in LikeModel ==="
grep -i "unique\|constraint" ./apps/commerce-api/src/main/java/com/loopers/domain/like/LikeModel.java

# Check for INDEX or other annotations
echo -e "\n=== All annotations in LikeModel ==="
grep "^@" ./apps/commerce-api/src/main/java/com/loopers/domain/like/LikeModel.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 2416


Add a UNIQUE constraint to the DB and implement handling for constraint violations.

Since addLike is not annotated with @Transactional, a race condition can occur between the existsByUserIdAndProductId check and the doAddLike call. Two concurrent requests can both pass the check and then each attempt an INSERT; without a UNIQUE(user_id, product_id) constraint on the likes table, duplicate rows are created.

Suggested fix:

  1. Add a UNIQUE constraint to the DB schema: ALTER TABLE likes ADD UNIQUE KEY uk_user_product (user_id, product_id);
  2. Handle DataIntegrityViolationException in doAddLike so that only the first of the concurrent requests takes effect and later ones return a CONFLICT error
  3. Consider adding the UNIQUE-violation error code (1062) to isDeadlockOrLockTimeout for the retry logic (separate handling is preferable, though, since retries are unlikely to succeed)

Additional tests:
LikeConcurrencyIntegrationTest already covers concurrent duplicate-INSERT attempts, but it must be re-verified after the DB constraint is added.
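The constraint-backed flow recommended above can be sketched in plain Java. This is a hypothetical simulation, not the repository's actual classes: the ConcurrentHashMap-backed set stands in for the DB's UNIQUE(user_id, product_id) index, and DuplicateKeyException stands in for Spring's DataIntegrityViolationException.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class LikeInsertSketch {
    static class DuplicateKeyException extends RuntimeException {}

    // Simulated likes table: add() is atomic, like an INSERT hitting a unique index.
    private final Set<String> likes = ConcurrentHashMap.newKeySet();

    void insert(long userId, long productId) {
        if (!likes.add(userId + ":" + productId)) {
            throw new DuplicateKeyException(); // the DB rejects the second INSERT
        }
    }

    /** Returns "CREATED" or "CONFLICT"; relies on the constraint, not a pre-check. */
    String addLike(long userId, long productId) {
        try {
            insert(userId, productId);
            return "CREATED";
        } catch (DuplicateKeyException e) {
            // Translate the constraint violation into the API-level CONFLICT error
            return "CONFLICT";
        }
    }

    public static void main(String[] args) {
        LikeInsertSketch s = new LikeInsertSketch();
        System.out.println(s.addLike(1L, 10L)); // CREATED
        System.out.println(s.addLike(1L, 10L)); // CONFLICT
    }
}
```

The point of the sketch is that correctness no longer depends on the exists-then-insert ordering: even if two threads race past any pre-check, the atomic insert lets exactly one succeed.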

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java`
around lines 56 - 60, Add a DB UNIQUE constraint on likes (UNIQUE(user_id,
product_id)) and update the service to handle constraint violations: ensure
addLike/doAddLike catch DataIntegrityViolationException (or root cause
SQLIntegrityConstraintViolationException) and translate it to a CoreException
with ErrorType.CONFLICT ("이미 좋아요한 상품입니다.") so concurrent inserts return
CONFLICT; also consider adding SQL error code 1062 handling to
isDeadlockOrLockTimeout if you intend to special-case UNIQUE violations for
retry logic, and keep existing productService.findByIdAndNotDeleted and
likeRepository.existsByUserIdAndProductId checks but rely on the DB constraint
plus the exception translation to prevent race-condition duplicates.

Comment on lines +20 to +27
const SORT =
__ENV.SORT && SORTS.includes(__ENV.SORT)
? __ENV.SORT
: SORTS[Math.floor(Math.random() * SORTS.length)];

// Page: random 0-2 when unset (deep paging, e.g. PAGE=50, is measured in a separate scenario)
const PAGE =
__ENV.PAGE !== undefined ? parseInt(__ENV.PAGE, 10) : Math.floor(Math.random() * 3);
⚠️ Potential issue | 🟠 Major

The current implementation runs with fixed traffic, not a mixed-traffic profile.

Operationally, when the sort/page values are decided only once at startup, a single index/cache path is over-sampled and the run no longer reproduces real traffic.
As a fix, SORT and PAGE should be recomputed on every iteration inside the default function.
As an additional check, confirm from the k6 output or server access logs that sort/page combinations vary across iterations.

Suggested fix
-const SORT =
-  __ENV.SORT && SORTS.includes(__ENV.SORT)
-    ? __ENV.SORT
-    : SORTS[Math.floor(Math.random() * SORTS.length)];
-
-const PAGE =
-  __ENV.PAGE !== undefined ? parseInt(__ENV.PAGE, 10) : Math.floor(Math.random() * 3);
+const FIXED_SORT = __ENV.SORT && SORTS.includes(__ENV.SORT) ? __ENV.SORT : null;
+const FIXED_PAGE = __ENV.PAGE !== undefined ? parseInt(__ENV.PAGE, 10) : null;
@@
 export default function () {
-  const url = `${BASE_URL}/api/v1/products?sort=${SORT}&page=${PAGE}&size=20`;
+  const sort = FIXED_SORT ?? SORTS[Math.floor(Math.random() * SORTS.length)];
+  const page = FIXED_PAGE ?? Math.floor(Math.random() * 3);
+  const url = `${BASE_URL}/api/v1/products?sort=${sort}&page=${page}&size=20`;

Also applies to: 40-40

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/load-test/k6-product-plp.js` around lines 20 - 27, The current constants
SORT and PAGE are computed once at load time causing fixed traffic; move their
evaluation into the per-iteration entry point (the exported default function) so
each VU/iteration recalculates them. Inside the default function, compute local
variables (e.g., const sort = __ENV.SORT && SORTS.includes(__ENV.SORT) ?
__ENV.SORT : SORTS[Math.floor(Math.random()*SORTS.length)]; and const page =
__ENV.PAGE !== undefined ? parseInt(__ENV.PAGE,10) :
Math.floor(Math.random()*3);) and replace usages of SORT and PAGE with these
local variables; after the change run the scenario and inspect k6 output or
server access logs to confirm varied sort/page combinations over iterations.

Comment on lines +22 to +29
RES=$(curl -s -X POST "${BASE_URL}/api/v1/users" \
-H "Content-Type: application/json" \
-d "{\"userId\":\"${LOGIN_ID}\",\"password\":\"SecurePass1!\",\"email\":\"perf@example.com\",\"birthDate\":\"1990-01-15\",\"gender\":\"MALE\"}")
if echo "$RES" | grep -q '"result":"SUCCESS"'; then
echo "회원 가입 성공: $LOGIN_ID"
elif echo "$RES" | grep -q 'CONFLICT\|이미 존재'; then
echo "이미 존재하는 유저: $LOGIN_ID (계속 진행)"
else
⚠️ Potential issue | 🟠 Major

The fixed email can produce a false success branch when run with a different LOGIN_ID.

Operationally, treating a failed user creation as a success means the subsequent load test runs in an authentication/authorization error state, skewing result interpretation.
As a fix, use a unique email derived from LOGIN_ID, and on conflict re-verify that a user actually exists for that LOGIN_ID.
As an additional check, run the script with two different LOGIN_IDs back to back and confirm both accounts are created/verified correctly.

Suggested fix
+EMAIL="${LOGIN_ID}@perf.local"
 RES=$(curl -s -X POST "${BASE_URL}/api/v1/users" \
   -H "Content-Type: application/json" \
-  -d "{\"userId\":\"${LOGIN_ID}\",\"password\":\"SecurePass1!\",\"email\":\"perf@example.com\",\"birthDate\":\"1990-01-15\",\"gender\":\"MALE\"}")
+  -d "{\"userId\":\"${LOGIN_ID}\",\"password\":\"SecurePass1!\",\"email\":\"${EMAIL}\",\"birthDate\":\"1990-01-15\",\"gender\":\"MALE\"}")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
RES=$(curl -s -X POST "${BASE_URL}/api/v1/users" \
-H "Content-Type: application/json" \
-d "{\"userId\":\"${LOGIN_ID}\",\"password\":\"SecurePass1!\",\"email\":\"perf@example.com\",\"birthDate\":\"1990-01-15\",\"gender\":\"MALE\"}")
if echo "$RES" | grep -q '"result":"SUCCESS"'; then
echo "회원 가입 성공: $LOGIN_ID"
elif echo "$RES" | grep -q 'CONFLICT\|이미 존재'; then
echo "이미 존재하는 유저: $LOGIN_ID (계속 진행)"
else
EMAIL="${LOGIN_ID}@perf.local"
RES=$(curl -s -X POST "${BASE_URL}/api/v1/users" \
-H "Content-Type: application/json" \
-d "{\"userId\":\"${LOGIN_ID}\",\"password\":\"SecurePass1!\",\"email\":\"${EMAIL}\",\"birthDate\":\"1990-01-15\",\"gender\":\"MALE\"}")
if echo "$RES" | grep -q '"result":"SUCCESS"'; then
echo "회원 가입 성공: $LOGIN_ID"
elif echo "$RES" | grep -q 'CONFLICT\|이미 존재'; then
echo "이미 존재하는 유저: $LOGIN_ID (계속 진행)"
else
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/load-test/local-auth-setup.sh` around lines 22 - 29, The script
currently uses a fixed email which can misclassify POST results; update the user
creation request (the curl POST that sets RES) to use a unique email derived
from LOGIN_ID (e.g., "${LOGIN_ID}@example.com") instead of a constant, and
change the response handling around RES: on CONFLICT (or "이미 존재") do not treat
it as a blind success—call the user-existence endpoint (using BASE_URL and
LOGIN_ID, e.g., a GET to /api/v1/users or a user-specific GET) to confirm the
account for LOGIN_ID actually exists and only proceed if that verification
succeeds; if verification fails, fail the script (non-zero exit) and surface the
full RES payload in the error log so failed creation is visible.

Comment on lines +30 to +35
avg=$(echo "$out" | grep -oE "avg=[0-9.]+(ms|µs)" | head -1 | sed -E 's/avg=([0-9.]+)(ms|µs)/\1/' | head -1)
p95=$(echo "$out" | grep -oE "p\\(95\\)=[0-9.]+(ms|µs)" | head -1 | sed -E 's/p\\(95\\)=([0-9.]+)(ms|µs)/\1/' | head -1)
p99=$(echo "$out" | grep -oE "p\\(99\\)=[0-9.]+(ms|µs)" | head -1 | sed -E 's/p\\(99\\)=([0-9.]+)(ms|µs)/\1/' | head -1)
max=$(echo "$out" | grep -oE "max=[0-9.]+(ms|µs)" | head -1 | sed -E 's/max=([0-9.]+)(ms|µs)/\1/' | head -1)
err=$(echo "$out" | grep -oE "http_req_failed......: [0-9.]+%" | head -1 | grep -oE "[0-9.]+" | head -1)
echo "PLP $label avg=${avg} p95=${p95} p99=${p99} max=${max} err=${err}%"
⚠️ Potential issue | 🟠 Major

The current parser does not reliably capture the error rate or latency units.

Operationally, a missed parse or a misread unit pollutes the performance report used for release decisions.
As a fix, extract http_req_failed with a pattern that tolerates a variable number of dots, and convert µs values to ms so results are stored in a consistent unit.
As an additional check, verify that parsing sample k6 output (variable dot counts, mixed µs/ms) matches the expected values.

#!/usr/bin/env bash
set -euo pipefail

sample_err='http_req_failed................: 0.31%  9 out of 2900'
sample_dur='http_req_duration..............: avg=850µs p(95)=2.4ms p(99)=4.1ms max=9.0ms'

echo "[1] current err regex"
echo "$sample_err" | grep -oE "http_req_failed......: [0-9.]+%" || echo "NO_MATCH"

echo "[2] robust err extraction"
echo "$sample_err" | sed -nE 's/.*http_req_failed[.[:space:]]*: ([0-9.]+)%.*/\1/p'

echo "[3] duration token extraction with units"
echo "$sample_dur" | grep -oE "(avg|p\\(95\\)|p\\(99\\)|max)=[0-9.]+(ms|µs)"

Also applies to: 47-52

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/load-test/run-all-k6-for-scale.sh` around lines 30 - 35, The current
parsing for err and duration tokens (variables err, avg, p95, p99, max) is
fragile: change the err extraction to accept variable dots/spaces (e.g., use a
sed capture like 's/.*http_req_failed[.[:space:]]*: ([0-9.]+)%.*/\1/') and
update the duration parsing to capture both value and unit for each token (avg,
p(95), p(99), max), then normalize units by converting µs to ms (divide numeric
µs values by 1000 and keep ms as-is) before assigning to avg/p95/p99/max so all
outputs use consistent ms units; verify with the provided sample k6 outputs to
ensure both variable-dot err lines and mixed µs/ms durations are parsed and
normalized correctly.

Comment on lines +25 to +28
out=$(k6 run --summary-trend-stats="avg,p(95),p(99),max" \
-e "SORT=$sort" -e "PAGE=$page" -e "BASE_URL=$BASE_URL" \
--vus 50 --duration "${DURATION}s" \
docs/load-test/k6-product-plp.js 2>&1) || true
⚠️ Potential issue | 🟠 Major

Updating the results document while ignoring k6 failures pollutes the benchmark history.

Operationally, recording a failed run as if it succeeded makes cross-phase performance comparison and regression detection impossible.
As a fix, propagate each scenario's k6 exit code and abort the Markdown append if any run fails.
As an additional check, set BASE_URL to a deliberately wrong value and confirm the script exits abnormally and phase-benchmark-results.md is left unchanged.

Suggested fix
-  out=$(k6 run --summary-trend-stats="avg,p(95),p(99),max" \
+  out=$(k6 run --summary-trend-stats="avg,p(95),p(99),max" \
     -e "SORT=$sort" -e "PAGE=$page" -e "BASE_URL=$BASE_URL" \
     --vus 50 --duration "${DURATION}s" \
-    docs/load-test/k6-product-plp.js 2>&1) || true
+    docs/load-test/k6-product-plp.js 2>&1)
@@
-run_plp latest 0 >> "$RESULTS_TMP"
+run_plp latest 0 >> "$RESULTS_TMP"
 run_plp price_asc 0 >> "$RESULTS_TMP"
 run_plp likes_desc 0 >> "$RESULTS_TMP"
 run_plp latest 50 >> "$RESULTS_TMP"
 run_pdp "$SCALE" >> "$RESULTS_TMP"
 run_likes_write "$SCALE" >> "$RESULTS_TMP"
@@
-cat "$MD_BLOCK" >> "$RESULTS_MD"
+cat "$MD_BLOCK" >> "$RESULTS_MD"
# recommended: immediately after the run_* call blocks
# if [ $? -ne 0 ]; then
#   echo "aborting the results append because a k6 run failed." >&2
#   rm -f "$RESULTS_TMP" "$MD_BLOCK"
#   exit 1
# fi

Also applies to: 41-44, 57-60, 95-95

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/load-test/run-all-k6-phase.sh` around lines 25 - 28, The script
currently swallows k6 exit codes by using "|| true" when capturing output into
the out variable (e.g. out=$(k6 run ... ) || true), which lets failed runs still
append results; change this so the k6 exit code is preserved: run k6 without "||
true", capture its exit status ($?), and if non‑zero immediately print an error
to stderr, remove temporary files (e.g. RESULTS_TMP and MD_BLOCK) and exit
non‑zero to prevent appending to phase-benchmark-results.md; apply the same
pattern to the other k6 invocation sites (the blocks referenced around lines
41-44, 57-60, 95) and ensure any run_* wrapper checks and propagates the k6 exit
code before continuing.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`:
- Around line 30-32: The current LikeFacade code calls
productCacheService.evictList() on every addLike/removeLike which
indiscriminately clears all product list caches and will crush cache hit-rate
under high user-driven traffic; update LikeFacade.addLike and
LikeFacade.removeLike to replace the global evictList() call with a targeted
invalidation strategy such as introducing
productCacheService.evictListByLikeChange(productId, brandId) to only evict
affected sorting/brand keys, or implement a debounce/batching mechanism (e.g.,
enqueue like-change events and flush invalidations in intervals) or rely on
longer TTLs instead of immediate full eviction, while keeping
productCacheService.evictDetail(productId) as-is; also add tests that verify
brand isolation (liking product in brand A does not evict brand B caches) and
measure cache hit-rate/DB QPS under concurrent like traffic to ensure the chosen
approach meets performance goals.
- Around line 27-33: Move cache eviction out of the `@Transactional` method body
and register it to run after successful commit, and ensure any Redis failures
are caught and do not abort the business write: in LikeFacade.addLike (and
mirror the same change in ProductFacade.updateProduct/deleteProduct) remove
direct calls to ProductCacheService.evictDetail/evictList from inside the
transaction and instead register a TransactionSynchronization (e.g.,
TransactionSynchronizationManager.registerSynchronization with an afterCommit()
callback) that calls productCacheService.evictDetail(productId) and
productCacheService.evictList() inside a try/catch that logs errors but does not
rethrow; update ProductCacheService usage accordingly so Redis exceptions are
isolated and add integration tests covering (1) no eviction on rollback, (2)
write commits when ProductCacheService throws, and (3) no stale re-cache race
during commit.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d435fe30-4986-4be0-9437-4df876770e33

📥 Commits

Reviewing files that changed from the base of the PR and between 90be076 and 445b006.

📒 Files selected for processing (4)
  • apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/ProductStatsRepository.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductStatsRepositoryImpl.java

Comment on lines 27 to 33
@Transactional
public LikeInfo addLike(Long userId, Long productId) {
LikeModel like = likeService.addLike(userId, productId);
// Invalidate both the PDP and page-1 PLP caches on a like-count change to minimize transient sort/count inconsistency
productCacheService.evictDetail(productId);
productCacheService.evictList();
return LikeInfo.from(like);
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check whether afterCommit is used and whether caches are called directly inside transactions.
rg -n -C2 'TransactionSynchronizationManager|afterCommit|TransactionalEventListener' apps/commerce-api/src/main/java
rg -n -C2 '@Transactional|evictDetail\(|evictList\(' apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 925


🏁 Script executed:

# Check the ProductCacheService implementation and its exception handling
rg -n 'class ProductCacheService|evictDetail|evictList' apps/commerce-api/src/main/java -A5 | head -50

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 5358


🏁 Script executed:

# Check cache-invalidation patterns in other Facade classes
rg -n '@Transactional' apps/commerce-api/src/main/java/com/loopers/application --files-with-matches | xargs -I {} sh -c 'echo "=== {} ===" && rg -n "evict|cache\.clear|Cache" {} | head -10'

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1747


🏁 Script executed:

# Check usage of afterCommit, TransactionalEventListener, TransactionSynchronization
rg -rn 'afterCommit|TransactionSynchronization|TransactionalEventListener' apps/commerce-api/src/main/java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


🏁 Script executed:

# Check the afterCommit pattern in ProductService (file mentioned in learnings)
rg -n 'evictProductListAfterCommit|TransactionSynchronization' apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java -A10

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 208


🏁 Script executed:

# Dump the full ProductCacheService
cat apps/commerce-api/src/main/java/com/loopers/application/product/ProductCacheService.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 4324


🏁 Script executed:

# Check for tests covering cache-failure scenarios
fd -t f 'test|Test' apps/commerce-api/src/test --exec grep -l 'ProductCacheService\|LikeFacade\|evict' {} \;

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 263


🏁 Script executed:

# Check cache-failure scenario tests
rg -n 'Redis|Exception|timeout|failure' apps/commerce-api/src/test/java/com/loopers/application/product/ProductFacadeTest.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


🏁 Script executed:

# Additionally search for the ProductService mentioned in learnings
find apps/commerce-api -name '*ProductService*.java' -type f

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 340


🏁 Script executed:

# Check the afterCommit pattern in the domain-layer ProductService
rg -n 'evictProductListAfterCommit|afterCommit|TransactionSynchronization' apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java -A10

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


🏁 Script executed:

# Inspect ProductFacadeTest - check whether cache-failure scenarios exist
head -100 apps/commerce-api/src/test/java/com/loopers/application/product/ProductFacadeTest.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 4308


Calling cache invalidation directly inside the transaction means a Redis outage also fails the data write.

LikeFacade and ProductFacade currently share the same structure, calling productCacheService.evictDetail/evictList() inside @Transactional. The ProductCacheService methods run redisTemplate operations with no exception handling, so a Redis connection failure or timeout propagates a RuntimeException and rolls back the like/product write transaction. Moreover, because the cache is deleted before commit, a concurrent read can re-cache the stale value.

Invalidation should be moved to afterCommit, with cache failures isolated so they cannot block the business write. ProductFacade's updateProduct/deleteProduct methods follow the same pattern and should be fixed together.

As additional tests, integration tests should verify (1) that cache invalidation does not run when a rollback occurs, (2) that the like/product data still commits when ProductCacheService throws, and (3) that no stale re-caching happens in a pre-commit read race.
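The afterCommit behavior described above can be illustrated with a minimal, self-contained simulation. This is a hedged sketch, not Spring code: the callback list stands in for TransactionSynchronizationManager's registered synchronizations, and the catch block shows how a cache failure is logged instead of rolling back the write.

```java
import java.util.ArrayList;
import java.util.List;

public class AfterCommitSketch {
    private final List<Runnable> afterCommit = new ArrayList<>();
    boolean committed = false;
    int evictions = 0;      // successful cache evictions
    int cacheFailures = 0;  // isolated (logged, not rethrown) cache errors

    // Register a callback to run only after a successful commit.
    void runAfterCommit(Runnable cb) {
        afterCommit.add(cb);
    }

    void commit() {
        committed = true; // the business write is durable at this point
        for (Runnable cb : afterCommit) {
            try {
                cb.run();
            } catch (RuntimeException e) {
                cacheFailures++; // log.warn(...) in real code; never rethrow
            }
        }
    }

    void rollback() {
        afterCommit.clear(); // a rolled-back transaction must not evict the cache
    }

    public static void main(String[] args) {
        AfterCommitSketch tx = new AfterCommitSketch();
        tx.runAfterCommit(() -> tx.evictions++);                       // evictDetail
        tx.runAfterCommit(() -> { throw new RuntimeException("down"); }); // Redis outage
        tx.commit();
        System.out.println(tx.evictions + " evicted, " + tx.cacheFailures + " isolated");
    }
}
```

The design point: the commit happens first, so a Redis failure can no longer roll back the like/product write, and a rollback clears the pending evictions so the cache is untouched.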

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`
around lines 27 - 33, Move cache eviction out of the `@Transactional` method body
and register it to run after successful commit, and ensure any Redis failures
are caught and do not abort the business write: in LikeFacade.addLike (and
mirror the same change in ProductFacade.updateProduct/deleteProduct) remove
direct calls to ProductCacheService.evictDetail/evictList from inside the
transaction and instead register a TransactionSynchronization (e.g.,
TransactionSynchronizationManager.registerSynchronization with an afterCommit()
callback) that calls productCacheService.evictDetail(productId) and
productCacheService.evictList() inside a try/catch that logs errors but does not
rethrow; update ProductCacheService usage accordingly so Redis exceptions are
isolated and add integration tests covering (1) no eviction on rollback, (2)
write commits when ProductCacheService throws, and (3) no stale re-cache race
during commit.

Comment on lines +30 to +32
// Invalidate both the PDP and page-1 PLP caches on a like-count change to minimize transient sort/count inconsistency
productCacheService.evictDetail(productId);
productCacheService.evictList();
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check whether evictList is a global pattern delete and whether it's called on the LikeFacade hot path.
rg -n -C2 'void evictList\(|redisTemplate\.keys\(|LIST_PREFIX \+ "\*"' apps/commerce-api/src/main/java/com/loopers/application/product/ProductCacheService.java
rg -n -C2 'addLike\(|removeLike\(|evictList\(' apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 947


A targeted-invalidation or debouncing strategy is needed on the user hot path

The current implementation deletes every list cache via a wildcard pattern by calling evictList() on each addLike()/removeLike(). Unlike admin edits, like events are user-interaction-driven hot-path traffic and can be very frequent, in which case the cache hit rate collapses and Redis/DB fallback traffic surges, degrading performance.

One of the following fixes is recommended:

  1. Narrow the invalidation scope: invalidate only the affected sort keys via an API such as evictListByLikeChange(productId, brandId)
  2. Debouncing: batch like-event invalidations, or deduplicate deletions within a time window
  3. Lean on TTL: instead of calling evictList(), rely on automatic expiry with a longer TTL
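Option 2 (debouncing) can be sketched as a coalescing invalidator. All names here are illustrative assumptions, not the repository's API; a real version would flush on a scheduler tick and evict only the keys derived from each product.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class CoalescingInvalidator {
    private final Set<Long> pending = ConcurrentHashMap.newKeySet();
    int evictCalls = 0; // counts actual cache evictions performed

    /** Hot path: O(1) in-memory set add, no Redis call per like. */
    void onLikeChanged(long productId) {
        pending.add(productId);
    }

    /** Flush (e.g. every few seconds): one targeted eviction per distinct product. */
    int flush() {
        int flushed = 0;
        for (Long id : pending) {
            pending.remove(id);
            evictCalls++; // real code: evict only list keys affected by this product
            flushed++;
        }
        return flushed;
    }

    public static void main(String[] args) {
        CoalescingInvalidator inv = new CoalescingInvalidator();
        for (int i = 0; i < 1000; i++) inv.onLikeChanged(42L); // 1000 likes, same product
        inv.onLikeChanged(7L);
        System.out.println(inv.flush()); // 2 distinct evictions, duplicates coalesced
    }
}
```

Under a like storm on one product, the cache sees a single eviction per flush interval instead of one per request, which is the hit-rate protection the comment above asks for.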

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`
around lines 30 - 32, The current LikeFacade code calls
productCacheService.evictList() on every addLike/removeLike which
indiscriminately clears all product list caches and will crush cache hit-rate
under high user-driven traffic; update LikeFacade.addLike and
LikeFacade.removeLike to replace the global evictList() call with a targeted
invalidation strategy such as introducing
productCacheService.evictListByLikeChange(productId, brandId) to only evict
affected sorting/brand keys, or implement a debounce/batching mechanism (e.g.,
enqueue like-change events and flush invalidations in intervals) or rely on
longer TTLs instead of immediate full eviction, while keeping
productCacheService.evictDetail(productId) as-is; also add tests that verify
brand isolation (liking product in brand A does not evict brand B caches) and
measure cache hit-rate/DB QPS under concurrent like traffic to ensure the chosen
approach meets performance goals.

- Move the evict calls inside transactions into afterCommit callbacks, so a Redis outage no longer rolls back the write and a rolled-back transaction no longer clears the cache
- On MIN_PRODUCT_ID/MAX_PRODUCT_ID inversion or non-numeric input, throw an error and exit during script initialization, and generate IDs only from the normalized range (MIN..MAX)
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java (1)

60-64: ⚠️ Potential issue | 🟠 Major

PLP cache invalidation is missing after product registration, degrading data freshness

Operationally, if the list cache survives a successful registerProduct, the new product does not appear in the listing for up to the full TTL window, causing user-visible delay. For consistency with updateProduct/deleteProduct, evictList() should be invoked after commit. As additional tests, verify that evictList() is called exactly once on commit and zero times on rollback.

Suggested fix
 `@Transactional`
 public ProductInfo registerProduct(Long brandId, String name, BigDecimal price, int stockQuantity) {
     ProductModel product = productService.registerProduct(brandId, name, price, stockQuantity);
+    runAfterCommit(productCacheService::evictList);
     return ProductInfo.from(product);
 }
Based on learnings: "PRODUCT_LIST cache invalidation intentionally uses a full cache.clear() on admin writes, on the assumption of a 1-minute TTL and low write frequency."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`
around lines 60 - 64, registerProduct currently returns after creating the
ProductModel without invalidating the PRODUCT_LIST cache; change it to evict the
list only after the successful commit (follow the same pattern used in
updateProduct/deleteProduct) by registering an after-commit callback that calls
evictList() rather than calling evictList() inside the transactional work.
Locate ProductFacade.registerProduct and arrange the evictList() invocation to
run only on transaction commit (e.g.,
TransactionSynchronizationManager/afterCommit or `@TransactionalEventListener`
equivalent) so new products become visible, and add tests asserting evictList()
is called once on commit and zero times when the transaction rolls back.
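The after-commit behavior the comment asks for can be illustrated with a framework-free sketch. In the actual codebase this would go through Spring's TransactionSynchronizationManager (or a @TransactionalEventListener); the SketchTransaction type below is purely illustrative and not part of the project.

```java
import java.util.ArrayList;
import java.util.List;

public class AfterCommitSketch {
    static class SketchTransaction {
        private final List<Runnable> afterCommit = new ArrayList<>();
        private boolean rolledBack = false;

        // Defer work (e.g. cache eviction) until a successful commit.
        void runAfterCommit(Runnable callback) {
            afterCommit.add(callback);
        }

        void rollback() {
            rolledBack = true;
        }

        // On commit, run the deferred callbacks; on rollback, drop them so a
        // rolled-back write never evicts a still-valid cache entry.
        void complete() {
            if (!rolledBack) {
                afterCommit.forEach(Runnable::run);
            }
            afterCommit.clear();
        }
    }

    // Returns how many evictions actually ran across one commit and one rollback.
    static int demo() {
        int[] evictions = {0};

        SketchTransaction committed = new SketchTransaction();
        committed.runAfterCommit(() -> evictions[0]++);
        committed.complete(); // commit: eviction runs

        SketchTransaction rolledBack = new SketchTransaction();
        rolledBack.runAfterCommit(() -> evictions[0]++);
        rolledBack.rollback();
        rolledBack.complete(); // rollback: eviction skipped

        return evictions[0];
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 1
    }
}
```

This is exactly the property the suggested tests pin down: one eviction on commit, none on rollback.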
🧹 Nitpick comments (3)
apps/commerce-api/src/test/java/com/loopers/application/product/ProductCacheServiceTest.java (1)

56-79: The exception non-propagation tests need explicit assertions and interaction verification

Operationally, these tests currently just invoke the methods and return, so they would still pass if the internal logic returned early or the Redis call were skipped, masking a failure regression. Pin the intent with assertThatCode(...).doesNotThrowAnyException() and additionally confirm the interaction with verify(ops).set(...). It would also be worth adding a separate test confirming that exceptions still do not propagate when ObjectMapper serialization fails.

Suggested test hardening
 import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatCode;
 import static org.mockito.ArgumentMatchers.anyString;
 import static org.mockito.Mockito.doThrow;
 import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
@@
     void putList_whenRedisFails_shouldNotThrow() throws Exception {
         ValueOperations<String, String> ops = mock(ValueOperations.class);
         when(redisTemplate.opsForValue()).thenReturn(ops);
         doThrow(new RuntimeException("redis down")).when(ops).set(anyString(), anyString(), org.mockito.Mockito.any());
@@
-        productCacheService.putList(null, "latest", 20, page);
+        assertThatCode(() -> productCacheService.putList(null, "latest", 20, page))
+                .doesNotThrowAnyException();
+        verify(redisTemplate).opsForValue();
+        verify(ops).set(anyString(), anyString(), org.mockito.Mockito.any());
     }
@@
     void putDetail_whenRedisFails_shouldNotThrow() throws Exception {
         ValueOperations<String, String> ops = mock(ValueOperations.class);
         when(redisTemplate.opsForValue()).thenReturn(ops);
         doThrow(new RuntimeException("redis down")).when(ops).set(anyString(), anyString(), org.mockito.Mockito.any());
@@
-        productCacheService.putDetail(1L, mock(ProductDetailInfo.class));
+        assertThatCode(() -> productCacheService.putDetail(1L, mock(ProductDetailInfo.class)))
+                .doesNotThrowAnyException();
+        verify(redisTemplate).opsForValue();
+        verify(ops).set(anyString(), anyString(), org.mockito.Mockito.any());
     }
As per coding guidelines `**/*Test*.java`: "Check that unit tests cover boundary values, failure cases, and exception flows."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/application/product/ProductCacheServiceTest.java`
around lines 56 - 79, Add explicit non-throwing assertions and interaction
verifications to the two tests in ProductCacheServiceTest: wrap the calls to
productCacheService.putList(...) and productCacheService.putDetail(...) with
assertThatCode(...).doesNotThrowAnyException() and add verify(ops).set(...)
(with appropriate argument matchers) to ensure Redis was invoked; also add a
separate test that mocks objectMapper.writeValueAsString(...) to throw and
assert that putList/putDetail still does not propagate the exception (again
verifying ops.set was or was not called as expected). Reference
redisTemplate.opsForValue(), ValueOperations ops mock,
objectMapper.writeValueAsString(...), and the
productCacheService.putList/putDetail methods when making these changes.
apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java (1)

106-107: The sortParam used in cache keys should be unified to a canonical value

Operationally, keying the cache on the raw parameter lets semantically identical requests fragment into separate keys, lowering the hit rate and increasing DB load. Prefer a canonical string derived from the ProductSortOrder parsed at line 112 for both getList/putList keys. Also add a test verifying that "latest" and "LATEST" (or an alias) map to the same cache key, so the second call is a cache hit.

Also applies to: 140-140

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`
around lines 106 - 107, The cache key currently uses the raw sortParam which can
fragment keys; update ProductFacade to canonicalize sortParam by using the
parsed ProductSortOrder's canonical string for calls to
productCacheService.getList and putList (replace raw sortParam with
ProductSortOrder.toCanonical or equivalent) so keys are consistent across
case/alias variants; ensure the same canonical value is passed wherever you
build cache keys (getList/putList) and add tests calling with "latest", "LATEST"
and any aliases to assert the second call hits the cache.
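The canonicalization idea above can be sketched as follows. ProductSortOrder, its value set, and the key layout are assumptions for illustration, not the project's actual enum or key format.

```java
import java.util.Locale;

public class CacheKeySketch {
    // Hypothetical sort-order enum; the real ProductSortOrder may differ.
    enum ProductSortOrder {
        LATEST, PRICE_ASC, LIKES_DESC;

        static ProductSortOrder parse(String raw) {
            String normalized = raw == null ? "LATEST" : raw.trim().toUpperCase(Locale.ROOT);
            switch (normalized) {
                case "PRICE_ASC":  return PRICE_ASC;
                case "LIKES_DESC": return LIKES_DESC;
                default:           return LATEST; // single canonical default
            }
        }

        // One canonical string per meaning, so "latest" and "LATEST" share a key.
        String toCanonical() {
            return name().toLowerCase(Locale.ROOT);
        }
    }

    static String listCacheKey(Long brandId, String rawSort, int size, int page) {
        return "product:list:" + brandId + ":"
                + ProductSortOrder.parse(rawSort).toCanonical()
                + ":" + size + ":" + page;
    }

    public static void main(String[] args) {
        // Case and alias variants collapse to the same key.
        System.out.println(listCacheKey(null, "latest", 20, 0));
        System.out.println(listCacheKey(null, "LATEST", 20, 0));
    }
}
```

Passing the canonical string (rather than the raw parameter) to both getList and putList is what keeps the key space consistent across variants.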
docs/load-test/run-all-k6-phase.sh (1)

95-121: The result-collection and markdown-generation logic is sound.

Thanks to set -e, a k6 failure terminates the script before the append at line 120, preventing corruption of the results document.

Minor operational improvement: temporary files can be left behind if the script exits mid-run. Registering a cleanup handler on the EXIT signal with trap makes the script more robust.

Optional improvement
 RESULTS_MD="${ROOT}/docs/performance/phase-benchmark-results.md"
 cd "$ROOT"
+RESULTS_TMP=""
+MD_BLOCK=""
+cleanup() {
+  rm -f "$RESULTS_TMP" "$MD_BLOCK"
+}
+trap cleanup EXIT
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/load-test/run-all-k6-phase.sh` around lines 95 - 121, Add a cleanup trap
so temporary files like RESULTS_TMP and MD_BLOCK are removed if the script exits
early: after creating or assigning RESULTS_TMP and MD_BLOCK in
run-all-k6-phase.sh (or near the top of the phase block), register a trap on
EXIT to rm -f "$RESULTS_TMP" "$MD_BLOCK" (and any other temp vars) and unset the
trap after successful removal at the end; ensure the trap executes even when set
-e causes early exit so temp files are never left behind.
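A standalone sketch of the trap-based cleanup described above; the file variables here are placeholders rather than the real script's names.

```shell
#!/usr/bin/env bash
set -euo pipefail

RESULTS_TMP="$(mktemp)"
MD_BLOCK="$(mktemp)"

cleanup() {
  rm -f "$RESULTS_TMP" "$MD_BLOCK"
}
# EXIT fires both on normal completion and on early termination caused by
# set -e, so the temp files are removed either way.
trap cleanup EXIT

echo "phase results" > "$RESULTS_TMP"
echo "markdown block" > "$MD_BLOCK"

# ... k6 runs and markdown generation would happen here ...
echo "done"
```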
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`:
- Around line 53-60: removeLike currently runs in the caller transaction
(default REQUIRED) while addLike uses a REQUIRES_NEW transactional helper with
deadlock-retry, causing inconsistent behavior under contention; refactor
removeLike by extracting the DB mutation into a new doRemoveLike(Long userId,
Long productId) annotated with `@Transactional`(propagation =
Propagation.REQUIRES_NEW) and move the call to likeService.removeLike's DB
operations into that method, then wrap the REQUIRES_NEW call in the same
deadlock-retry loop used by doAddLike (same retry count and backoff) so remove
operations are retried on deadlock; keep the runAfterCommit(...) cache-eviction
invocation outside the new REQUIRES_NEW method so cache is evicted only after
commit, and document the propagation and retry behavior similarly to addLike.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/coupon/CouponPropagationIntegrationTest.java`:
- Around line 78-100: The test currently only verifies calls == 2 and coupon
status, which can miss parent-transaction rollbacks; update the test around
couponService.validateAndUseInNewTransaction and orderFacade.placeOrder so it
also asserts that the parent order committed: capture and assert the order ID or
returned result from orderFacade.placeOrder (or assert order count increase /
fetch the created order) and verify that the persisted order exists and is in
the expected state along with issuedCouponId being USED; additionally replace
the artificial OptimisticLockException thrown by the test double with a real DB
concurrency setup (or simulate an actual competing update) so the first failure
is a real version conflict and ensures REQUIRES_NEW commits independently of the
parent transaction.

---

Outside diff comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`:
- Around line 60-64: registerProduct currently returns after creating the
ProductModel without invalidating the PRODUCT_LIST cache; change it to evict the
list only after the successful commit (follow the same pattern used in
updateProduct/deleteProduct) by registering an after-commit callback that calls
evictList() rather than calling evictList() inside the transactional work.
Locate ProductFacade.registerProduct and arrange the evictList() invocation to
run only on transaction commit (e.g.,
TransactionSynchronizationManager/afterCommit or `@TransactionalEventListener`
equivalent) so new products become visible, and add tests asserting evictList()
is called once on commit and zero times when the transaction rolls back.

---

Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`:
- Around line 106-107: The cache key currently uses the raw sortParam which can
fragment keys; update ProductFacade to canonicalize sortParam by using the
parsed ProductSortOrder's canonical string for calls to
productCacheService.getList and putList (replace raw sortParam with
ProductSortOrder.toCanonical or equivalent) so keys are consistent across
case/alias variants; ensure the same canonical value is passed wherever you
build cache keys (getList/putList) and add tests calling with "latest", "LATEST"
and any aliases to assert the second call hits the cache.

In
`@apps/commerce-api/src/test/java/com/loopers/application/product/ProductCacheServiceTest.java`:
- Around line 56-79: Add explicit non-throwing assertions and interaction
verifications to the two tests in ProductCacheServiceTest: wrap the calls to
productCacheService.putList(...) and productCacheService.putDetail(...) with
assertThatCode(...).doesNotThrowAnyException() and add verify(ops).set(...)
(with appropriate argument matchers) to ensure Redis was invoked; also add a
separate test that mocks objectMapper.writeValueAsString(...) to throw and
assert that putList/putDetail still does not propagate the exception (again
verifying ops.set was or was not called as expected). Reference
redisTemplate.opsForValue(), ValueOperations ops mock,
objectMapper.writeValueAsString(...), and the
productCacheService.putList/putDetail methods when making these changes.

In `@docs/load-test/run-all-k6-phase.sh`:
- Around line 95-121: Add a cleanup trap so temporary files like RESULTS_TMP and
MD_BLOCK are removed if the script exits early: after creating or assigning
RESULTS_TMP and MD_BLOCK in run-all-k6-phase.sh (or near the top of the phase
block), register a trap on EXIT to rm -f "$RESULTS_TMP" "$MD_BLOCK" (and any
other temp vars) and unset the trap after successful removal at the end; ensure
the trap executes even when set -e causes early exit so temp files are never
left behind.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a40a6f15-2c19-42e4-952b-d2d0f7518f77

📥 Commits

Reviewing files that changed from the base of the PR and between 445b006 and 5b0c462.

📒 Files selected for processing (10)
  • apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductCacheService.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
  • apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponService.java
  • apps/commerce-api/src/test/java/com/loopers/application/product/ProductCacheServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/domain/coupon/CouponPropagationIntegrationTest.java
  • docs/load-test/k6-likes-write.js
  • docs/load-test/k6-product-pdp.js
  • docs/load-test/local-auth-setup.sh
  • docs/load-test/run-all-k6-phase.sh
🚧 Files skipped from review as they are similar to previous changes (3)
  • docs/load-test/k6-product-pdp.js
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductCacheService.java
  • docs/load-test/local-auth-setup.sh

Comment on lines 53 to 60
@Transactional
public void removeLike(Long userId, Long productId) {
    likeService.removeLike(userId, productId);
    runAfterCommit(() -> {
        productCacheService.evictDetail(productId);
        productCacheService.evictList();
    });
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# addLike와 removeLike의 트랜잭션 전파 차이 확인
rg -n 'public.*addLike|public.*removeLike|doAddLike|REQUIRES_NEW|Propagation' apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java -B1 -A3

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 2100


🏁 Script executed:

#!/bin/bash
# removeLike 전체 메서드 구현 확인
rg -n 'public void removeLike' apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java -A20

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 942


🏁 Script executed:

#!/bin/bash
# 다른 삭제 관련 메서드들의 트랜잭션 설정 확인
rg -n '@Transactional|public void delete|public void remove' apps/commerce-api/src/main/java/com/loopers/domain/like/ -B1 -A2 | head -60

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 4298


removeLike's transaction propagation strategy should be unified with addLike's.

Current implementation:

  • addLike → doAddLike(): uses @Transactional(propagation = Propagation.REQUIRES_NEW) and includes deadlock-retry logic
  • removeLike(): uses the default @Transactional (REQUIRED), with no deadlock handling

Problems:

  • Both methods mutate the likes and product_stats tables, so they are exposed to the same deadlock risk.
  • addLike follows the documented REQUIRES_NEW strategy, committing independently of the outer transaction and retrying on deadlock, whereas removeLike fails without retry.
  • Under high concurrency in production, addLike can succeed while removeLike fails, producing inconsistency.

Suggested fix:

  • Extract removeLike into a doRemoveLike() method and apply REQUIRES_NEW
  • Apply the same deadlock-retry logic used by addLike to removeLike
  • Document the propagation strategy and retry logic

Tests:

  • Verify deadlock scenarios when like add/remove run concurrently under high contention
  • Confirm removeLike's data consistency when the outer transaction fails
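The retry wrapper described above can be sketched generically. In the real service the retried work would be the REQUIRES_NEW transactional method and the caught type would be the persistence layer's deadlock exception; both the DeadlockException class and the retry parameters here are illustrative.

```java
import java.util.concurrent.Callable;

public class DeadlockRetrySketch {
    // Stand-in for the persistence layer's deadlock exception.
    static class DeadlockException extends RuntimeException {}

    // Retries the given work on deadlock, with linear backoff between attempts.
    static <T> T withDeadlockRetry(Callable<T> work, int maxAttempts, long backoffMillis)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return work.call();
            } catch (DeadlockException e) {
                if (attempt >= maxAttempts) {
                    throw e; // retries exhausted: surface the failure
                }
                Thread.sleep(backoffMillis * attempt);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails once with a simulated deadlock, then succeeds on the retry.
        String result = withDeadlockRetry(() -> {
            if (calls[0]++ == 0) {
                throw new DeadlockException();
            }
            return "removed";
        }, 3, 10);
        System.out.println(result + " after " + calls[0] + " calls");
    }
}
```

Wrapping a REQUIRES_NEW doRemoveLike() call in such a loop (outside the new transaction) mirrors what doAddLike already does, while the cache eviction stays in the caller's runAfterCommit(...) so it fires only after commit.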
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`
around lines 53 - 60, removeLike currently runs in the caller transaction
(default REQUIRED) while addLike uses a REQUIRES_NEW transactional helper with
deadlock-retry, causing inconsistent behavior under contention; refactor
removeLike by extracting the DB mutation into a new doRemoveLike(Long userId,
Long productId) annotated with `@Transactional`(propagation =
Propagation.REQUIRES_NEW) and move the call to likeService.removeLike's DB
operations into that method, then wrap the REQUIRES_NEW call in the same
deadlock-retry loop used by doAddLike (same retry count and backoff) so remove
operations are retried on deadlock; keep the runAfterCommit(...) cache-eviction
invocation outside the new REQUIRES_NEW method so cache is evicted only after
commit, and document the propagation and retry behavior similarly to addLike.

Comment on lines +78 to +100
doAnswer(invocation -> {
    int attempt = calls.getAndIncrement();
    if (attempt == 0) {
        throw new OptimisticLockException("version conflict");
    }
    // From the second call on, execute the real REQUIRES_NEW method
    return invocation.callRealMethod();
}).when(couponService).validateAndUseInNewTransaction(anyLong(), anyLong(), any());

// when: call OrderFacade.placeOrder (coupon use + order creation inside the parent @Transactional)
orderFacade.placeOrder(
        userId,
        List.of(new CreateOrderItemParam(productId, 1, null)),
        issuedCouponId
);

// then: the REQUIRES_NEW transaction should succeed on the second attempt after the first failure,
// and the parent order transaction should commit without UnexpectedRollbackException,
// persisting both the order and the coupon state.
assertThat(calls.get()).isEqualTo(2);

var myCoupons = couponService.findByUserId(userId, PageRequest.of(0, 10)).getContent();
assertThat(myCoupons).hasSize(1);
assertThat(myCoupons.get(0).getStatus()).isEqualTo(IssuedCouponStatus.USED);

⚠️ Potential issue | 🟠 Major

The commit of the parent order transaction is still not verified.

In production, the riskiest regression is one where only the coupon use commits via REQUIRES_NEW while the outer order transaction rolls back; the current assertions (calls == 2 and coupon USED) would still pass in that state. The first failure is also an exception manufactured by a test double rather than a real DB version conflict, so a broken propagation boundary could go unnoticed. Assert the order-creation result as well: the returned order ID, an order-count increase, or a fetched order, and where possible produce the first failure through a real competing update or a save-time conflict on the same issuedCouponId. As additional coverage, it would be safer to add a case confirming the order record actually committed after the first failure, and a case that forces the outer order logic to fail after coupon use so this test reliably turns red when the order rolls back.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/coupon/CouponPropagationIntegrationTest.java`
around lines 78 - 100, The test currently only verifies calls == 2 and coupon
status, which can miss parent-transaction rollbacks; update the test around
couponService.validateAndUseInNewTransaction and orderFacade.placeOrder so it
also asserts that the parent order committed: capture and assert the order ID or
returned result from orderFacade.placeOrder (or assert order count increase /
fetch the created order) and verify that the persisted order exists and is in
the expected state along with issuedCouponId being USED; additionally replace
the artificial OptimisticLockException thrown by the test double with a real DB
concurrency setup (or simulate an actual competing update) so the first failure
is a real version conflict and ensures REQUIRES_NEW commits independently of the
parent transaction.

@zeexzeex
Author

@CodeRabbit /review

@coderabbitai

coderabbitai bot commented Mar 15, 2026

@zeexzeex Starting the review of PR #214.


🧠 Learnings used
Learnt from: ukukdin
Repo: Loopers-dev-lab/loop-pack-be-l2-vol3-java PR: 203
File: apps/commerce-api/src/main/java/com/loopers/application/product/ProductQueryService.java:53-55
Timestamp: 2026-03-13T05:44:03.072Z
Learning: In Loopers-dev-lab/loop-pack-be-l2-vol3-java PR `#203`, the PRODUCT_LIST cache eviction strategy in ProductService.evictProductListAfterCommit() intentionally uses cache.clear() (full cache invalidation) rather than selective eviction by brandId. This is a deliberate trade-off: TTL is 1 minute and admin write frequency is currently low, so full invalidation is considered acceptable. Selective eviction by brandId is deferred until admin modifications become frequent or cache hit rates are measured to be low via Micrometer metrics.

Learnt from: Namjin-kimm
Repo: Loopers-dev-lab/loop-pack-be-l2-vol3-java PR: 152
File: apps/commerce-api/src/main/java/com/loopers/application/order/OrderFacade.java:86-91
Timestamp: 2026-03-06T09:00:23.795Z
Learning: In Loopers-dev-lab/loop-pack-be-l2-vol3-java PR `#152`, OrderFacade.java's deadlock prevention for multi-product orders relies on MySQL InnoDB's implicit behavior of scanning IN-clause PK queries in ascending order (B+Tree clustered index traversal). This is a deliberate design decision acknowledged by the author as a trade-off: InnoDB generally locks rows in PK ascending order for IN queries, but this is not a formal SQL standard guarantee. If the query optimizer changes execution plans (e.g., full table scan), lock order may change and deadlocks could occur. The safe alternative is adding explicit ORDER BY id to the SELECT FOR UPDATE query in ProductJpaRepository.findAllByIdsForUpdate(). InnoDB does auto-detect and roll back deadlocked transactions, so hangs are avoided but transaction failures still occur.
✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.
