[Volume 5] Product list query performance: restructure like-count sorting, apply indexes and caching #210
SukheeChoi wants to merge 11 commits into Loopers-dev-lab:SukheeChoi from
Conversation
Converts the AS-IS path (COUNT + GROUP BY + an in-memory sort on every request over 100K products) to index-backed DB sorting plus a Redis cache, improving P95 response time by 422x.
- Denormalize Product.likeCount with atomic SQL increment/decrement (minimizes write contention)
- Add 4 composite indexes (like_count, brand+like_count, brand+price, likes.product_id)
- Redis Cache-Aside with version-based invalidation (using the Master-Replica topology)
- Apply pagination (Page<ProductWithBrand>)
- MaterializedView simulation (product_like_stats + batch synchronization)
- 3 performance-comparison endpoints + K6 load tests + Grafana dashboard
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
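The "version-based invalidation" above can be sketched as follows; this is a minimal in-memory illustration, and all class and method names are hypothetical, not taken from the PR. Instead of deleting every cached list page on a like event, a single version counter is bumped so that keys minted under the old version simply stop matching and age out via TTL.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of version-based list-cache invalidation.
class VersionedListCache {
    private final Map<String, String> store = new ConcurrentHashMap<>();
    private final AtomicLong version = new AtomicLong(1);

    private String key(long brandId, String sort, int page, int size) {
        // The current version is baked into every key.
        return "v" + version.get() + ":" + brandId + ":" + sort + ":" + page + ":" + size;
    }

    String get(long brandId, String sort, int page, int size) {
        return store.get(key(brandId, sort, page, size));
    }

    void put(long brandId, String sort, int page, int size, String payload) {
        store.put(key(brandId, sort, page, size), payload);
    }

    // Called after a like/unlike: one counter bump orphans all old keys,
    // which then expire via TTL instead of being deleted one by one.
    void invalidateAll() {
        version.incrementAndGet();
    }
}

public class VersionedListCacheDemo {
    public static void main(String[] args) {
        VersionedListCache cache = new VersionedListCache();
        cache.put(1L, "likes_desc", 0, 20, "page-payload");
        System.out.println(cache.get(1L, "likes_desc", 0, 20)); // hit
        cache.invalidateAll();
        System.out.println(cache.get(1L, "likes_desc", 0, 20)); // miss: null
    }
}
```

The write path only touches one counter, which is why a single like event does not have to enumerate every brand/sort/page combination.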
Grafana dashboard capture of the metrics Prometheus collected during the K6 load test (200 RPS peak), visualizing an A/B comparison of the 3 endpoints. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds the EXPLAIN analysis, the K6 200 RPS load test, and the Grafana monitoring results for a 4 GB buffer pool with 10M products to the docs. Key results:
- After optimization: P95 = 14 ms (cached), 67 ms (no-cache); stable even at 10 million rows
- AS-IS confirmed unusable: a single request took 308 seconds
- Index-backed EXPLAIN shows rows=20; effectively O(1) even after a 100x data increase
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Converts ProductCacheService (concrete) into ProductCachePort (interface) + 3 adapters, removing the infrastructure dependency from the application layer, and applies an L1 (local Caffeine) → L2 (distributed Redis) Look-Aside strategy. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
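The L1 → L2 Look-Aside read path can be sketched like this; plain HashMaps stand in for Caffeine and Redis, and the names are illustrative, not the PR's actual classes.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of an L1 (local) -> L2 (shared) look-aside chain.
class TwoLevelCache {
    final Map<Long, String> l1 = new HashMap<>(); // stands in for Caffeine
    final Map<Long, String> l2 = new HashMap<>(); // stands in for Redis

    String get(Long id, Function<Long, String> dbLoader) {
        String v = l1.get(id);
        if (v != null) return v;            // L1 hit
        v = l2.get(id);
        if (v != null) {                    // L2 hit: promote into L1
            l1.put(id, v);
            return v;
        }
        v = dbLoader.apply(id);             // miss on both levels: load from DB
        l2.put(id, v);                      // populate L2 first, then L1
        l1.put(id, v);
        return v;
    }
}

public class TwoLevelCacheDemo {
    public static void main(String[] args) {
        TwoLevelCache cache = new TwoLevelCache();
        int[] dbCalls = {0};
        Function<Long, String> loader = id -> { dbCalls[0]++; return "product-" + id; };
        cache.get(1L, loader);  // miss, miss -> DB
        cache.get(1L, loader);  // L1 hit
        cache.l1.clear();       // simulate local eviction on another node
        cache.get(1L, loader);  // L2 hit, promoted back into L1
        System.out.println(dbCalls[0]); // DB was hit only once
    }
}
```

The point of the chain is visible in the counter: even after the local level is cleared, the shared level absorbs the read and the DB is touched only on the first miss.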
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Refresh the Grafana screenshots (captures of the 4-scenario benchmark window)
- New write-up blog-week5-read-optimization.md (indexes, denormalization, multi-layer cache, DIP: design → implementation → verification)
- Add a K6 benchmark script (product-list-benchmark.js, 100 rps constant-arrival-rate)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Keep docs/ exclusively for design documents (design/) and images (images/)
- Split blog/ out as a dedicated tech-blog directory
- Add a !blog/**/*.md exception to .gitignore
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough
Adds a cache layer, denormalized product like counts, DB indexes, and batch synchronization to optimize product read performance. Introduces a multi-level cache (Caffeine + Redis), a ProductCachePort interface, and a likeCount field, kept in sync on Like create/delete.
Changes
Sequence Diagram(s)
sequenceDiagram
actor User
participant PC as ProductController
participant PF as ProductFacade
participant CP as ProductCachePort
participant PR as ProductRepository
participant DB as Database
User->>PC: GET /api/v1/products/{id}
PC->>PF: getProductDetailCached(id)
PF->>CP: getProductDetail(id)
alt Cache Hit
CP-->>PF: ProductResponse
else Cache Miss
PF->>PR: getProductDetail(id)
PR->>DB: SELECT * FROM product
DB-->>PR: Product entity
PR-->>PF: ProductResponse
PF->>CP: putProductDetail(id, response)
CP-->>PF: (void)
end
PF-->>PC: ProductResponse
PC-->>User: ApiResponse<ProductResponse>
sequenceDiagram
actor User
participant LC as LikeController
participant LF as LikeFacade
participant PR as ProductRepository
participant CP as ProductCachePort
participant DB as Database
User->>LC: POST /api/v1/likes
LC->>LF: addLike(productId, memberId)
LF->>DB: INSERT INTO likes(...)
DB-->>LF: Like entity created
LF->>PR: incrementLikeCount(productId)
PR->>DB: UPDATE product SET like_count = like_count + 1
DB-->>PR: (1 row updated)
LF-->>LC: (void)
LC->>CP: evictProductDetail(productId)
CP-->>LC: (void)
LC->>CP: evictProductList()
CP-->>LC: (void)
LC-->>User: ApiResponse<Success>
sequenceDiagram
participant Batch as Batch Job
participant ET as LikeCountSyncTasklet
participant DB as Database
participant PR as ProductRepository
Batch->>ET: execute(StepContribution)
ET->>DB: REPLACE INTO product_like_stats<br/>SELECT product_id, COUNT(*) FROM likes
DB-->>ET: (N rows synced)
ET->>DB: UPDATE product p<br/>JOIN product_like_stats pls<br/>SET p.like_count = pls.like_count
DB-->>ET: (M rows corrected)
ET-->>Batch: RepeatStatus.FINISHED
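The two statements in the tasklet diagram above (rebuilding product_like_stats from likes, then correcting product.like_count) amount to the following reconciliation, shown here as a plain in-memory sketch with illustrative data:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LikeCountSyncSketch {
    public static void main(String[] args) {
        // likes table: one row per like; only productId matters here
        List<Long> likedProductIds = List.of(10L, 10L, 10L, 20L);

        // product.like_count, drifted (e.g. a lost increment on product 20)
        Map<Long, Long> productLikeCount = new HashMap<>(Map.of(10L, 3L, 20L, 0L));

        // Step 1: REPLACE INTO product_like_stats ... SELECT product_id, COUNT(*) FROM likes
        Map<Long, Long> stats = likedProductIds.stream()
                .collect(Collectors.groupingBy(id -> id, Collectors.counting()));

        // Step 2: UPDATE product JOIN product_like_stats SET like_count = stats value
        stats.forEach(productLikeCount::put);

        System.out.println(productLikeCount.get(20L)); // drift corrected: 1
    }
}
```

The batch job is a safety net: the denormalized counter stays fast on the write path, and periodic recomputation from the source-of-truth likes rows bounds any drift.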
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
The change spans several interrelated areas: a multi-level cache implementation (Caffeine + Redis), product-entity denormalization, added pagination, a batch synchronization job, and cache-invalidation logic. Cache consistency, concurrency, DB performance impact, and batch-sync correctness all need careful review.
Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
- top: overall P95 Response Time + P50/RPS at the top
- middle: P50 + RPS + Error Rate + HikariCP
- bottom: Error Rate + HikariCP + JVM Heap + Total Requests
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductDto.java (1)
49-57: ⚠️ Potential issue | 🟠 Major
In the ProductResponse.from(Product) path, likeCount is fixed at 0. This factory ignores the newly introduced entity field and returns 0. If the detail endpoint or the detail-cache path goes through this method, the API response stays at 0 even after like events, so the list and the detail show different values. Fix: map product.getLikeCount() directly. It is also worth adding a test that, for a product with likeCount != 0, asserts both the detail response and the cached detail value return the same count.
🔧 Suggested fix
```diff
 public static ProductResponse from(Product product) {
     return new ProductResponse(
         product.getId(),
         product.getBrandId(),
         null,
         product.getName(),
         product.getPrice().getValue(),
         product.getStock().getQuantity(),
-        0
+        product.getLikeCount()
     );
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductDto.java` around lines 49 - 57, The ProductResponse.from(Product) factory currently hardcodes likeCount to 0; change it to map the new entity field by using product.getLikeCount() when constructing ProductResponse (update the constructor call inside ProductResponse.from to pass product.getLikeCount() instead of 0). Also add a test that creates a Product with likeCount != 0 and asserts that both the detailed API response (via ProductResponse.from) and the cached detailed response return the same likeCount to prevent regressions.
apps/commerce-api/src/test/java/com/loopers/concurrency/LikeConcurrencyTest.java (1)
87-138: ⚠️ Potential issue | 🟠 Major
Per-phase success counts must be verified separately. The current assertion only checks the final remaining count, so the test still passes when addLike failures and removeLike failures happen to cancel out. A concurrency regression can then stay hidden until production, where it surfaces as a mismatch between the Like record count and Product.likeCount. Fix: count the phase-1 like successes and the phase-2 unlike successes separately, and assert 100 immediately after phase 1 and 95 immediately after phase 2. Also add a case that guarantees the test fails even when one like failure and one unlike failure occur together. As per coding guidelines, "integration tests should check the isolation level, flakiness risk, and the test-data setup/teardown strategy."
🔧 Suggested fix
```diff
 int likeCount = 100;
+AtomicInteger likeSuccessCount = new AtomicInteger();
 Brand brand = brandRepository.save(new Brand("나이키", "스포츠 브랜드"));
 Product product = productRepository.save(
     new Product(brand.getId(), "에어맥스", new Price(100000), new Stock(10)));
@@
 try {
     likeFacade.addLike(memberId, productId);
+    likeSuccessCount.incrementAndGet();
 } catch (Exception e) {
     // ignore
 } finally {
@@
 latch1.await();
 executor1.shutdown();
+assertThat(likeSuccessCount.get()).isEqualTo(likeCount);
+assertThat(likeRepository.countByProductId(productId)).isEqualTo(likeCount);
 // 5 users cancel their likes concurrently
 int unlikeCount = 5;
+AtomicInteger unlikeSuccessCount = new AtomicInteger();
 ExecutorService executor2 = Executors.newFixedThreadPool(unlikeCount);
@@
 try {
     likeFacade.removeLike(memberId, productId);
+    unlikeSuccessCount.incrementAndGet();
 } catch (Exception e) {
     // ignore
 } finally {
@@
 latch2.await();
 executor2.shutdown();
 // assert: the Like record count and Product.likeCount must match
+assertThat(unlikeSuccessCount.get()).isEqualTo(unlikeCount);
 long actualLikeRecords = likeRepository.countByProductId(productId);
 Product updatedProduct = productRepository.findById(productId).orElseThrow();
 assertThat(actualLikeRecords).isEqualTo(likeCount - unlikeCount);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/concurrency/LikeConcurrencyTest.java` around lines 87 - 138, The test currently only asserts the final counts so addLike/removeLike failures can cancel out; instrument the test to separately count successful adds and successful removes and assert after each phase: after the first phase assert successfulAdds == likeCount and likeRepository.countByProductId(productId) == likeCount and updated product.getLikeCount() == likeCount; after the second phase assert successfulRemoves == unlikeCount and likeRepository.countByProductId(productId) == likeCount - unlikeCount and product.getLikeCount() == likeCount - unlikeCount. Implement the counters with thread-safe types (e.g., AtomicInteger) inside the executor tasks that call likeFacade.addLike and likeFacade.removeLike, reference the existing symbols latch1/latch2, likeFacade.addLike, likeFacade.removeLike, likeRepository.countByProductId, and productRepository.findById to locate where to add these checks. Ensure exceptions still decrement the latch but do not increment success counters so any failed add/remove causes the corresponding phase assertion to fail.
apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java (1)
120-149: ⚠️ Potential issue | 🟠 Major
Cache invalidation happens before the transaction commits, so a race with concurrent read requests can cause data inconsistency.
All three CUD methods perform eviction first, inside the `@Transactional` scope. The invalidation (invalidate/delete) executes synchronously and immediately, but the database commit happens only after the method returns. A concurrent request arriving in that window reads the not-yet-committed old state from the DB into the now-empty cache, and since there is no second invalidation after the commit completes, the stale entry survives for the TTL (detail: 30 s, list: 15 s).
Fix: defer eviction until after a successful commit with `@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)` or TransactionSynchronizationManager. Also add an integration test that delays the commit and verifies a read in the middle of an update/delete does not repopulate the cache with the old state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java` around lines 120 - 149, The cache eviction calls in createProduct, updateProduct, and deleteProduct run inside the `@Transactional` scope and currently execute before DB commit, causing a race where concurrent readers can repopulate stale data; move eviction to occur after successful commit by registering post-commit actions (e.g., use TransactionSynchronizationManager.registerSynchronization or convert eviction logic into a listener invoked via `@TransactionalEventListener`(phase = TransactionPhase.AFTER_COMMIT)) so that productCachePort.evictProductDetail(...) and productCachePort.evictProductList() are executed only after the transaction for ProductFacade.createProduct / updateProduct / deleteProduct has committed; also add an integration test that simulates concurrent read during update/delete to assert no stale cache repopulation.
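The after-commit deferral requested above can be sketched in plain Java. Real code would use Spring's TransactionSynchronizationManager or `@TransactionalEventListener(phase = AFTER_COMMIT)`; here the mechanism is reduced to a callback queue flushed only when the commit succeeds, and all names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: defer cache eviction until after commit.
class TxTemplate {
    private final List<Runnable> afterCommit = new ArrayList<>();

    void registerAfterCommit(Runnable action) { afterCommit.add(action); }

    void execute(Runnable work) {
        try {
            work.run();                          // DB writes happen here
            afterCommit.forEach(Runnable::run);  // "commit" succeeded: evictions fire only now
        } finally {
            afterCommit.clear();                 // on exception/rollback, callbacks never run
        }
    }
}

public class AfterCommitEvictionDemo {
    public static void main(String[] args) {
        TxTemplate tx = new TxTemplate();
        List<String> log = new ArrayList<>();
        tx.execute(() -> {
            log.add("update product");
            tx.registerAfterCommit(() -> log.add("evict cache"));
        });
        System.out.println(log); // eviction is ordered strictly after the committed write
    }
}
```

Because the eviction runs only after the write is durable, a concurrent reader can no longer repopulate the cache with the pre-commit state.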
🧹 Nitpick comments (14)
.gitignore (1)
45-45: Fix the gitignore pattern to match the blog directory structure. The current !blog/**/*.md pattern does not match the actual layout: blog files are stored flat, directly under blog/, so it should be !blog/*.md. The ** glob is unnecessary when there are no nested directories and obscures the pattern's intent.
Suggested fix:
!blog/*.md
This makes the gitignore rule clearer, and forces the pattern to be revisited explicitly if the directory structure changes later.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore at line 45, Replace the gitignore pattern '!blog/**/*.md' with '!blog/*.md' to match the repository’s flat blog directory layout; locate the existing '!blog/**/*.md' entry in the .gitignore and update it to '!blog/*.md' so only markdown files directly under blog/ are unignored and the overly broad '**' glob is removed.
k6/product-list-no-optimization.js (1)
14-19: Confirm whether the missing pagination parameters are intentional. The other scripts (product-list-optimized.js, product-list-no-cache.js, product-list-benchmark.js) include page and size parameters, but this script omits them. Per the PR description the no-optimization endpoint "returns everything", so this may be deliberate, but the differing query pattern can lead to misread benchmark comparisons. Stating the intent in a comment is better for operations and maintenance.
📝 Suggested fix: add a comment stating the intent
```diff
 export default function () {
     const sort = sorts[Math.floor(Math.random() * sorts.length)];
+    // no-optimization endpoint deliberately skips pagination (full-load scenario)
     const url = `${BASE_URL}/api/v1/products/no-optimization?sort=${sort}`;
     const res = http.get(url);
     checkResponse(res, 'no-optimization-list');
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6/product-list-no-optimization.js` around lines 14 - 19, The no-optimization script is missing pagination params (page/size) intentionally to exercise the "return-all" endpoint; add a brief comment inside the exported default function (export default function()) above the URL construction (const url = ...) explaining that this endpoint deliberately omits page and size for full-result benchmarking and noting that other scripts (product-list-optimized.js, product-list-no-cache.js, product-list-benchmark.js) include pagination to avoid confusion when comparing results; keep the comment short and mention the endpoint path (/api/v1/products/no-optimization) and checkResponse('no-optimization-list') to make the intent obvious to future readers.
k6/product-list-benchmark.js (1)
1-4: Duplicates definitions instead of using the shared common.js utilities. The other k6 scripts (product-list-optimized.js, product-detail.js, etc.) import BASE_URL and checkResponse from common.js, but this file defines them directly. Operationally, a BASE_URL change can miss just this file, and edits to the response-validation logic can break consistency.
♻️ Suggested fix: import from common.js
```diff
 import http from 'k6/http';
-import { check } from 'k6';
-
-const BASE_URL = __ENV.BASE_URL || 'http://localhost:8080';
+import { check } from 'k6';
+import { BASE_URL } from './common.js';
```
Note: the missing try-catch in common.js's checkResponse must be fixed first before switching checkResponse usage over to it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6/product-list-benchmark.js` around lines 1 - 4, This file duplicates BASE_URL and response-check logic instead of reusing common.js; update product-list-benchmark.js to import BASE_URL and checkResponse from common.js (the same way product-list-optimized.js and product-detail.js do), remove the local BASE_URL and any local check/validation code, and rely on the centralized checkResponse implementation; ensure you first fix the try-catch bug inside common.js's checkResponse so the imported function behaves correctly before switching this file to the import.
apps/commerce-batch/src/main/java/com/loopers/batch/job/likecountsync/step/LikeCountSyncTasklet.java (1)
22-25: REPLACE INTO is MySQL-only syntax and behaves as DELETE + INSERT, which can cause unintended side effects. Because REPLACE INTO deletes the existing row and reinserts it, unexpected behavior can occur with AUTO_INCREMENT columns or triggers, and the statement is not portable across DB vendors. Switching to INSERT ... ON DUPLICATE KEY UPDATE updates rows while keeping them in place.
♻️ Suggested change: INSERT ... ON DUPLICATE KEY UPDATE
```diff
 int synced = entityManager.createNativeQuery(
-    "REPLACE INTO product_like_stats (product_id, like_count, synced_at) "
-    + "SELECT l.product_id, COUNT(*), NOW() FROM likes l GROUP BY l.product_id"
+    "INSERT INTO product_like_stats (product_id, like_count, synced_at) "
+    + "SELECT l.product_id, COUNT(*), NOW() FROM likes l GROUP BY l.product_id "
+    + "ON DUPLICATE KEY UPDATE like_count = VALUES(like_count), synced_at = VALUES(synced_at)"
 ).executeUpdate();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/main/java/com/loopers/batch/job/likecountsync/step/LikeCountSyncTasklet.java` around lines 22 - 25, The SQL uses MySQL-only REPLACE INTO (in entityManager.createNativeQuery inside LikeCountSyncTasklet) which deletes and reinserts rows causing side effects; change the statement to use INSERT ... ON DUPLICATE KEY UPDATE so existing rows are updated instead of deleted/recreated, locate the native query string passed to entityManager.createNativeQuery in LikeCountSyncTasklet and replace the REPLACE INTO ... SELECT ... with an INSERT INTO product_like_stats (product_id, like_count, synced_at) SELECT ... ON DUPLICATE KEY UPDATE like_count = VALUES(like_count), synced_at = VALUES(synced_at).
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductLikeStats.java (2)
29-33: Calling ZonedDateTime.now() directly in the constructor makes time hard to control in tests. With the current structure, unit tests cannot pin the syncedAt value, so time-based assertions become unstable. Not a major operational problem, but for test isolation consider injecting a Clock or accepting the timestamp from the caller.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductLikeStats.java` around lines 29 - 33, The constructor ProductLikeStats(Long productId, int likeCount) directly calls ZonedDateTime.now(), which makes tests hard to control; change it to accept the time source instead (either add an overloaded constructor ProductLikeStats(Long productId, int likeCount, ZonedDateTime syncedAt) or inject a java.time.Clock into the constructor or factory so callers/tests can supply a fixed time), update usages to pass Clock.systemDefaultZone() or ZonedDateTime.now(clock) in production, and ensure the syncedAt field is set from the provided value rather than calling ZonedDateTime.now() internally.
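A minimal sketch of the Clock-injection variant suggested above; the constructor shape is illustrative, not the entity's actual API:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

// Illustrative shape: the time source is passed in instead of hardcoded.
class ProductLikeStatsSketch {
    private final Long productId;
    private final int likeCount;
    private final ZonedDateTime syncedAt;

    ProductLikeStatsSketch(Long productId, int likeCount, Clock clock) {
        this.productId = productId;
        this.likeCount = likeCount;
        this.syncedAt = ZonedDateTime.now(clock); // now controllable from tests
    }

    ZonedDateTime getSyncedAt() { return syncedAt; }
}

public class ClockInjectionDemo {
    public static void main(String[] args) {
        // Production would pass Clock.systemDefaultZone(); tests pin the instant.
        Clock fixed = Clock.fixed(Instant.parse("2024-01-01T00:00:00Z"), ZoneOffset.UTC);
        ProductLikeStatsSketch stats = new ProductLikeStatsSketch(1L, 5, fixed);
        System.out.println(stats.getSyncedAt()); // deterministic timestamp
    }
}
```

With a fixed Clock, an assertion on syncedAt is exact rather than a fuzzy "within N seconds of now" check.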
13-39: The entity is missing equals/hashCode. Without equals/hashCode, a JPA entity can behave unexpectedly in HashSets or in identity comparisons inside the persistence context. Since productId is assigned explicitly here, implement identifier-based equals/hashCode.
♻️ Example equals/hashCode
```diff
+import java.util.Objects;
+
 @Entity
 @Table(name = "product_like_stats")
 @Getter
 @NoArgsConstructor(access = AccessLevel.PROTECTED)
 public class ProductLikeStats {
     // ... existing fields ...
+
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) return true;
+        if (o == null || getClass() != o.getClass()) return false;
+        ProductLikeStats that = (ProductLikeStats) o;
+        return productId != null && Objects.equals(productId, that.productId);
+    }
+
+    @Override
+    public int hashCode() {
+        return Objects.hashCode(productId);
+    }
 }
```
As per coding guidelines: "**/domain/**/*.java: design equals/hashCode consistently, either identifier-based or value-based."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductLikeStats.java` around lines 13 - 39, Add identifier-based equals and hashCode to ProductLikeStats using the productId field: implement equals(Object) to return true for same instance, check class compatibility, and compare non-null productId values (treat entities with null productId as not equal to other persisted entities); implement hashCode to derive from productId (return 0 or a constant when productId is null per JPA best practices). Place these methods in the ProductLikeStats class so identity comparisons (e.g., in sets or the persistence context) rely on productId.
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductLikeStatsRepository.java (1)
5-11: The domain interface contains batch-only methods. syncAllFromLikes() and correctProductLikeCounts() are bulk operations used only by the batch job. Exposing such infrastructure-specific methods in the domain layer mixes domain and infrastructure concerns. Consider extracting a separate batch-facing interface, or having the batch module use a JPA repository directly. If the current design is intentional, keeping it is also acceptable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductLikeStatsRepository.java` around lines 5 - 11, The ProductLikeStatsRepository currently exposes batch-only operations (syncAllFromLikes() and correctProductLikeCounts()) which mix domain and infrastructure concerns; extract those two methods into a separate batch-specific interface (e.g., BatchProductLikeStatsRepository) placed in the batch/infrastructure layer, update the concrete implementation to implement both interfaces as needed, and change any consumers of syncAllFromLikes() and correctProductLikeCounts() to depend on the new BatchProductLikeStatsRepository (or use the JPA repository directly from the batch module) so the domain-facing ProductLikeStatsRepository only contains domain CRUD methods like save, saveAll, and findAll.
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductLikeStatsJpaRepository.java (1)
10-13: The same REPLACE query is defined twice, in ProductLikeStatsJpaRepository and LikeCountSyncTasklet. The two queries are identical:
- LikeCountSyncTasklet.java (lines 23-24): executed through the EntityManager
- ProductLikeStatsJpaRepository.java (lines 11-12): declared with a @Query annotation
Because of this duplication, any future query change must be made in both places, and changing only one risks divergence. Recommend extracting the query string into a shared constant, or consolidating on the repository method and having the Tasklet call it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductLikeStatsJpaRepository.java` around lines 10 - 13, The REPLACE INTO SQL is duplicated between ProductLikeStatsJpaRepository.syncAllFromLikes() and LikeCountSyncTasklet (EntityManager usage); consolidate by removing the hardcoded query from the Tasklet and have LikeCountSyncTasklet call the repository method syncAllFromLikes() instead (or alternatively extract the SQL into a single constant like ProductLikeStatsQueries.SYNC_REPLACE_SQL and reference it from both places). Update LikeCountSyncTasklet to inject ProductLikeStatsJpaRepository and call syncAllFromLikes(), or move the SQL constant to a shared class and use that constant in both ProductLikeStatsJpaRepository and LikeCountSyncTasklet so future changes happen in one place.
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductRepository.java (1)
3-4: Don't expose Spring Data types directly from the domain repository interface. Exposing Pageable and Page in the method signatures at lines 22-23 of ProductRepository couples the domain layer to Spring Data JPA's pagination contract. That makes it harder to swap the repository implementation, add a cache layer, or write test doubles. Fix: define domain-specific PageRequest and PageResult models, and push the Pageable conversion down into the infrastructure implementation. No other file in the domain layer currently uses Spring Data types, so only this file needs cleaning up.
When implementing, check that:
- the domain PageRequest carries limit, offset, and sort information
- the infrastructure implementation converts the domain model to a Spring Data Pageable
- tests verify that pagination requests translate into the expected query behavior (limit/offset)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductRepository.java` around lines 3 - 4, ProductRepository currently exposes Spring Data types (Pageable, Page) in its method signatures; replace those with domain pagination models (e.g., create domain.PageRequest carrying limit/offset/sort and domain.PageResult<T> for results) and update ProductRepository method signatures to use PageRequest and PageResult instead of Pageable/Page; move conversion from domain.PageRequest -> org.springframework.data.domain.Pageable into the infrastructure implementation (the class that implements ProductRepository), ensure the infrastructure maps Page<T> -> domain.PageResult<T>, and update tests to assert that domain.PageRequest produces the expected limit/offset/sort behavior when converted by the infrastructure layer.
apps/commerce-api/src/test/java/com/loopers/fake/FakeProductRepository.java (1)
139-148: Align the fake repository's default sort contract with production. The production code normalizes the default/unsupported sort to createdAt DESC, but this fake defaults to id DESC. Once pagination is in play, tests can pass against an ordering different from production, giving false regression signals. Either align the fake comparator with the production rules or share a common sort-conversion helper.
Additionally, with data prepared so that the id order and createdAt order differ, test that the fake and the production repository honor the same sort contract.
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/fake/FakeProductRepository.java` around lines 139 - 148, The fake repository's toComparator currently defaults to id DESC while production normalizes unknown/default sorts to createdAt DESC; update FakeProductRepository.toComparator so the null/default and fallback cases return Comparator.comparing(Product::getCreatedAt).reversed() (and ensure the switch's default branch uses that same comparator) or delegate to the common sort-normalization helper used by production; after changing, add a unit test that seeds products with differing id vs createdAt orders and asserts the fake repository sorts pages the same as the production repository to prevent regressions.
apps/commerce-api/src/main/java/com/loopers/application/product/ProductCachePort.java (1)
3-19: The application port is tied to API DTOs. If ProductCachePort returns ProductDto directly, an API response-schema change shakes the application layer, the cache adapters, and the Redis serialization format all at once. In production, a simple response-field change can then cause cache deserialization failures or unnecessary mass invalidation.
It is safer for the port to use an application-only cache model (or a domain/application DTO) and defer the ProductDto conversion to the controller/adapter boundary.
Additionally, add a boundary test that converts the application model read from the cache into the API DTO, confirming that API DTO changes do not propagate into the cache port contract.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/product/ProductCachePort.java` around lines 3 - 19, ProductCachePort currently exposes API-layer ProductDto types causing API changes to ripple into application and cache layers; update the port to use application/cache-specific models (e.g., create ProductCacheModel.ProductDetail and ProductCacheModel.PagedList or similar) and change signatures of ProductCachePort.getProductDetail/putProductDetail/evictProductDetail/getProductList/putProductList to use those models instead of ProductDto.ProductResponse and ProductDto.PagedProductResponse; move conversion between ProductDto and the new cache models into the controller/adapter boundary (controller/adapter should map API ProductDto <-> application cache models) and add a boundary unit/integration test that reads a cached application model and verifies it converts to the API DTO so API DTO schema changes do not break cache serialization.
apps/commerce-api/src/test/java/com/loopers/application/product/ProductFacadeTest.java (1)
141-168: This test cannot catch DB sorting regressions. ProductFacade is currently wired with FakeProductRepository, so whether the test passes depends on the fake comparator implementation, not on the JPA query or index-backed ordering. The production likes_desc SQL or its tie-break could break and this test would still pass.
This unit test should only verify that the façade forwards the sort value; DB ordering itself belongs in a separate integration test using ProductJpaRepository or ProductRepositoryImpl.
Additionally, add a test against a real DB confirming that likeCount DESC, id DESC ordering holds together with pagination.
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/application/product/ProductFacadeTest.java` around lines 141 - 168, The current unit test getAllProducts_likesDesc_sortedByLikeCount relies on FakeProductRepository and thus does not validate real DB ordering; change this to an integration test that boots the Spring context (or uses `@DataJpaTest`) and exercises ProductFacade backed by the real ProductJpaRepository / ProductRepositoryImpl so the SQL ordering is validated; create a new integration test that inserts products with controlled likeCount and ids, calls productFacade.getAllProducts("likes_desc", PageRequest.of(...)) and asserts the result is ordered by likeCount DESC then id DESC and that pagination boundaries are preserved; keep the original unit test pared down to only assert that ProductFacade passes the "likes_desc" sort through to the repository (mocking the repository) if you want to retain a fast unit test.
apps/commerce-api/src/test/java/com/loopers/fake/FakeProductCachePort.java (1)
6-36: This fake cannot detect cache regressions. get* always returns null and put*/evict* are no-ops, so the current tests only see the cache-miss-then-DB-fallback path. If cache population, hits, or invalidation break in production, the tests still pass and the DB-load regression goes unnoticed.
Even as a test fake, it should keep detail/list entries in a Map and actually apply evictions. Expose call counts if needed.
Additionally, add tests verifying first call = miss → second call = hit, and eviction behavior after update/delete/like.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/fake/FakeProductCachePort.java` around lines 6 - 36, FakeProductCachePort currently always returns null and no-ops on put/evict so tests can't detect cache hits/evicts; implement an in-memory backing store (e.g., Maps) inside FakeProductCachePort to store ProductDto.ProductResponse by productId and ProductDto.PagedProductResponse keyed by (brandId, sort, page, size), make getProductDetail/getProductList return stored values, putProductDetail/putProductList store values, and evictProductDetail/evictProductList remove entries; optionally add simple counters (e.g., hitCount/getCount/putCount/evictCount) as fields to expose call counts for assertions and use these behaviors in tests to verify "first call = miss, second call = hit" and that update/delete/like flows trigger evict behavior via evictProductDetail/evictProductList.
apps/commerce-api/src/main/java/com/loopers/interfaces/api/like/LikeController.java (1)
20-27: Like-cache invalidation belongs in LikeFacade's after-commit path, not in the Controller. In the current implementation the HTTP entrypoint knows about the cache side effect directly, so if a batch job, an event, or another API later modifies likes, the invalidation is easily missed; in production, such omissions are the longest-lived cause of stale caches. Fix: perform the detail eviction and the list version invalidation after commit, inside LikeFacade or from a domain event, so that every like write path converges in one place. Additional tests: integration tests on LikeFacade.addLike/removeLike verifying invalidation fires exactly once on success and is not called on exception or rollback.
This follows the coding guideline "**/*Controller*.java: controllers focus on request validation (Bean Validation) and response assembly; business logic moves to the Service."
Also applies to: 31-35
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/interfaces/api/like/LikeController.java` around lines 20 - 27, The controller currently performs cache eviction (productCachePort.evictProductDetail and evictProductList) after calling likeFacade.addLike, but eviction must live in the service/facade post-commit to guarantee it runs for all write paths and not on rollbacks; move the eviction logic into LikeFacade (e.g., inside addLike/removeLike) and execute it in an after-commit hook (TransactionSynchronization or a domain event listener) so eviction runs only on successful commits, and add integration tests verifying that LikeFacade.addLike/removeLike trigger a single eviction on success and do not evict on exception/rollback.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/commerce-api/build.gradle.kts`:
- Around line 9-10: The caffeine dependency is currently floating; add a fixed
version key in gradle.properties (e.g., caffeineVersion=3.1.x) and update the
implementation declaration in the build script from
"com.github.ben-manes.caffeine:caffeine" to reference that property (use
project.properties["caffeineVersion"]) or hard-code
"com.github.ben-manes.caffeine:caffeine:3.1.x"; optionally add a CI dependency
verification step (dependencyInsight or a lockfile check task) to prevent future
drift.
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`:
- Around line 32-33: When saving/removing likes in LikeFacade, check the return
value of productRepository.incrementLikeCount(productId) and
decrementLikeCount(productId) and if the returned affected rows != 1 throw a
CoreException so the surrounding transaction rolls back; update the LikeFacade
methods that call likeRepository.save(...) and likeRepository.delete(...) to
validate the update count from productRepository and throw CoreException on any
non-1 result, and add unit/integration tests that stub productRepository to
return 0 for both add and remove paths to assert the transaction is rolled back
and no counter drift occurs.
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`:
- Around line 69-85: The getAllProductsCached method currently uses page, size,
and sort directly for cache lookup and PageRequest creation, which can produce
invalid inputs and inconsistent exceptions; before calling
productCachePort.getProductList or PageRequest.of, validate that page >= 0, 1 <=
size <= MAX_PAGE_SIZE (define MAX_PAGE_SIZE), and that sort is one of the
allowed values, and if any check fails throw new
CoreException(ErrorType.BAD_REQUEST); ensure validation happens at the top of
getAllProductsCached (reference method name) so invalid requests never call
productCachePort or create PageRequest, and add unit tests asserting BAD_REQUEST
and that productCachePort is not invoked for invalid page/size/sort cases.
In `@apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java`:
- Around line 12-16: The like_count column and indexes newly added to the Product entity will not be created by an application deploy alone, because production runs with ddl-auto=none; add a Flyway or manual SQL migration matching the Product entity (product class) change so the DB schema is applied first: (1) ALTER TABLE product ADD
COLUMN like_count INT NOT NULL DEFAULT 0; (2) if backfill is needed, UPDATE product SET
like_count=0; (3) CREATE INDEX statements for idx_product_like_count,
idx_product_brand_like_count, idx_product_brand_price; (4) only then deploy the application and run the batch
sync; also add schema verification to the release pipeline or tests (an integration test that confirms, under ddl-auto=none, that the Product entity fields exist in the DB).
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/CaffeineProductCacheAdapter.java`:
- Around line 53-65: The L1 list cache currently has no generation/versioning so
evictProductList() calling listCache.invalidateAll() clears unrelated brand/page
entries; change the contract in ProductCachePort to include a list-generation
(version) for list keys and update listKey(Long brandId, String sort, int page,
int size) to incorporate that generation, then modify evictProductList() to
increment or set the generation for the affected list scope instead of
invalidateAll() (or implement a pub/sub sync to bump L1 generation), and add
tests that pre-warm multiple brand/page entries and exercise repeated
like/unlike invalidations to assert only the relevant generation becomes
bypassed while other brand/page entries remain cached.
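The generation-stamped key idea can be sketched as follows. A single global generation is shown for brevity; the review above asks for a per-scope (e.g. per-brand) generation, which would replace the one AtomicLong with a map of counters:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: eviction bumps a generation counter instead of invalidateAll(),
// so stale entries simply become unreachable and age out of Caffeine.
public class GenerationListKeys {
    private final AtomicLong generation = new AtomicLong(0);

    public String listKey(Long brandId, String sort, int page, int size) {
        return "g" + generation.get() + ":b" + brandId + ":" + sort + ":" + page + ":" + size;
    }

    public void evictProductList() {
        generation.incrementAndGet(); // subsequent lookups use fresh keys
    }

    public static void main(String[] args) {
        GenerationListKeys keys = new GenerationListKeys();
        String before = keys.listKey(1L, "likes_desc", 0, 20);
        keys.evictProductList();
        String after = keys.listKey(1L, "likes_desc", 0, 20);
        System.out.println(before.equals(after)); // false: same params, new generation
    }
}
```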
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java`:
- Around line 40-48: The countQuery for findAllWithBrandPaged and
findAllByBrandIdWithBrandPaged is missing the LEFT JOIN Brand/soft-delete
filter, causing totalElements/totalPages to overcount; update both countQuery
strings to mirror the select query by including "LEFT JOIN Brand b ON b.id =
p.brandId" and the same "AND (b.deletedAt IS NULL OR b.id IS NULL)" condition
(and keep the brandId predicate for findAllByBrandIdWithBrandPaged), then add an
integration test that inserts a product linked to a soft-deleted Brand and
asserts both the content list shrinks and totalElements decreases accordingly.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductLikeStatsJpaRepository.java`:
- Around line 10-18: The `@Modifying` annotations in ProductLikeStatsJpaRepository
(on methods syncAllFromLikes and correctProductLikeCounts) should include
clearAutomatically = true to avoid stale entities in the persistence context
after the native update/replace queries; update both `@Modifying` declarations to
add clearAutomatically = true so the persistence context is cleared
automatically after executing these modifying queries.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/RedisProductCacheAdapter.java`:
- Around line 23-35: The code uses separate readTemplate (replica) and
writeTemplate (master) causing read-after-write staleness when
evictProductDetail()/evictProductList() perform master writes (delete or INCR)
but subsequent reads (getProductDetail(), buildListKey()) still use
readTemplate; modify these read paths to read critical consistency-sensitive
keys (the version key and immediate post-evict reads) from the master by
switching to writeTemplate or introducing an explicit masterRead method that
uses writeTemplate for version and post-evict reads, and ensure
evictProductList()/evictProductDetail() call that master-read check before
returning so L1 won't be populated with stale payloads; add tests that stub
readTemplate to simulate replica lag and verify that after
evictProductDetail()/evictProductList() the code detects stale responses and
re-reads from master (writeTemplate) to refresh correctly.
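A toy model of the staleness the comment describes, with two maps standing in for master (writeTemplate) and a lagging replica (readTemplate); it is a sketch of the failure mode, not the Redis adapter itself:

```java
import java.util.HashMap;
import java.util.Map;

// Consistency-sensitive reads (the version key, immediate post-evict reads)
// must go to the master; the replica map here never receives the write,
// simulating replication lag.
public class MasterReplicaVersion {
    final Map<String, Long> master = new HashMap<>();
    final Map<String, Long> replica = new HashMap<>(); // lags behind master

    void evictProductList(String versionKey) {
        master.merge(versionKey, 1L, Long::sum); // INCR lands on the master only
    }

    long readVersionFromReplica(String key) { return replica.getOrDefault(key, 0L); }
    long readVersionFromMaster(String key)  { return master.getOrDefault(key, 0L); }

    public static void main(String[] args) {
        MasterReplicaVersion redis = new MasterReplicaVersion();
        redis.evictProductList("product:list:version");
        // Reading the replica here would rebuild L1 with a stale generation.
        System.out.println(redis.readVersionFromReplica("product:list:version")); // still 0
        System.out.println(redis.readVersionFromMaster("product:list:version"));  // 1
    }
}
```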
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductBenchmarkController.java`:
- Around line 13-16: The ProductBenchmarkController (class
ProductBenchmarkController) currently exposes the heavy `/api/v1/products`
endpoints including the `/no-optimization` path in the default runtime; restrict
this by annotating the controller (or the specific heavy handler method) with a
runtime-only activation such as `@Profile`("benchmark") or by moving the
controller into a benchmark-only module so the bean does not load in default
profiles; update tests to assert that under the default profile the
controller/mapping is absent (404 or bean not present) and that under the
"benchmark" profile the controller and `/no-optimization` mapping are active.
In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/product/CaffeineProductCacheAdapterTest.java`:
- Around line 61-99: Add unit tests that verify cache keys include sort, page,
and size (not just brandId): create a test (e.g.,
differentSortPageSizeAreSeparateEntries) that calls cache.putProductList and
cache.getProductList with the same brandId (including null brandId) but
different sort values, different page numbers (0 vs 1), and different sizes (10
vs 20), asserting each variation yields its own cached entry (i.e., put one
response and verify get for the same key returns it while get for the changed
sort/page/size returns null or a different response); reuse the existing
cache.putProductList, cache.getProductList and cache.evictProductList symbols
and mirror the style of existing tests (putAndGet,
differentBrandIdIsSeparateEntry, evictClearsAllListEntries).
In
`@apps/commerce-api/src/test/java/com/loopers/performance/ProductPerformanceTest.java`:
- Around line 71-98: The test currently swallows seeding exceptions and does
per-row saves in ProductPerformanceTest causing silent data drift; change the
seeding to use chunked/batched inserts (or a dedicated SQL seeder) for both
Product and Like creation instead of single-row save loops (references:
productRepository.save, likeRepository.save,
productRepository.incrementLikeCount), collect and count any failures during
seeding and fail the test if failures > 0 (do not ignore exceptions), and after
seeding add a consistency assertion loop that for a sample of products calls the
authoritative count method (e.g., likeRepository.countByProductId or a similar
repository count) and asserts it equals product.getLikeCount() (or
productRepository.findById(...).getLikeCount()) to ensure likeCount matches
actual Like rows.
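The chunked-seeding shape can be sketched generically; the `batchInsert` consumer stands in for a saveAll/JDBC batch call, and failures are counted rather than swallowed so the test can assert `failures == 0`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: seed rows in fixed-size chunks, recording (not ignoring) failures.
public class ChunkedSeeder {
    public static <T> int seedInChunks(List<T> rows, int chunkSize, Consumer<List<T>> batchInsert) {
        int failures = 0;
        for (int i = 0; i < rows.size(); i += chunkSize) {
            List<T> chunk = rows.subList(i, Math.min(i + chunkSize, rows.size()));
            try {
                batchInsert.accept(chunk);
            } catch (RuntimeException e) {
                failures++; // surfaced to the caller so the test can fail fast
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) rows.add(i);
        List<Integer> inserted = new ArrayList<>();
        int failures = seedInChunks(rows, 3, inserted::addAll);
        System.out.println(inserted.size() + " inserted, " + failures + " failures");
    }
}
```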
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/likecountsync/step/LikeCountSyncTasklet.java`:
- Around line 20-37: The current execute method performs large REPLACE INTO and
UPDATE in a single transaction which risks timeouts and locks; refactor
execute(StepContribution, ChunkContext) to process likes in smaller batches
(e.g., paginate product IDs or aggregate by ranges) using the entityManager in
repeated smaller transactions to avoid long-held locks, add a step to ensure
products with zero likes are represented in product_like_stats (e.g., insert or
upsert entries for product ids missing from likes with like_count = 0 using
product table as source), and wrap DB operations with try/catch to log
exceptions (include the caught exception in logs) so failures are recorded;
reference the existing methods/variables entityManager, product_like_stats,
likes, and the execute method when implementing these changes.
In `@k6/common.js`:
- Around line 27-35: checkResponse currently calls JSON.parse(r.body) without
error handling which will throw on malformed responses; wrap the parse and
subsequent access in a try-catch inside checkResponse so that parse errors do
not crash the k6 iteration—on catch return false for the [`${name} has data`]
predicate (and optionally log the error with context using console.error
including name and error.message) so the check simply fails instead of throwing;
update the function around JSON.parse to handle this gracefully (referencing
checkResponse and JSON.parse) similar to how product-list-benchmark.js handles
parsing.
---
Outside diff comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`:
- Around line 120-149: The cache eviction calls in createProduct, updateProduct,
and deleteProduct run inside the `@Transactional` scope and currently execute
before DB commit, causing a race where concurrent readers can repopulate stale
data; move eviction to occur after successful commit by registering post-commit
actions (e.g., use TransactionSynchronizationManager.registerSynchronization or
convert eviction logic into a listener invoked via
`@TransactionalEventListener`(phase = TransactionPhase.AFTER_COMMIT)) so that
productCachePort.evictProductDetail(...) and productCachePort.evictProductList()
are executed only after the transaction for ProductFacade.createProduct /
updateProduct / deleteProduct has committed; also add an integration test that
simulates concurrent read during update/delete to assert no stale cache
repopulation.
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductDto.java`:
- Around line 49-57: The ProductResponse.from(Product) factory currently
hardcodes likeCount to 0; change it to map the new entity field by using
product.getLikeCount() when constructing ProductResponse (update the constructor
call inside ProductResponse.from to pass product.getLikeCount() instead of 0).
Also add a test that creates a Product with likeCount != 0 and asserts that both
the detailed API response (via ProductResponse.from) and the cached detailed
response return the same likeCount to prevent regressions.
In
`@apps/commerce-api/src/test/java/com/loopers/concurrency/LikeConcurrencyTest.java`:
- Around line 87-138: The test currently only asserts the final counts so
addLike/removeLike failures can cancel out; instrument the test to separately
count successful adds and successful removes and assert after each phase: after
the first phase assert successfulAdds == likeCount and
likeRepository.countByProductId(productId) == likeCount and updated
product.getLikeCount() == likeCount; after the second phase assert
successfulRemoves == unlikeCount and likeRepository.countByProductId(productId)
== likeCount - unlikeCount and product.getLikeCount() == likeCount -
unlikeCount. Implement the counters with thread-safe types (e.g., AtomicInteger)
inside the executor tasks that call likeFacade.addLike and
likeFacade.removeLike, reference the existing symbols latch1/latch2,
likeFacade.addLike, likeFacade.removeLike, likeRepository.countByProductId, and
productRepository.findById to locate where to add these checks. Ensure
exceptions still decrement the latch but do not increment success counters so
any failed add/remove causes the corresponding phase assertion to fail.
---
Nitpick comments:
In @.gitignore:
- Line 45: Replace the gitignore pattern '!blog/**/*.md' with '!blog/*.md' to
match the repository’s flat blog directory layout; locate the existing
'!blog/**/*.md' entry in the .gitignore and update it to '!blog/*.md' so only
markdown files directly under blog/ are unignored and the overly broad '**' glob
is removed.
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductCachePort.java`:
- Around line 3-19: ProductCachePort currently exposes API-layer ProductDto
types causing API changes to ripple into application and cache layers; update
the port to use application/cache-specific models (e.g., create
ProductCacheModel.ProductDetail and ProductCacheModel.PagedList or similar) and
change signatures of
ProductCachePort.getProductDetail/putProductDetail/evictProductDetail/getProductList/putProductList
to use those models instead of ProductDto.ProductResponse and
ProductDto.PagedProductResponse; move conversion between ProductDto and the new
cache models into the controller/adapter boundary (controller/adapter should map
API ProductDto <-> application cache models) and add a boundary unit/integration
test that reads a cached application model and verifies it converts to the API
DTO so API DTO schema changes do not break cache serialization.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductLikeStats.java`:
- Around line 29-33: The constructor ProductLikeStats(Long productId, int
likeCount) directly calls ZonedDateTime.now(), which makes tests hard to
control; change it to accept the time source instead (either add an overloaded
constructor ProductLikeStats(Long productId, int likeCount, ZonedDateTime
syncedAt) or inject a java.time.Clock into the constructor or factory so
callers/tests can supply a fixed time), update usages to pass
Clock.systemDefaultZone() or ZonedDateTime.now(clock) in production, and ensure
the syncedAt field is set from the provided value rather than calling
ZonedDateTime.now() internally.
- Around line 13-39: Add identifier-based equals and hashCode to
ProductLikeStats using the productId field: implement equals(Object) to return
true for same instance, check class compatibility, and compare non-null
productId values (treat entities with null productId as not equal to other
persisted entities); implement hashCode to derive from productId (return 0 or a
constant when productId is null per JPA best practices). Place these methods in
the ProductLikeStats class so identity comparisons (e.g., in sets or the
persistence context) rely on productId.
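Both suggestions for ProductLikeStats (Clock injection and productId-based identity) can be sketched together; the class and accessor names below loosely mirror the entity and are illustrative only:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Sketch: time source supplied by the caller, identity derived from productId.
public class ProductLikeStatsSketch {
    private final Long productId;
    private final int likeCount;
    private final ZonedDateTime syncedAt;

    public ProductLikeStatsSketch(Long productId, int likeCount, Clock clock) {
        this.productId = productId;
        this.likeCount = likeCount;
        this.syncedAt = ZonedDateTime.now(clock); // deterministic under Clock.fixed in tests
    }

    public ZonedDateTime getSyncedAt() { return syncedAt; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ProductLikeStatsSketch other)) return false;
        // Entities with a null (unassigned) id are never equal to others.
        return productId != null && productId.equals(other.productId);
    }

    @Override
    public int hashCode() {
        return productId == null ? 0 : productId.hashCode();
    }

    public static void main(String[] args) {
        Clock fixed = Clock.fixed(Instant.parse("2025-01-01T00:00:00Z"), ZoneId.of("UTC"));
        ProductLikeStatsSketch a = new ProductLikeStatsSketch(1L, 5, fixed);
        ProductLikeStatsSketch b = new ProductLikeStatsSketch(1L, 9, fixed);
        System.out.println(a.equals(b)); // true: same productId, differing likeCount
        System.out.println(a.getSyncedAt()); // fixed instant, not wall-clock time
    }
}
```

Production code would pass Clock.systemDefaultZone(); tests pass Clock.fixed(...).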
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductLikeStatsRepository.java`:
- Around line 5-11: The ProductLikeStatsRepository currently exposes batch-only
operations (syncAllFromLikes() and correctProductLikeCounts()) which mix domain
and infrastructure concerns; extract those two methods into a separate
batch-specific interface (e.g., BatchProductLikeStatsRepository) placed in the
batch/infrastructure layer, update the concrete implementation to implement both
interfaces as needed, and change any consumers of syncAllFromLikes() and
correctProductLikeCounts() to depend on the new BatchProductLikeStatsRepository
(or use the JPA repository directly from the batch module) so the domain-facing
ProductLikeStatsRepository only contains domain CRUD methods like save, saveAll,
and findAll.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductRepository.java`:
- Around line 3-4: ProductRepository currently exposes Spring Data types
(Pageable, Page) in its method signatures; replace those with domain pagination
models (e.g., create domain.PageRequest carrying limit/offset/sort and
domain.PageResult<T> for results) and update ProductRepository method signatures
to use PageRequest and PageResult instead of Pageable/Page; move conversion from
domain.PageRequest -> org.springframework.data.domain.Pageable into the
infrastructure implementation (the class that implements ProductRepository),
ensure the infrastructure maps Page<T> -> domain.PageResult<T>, and update tests
to assert that domain.PageRequest produces the expected limit/offset/sort
behavior when converted by the infrastructure layer.
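The domain-owned pagination models could look like the records below (names and fields are hypothetical; the infrastructure implementation would map them to and from Spring Data's Pageable/Page):

```java
import java.util.List;

// Hypothetical domain pagination models, free of Spring Data types.
record DomainPageRequest(int page, int size, String sort) {
    long offset() { return (long) page * size; }
}

record DomainPageResult<T>(List<T> content, long totalElements, int page, int size) {
    int totalPages() { return size == 0 ? 0 : (int) Math.ceil((double) totalElements / size); }
}

public class DomainPagingDemo {
    public static void main(String[] args) {
        DomainPageRequest req = new DomainPageRequest(2, 20, "likes_desc");
        DomainPageResult<String> res = new DomainPageResult<>(List.of("p1"), 45, 2, 20);
        System.out.println(req.offset());     // 40: page 2 of size 20
        System.out.println(res.totalPages()); // 3: ceil(45 / 20)
    }
}
```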
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductLikeStatsJpaRepository.java`:
- Around line 10-13: The REPLACE INTO SQL is duplicated between
ProductLikeStatsJpaRepository.syncAllFromLikes() and LikeCountSyncTasklet
(EntityManager usage); consolidate by removing the hardcoded query from the
Tasklet and have LikeCountSyncTasklet call the repository method
syncAllFromLikes() instead (or alternatively extract the SQL into a single
constant like ProductLikeStatsQueries.SYNC_REPLACE_SQL and reference it from
both places). Update LikeCountSyncTasklet to inject
ProductLikeStatsJpaRepository and call syncAllFromLikes(), or move the SQL
constant to a shared class and use that constant in both
ProductLikeStatsJpaRepository and LikeCountSyncTasklet so future changes happen
in one place.
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/like/LikeController.java`:
- Around line 20-27: The controller currently performs cache eviction
(productCachePort.evictProductDetail and evictProductList) after calling
likeFacade.addLike, but eviction must live in the service/facade post-commit to
guarantee it runs for all write paths and not on rollbacks; move the eviction
logic into LikeFacade (e.g., inside addLike/removeLike) and execute it in an
after-commit hook (TransactionSynchronization or a domain event listener) so
eviction runs only on successful commits, and add integration tests verifying
that LikeFacade.addLike/removeLike trigger a single eviction on success and do
not evict on exception/rollback.
In
`@apps/commerce-api/src/test/java/com/loopers/application/product/ProductFacadeTest.java`:
- Around line 141-168: The current unit test
getAllProducts_likesDesc_sortedByLikeCount relies on FakeProductRepository and
thus does not validate real DB ordering; change this to an integration test that
boots the Spring context (or uses `@DataJpaTest`) and exercises ProductFacade
backed by the real ProductJpaRepository / ProductRepositoryImpl so the SQL
ordering is validated; create a new integration test that inserts products with
controlled likeCount and ids, calls productFacade.getAllProducts("likes_desc",
PageRequest.of(...)) and asserts the result is ordered by likeCount DESC then id
DESC and that pagination boundaries are preserved; keep the original unit test
pared down to only assert that ProductFacade passes the "likes_desc" sort
through to the repository (mocking the repository) if you want to retain a fast
unit test.
In `@apps/commerce-api/src/test/java/com/loopers/fake/FakeProductCachePort.java`:
- Around line 6-36: FakeProductCachePort currently always returns null and
no-ops on put/evict so tests can't detect cache hits/evicts; implement an
in-memory backing store (e.g., Maps) inside FakeProductCachePort to store
ProductDto.ProductResponse by productId and ProductDto.PagedProductResponse
keyed by (brandId, sort, page, size), make getProductDetail/getProductList
return stored values, putProductDetail/putProductList store values, and
evictProductDetail/evictProductList remove entries; optionally add simple
counters (e.g., hitCount/getCount/putCount/evictCount) as fields to expose call
counts for assertions and use these behaviors in tests to verify "first call =
miss, second call = hit" and that update/delete/like flows trigger evict
behavior via evictProductDetail/evictProductList.
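A sketch of the recording fake (String stands in for ProductDto.ProductResponse; the real FakeProductCachePort would also keep a map keyed by brandId/sort/page/size for lists):

```java
import java.util.HashMap;
import java.util.Map;

// In-memory fake with hit/miss/put/evict counters for test assertions.
public class RecordingFakeProductCache {
    private final Map<Long, String> details = new HashMap<>();
    int hitCount, missCount, putCount, evictCount;

    public String getProductDetail(Long productId) {
        String cached = details.get(productId);
        if (cached == null) missCount++; else hitCount++;
        return cached;
    }

    public void putProductDetail(Long productId, String response) {
        putCount++;
        details.put(productId, response);
    }

    public void evictProductDetail(Long productId) {
        evictCount++;
        details.remove(productId);
    }

    public static void main(String[] args) {
        RecordingFakeProductCache fake = new RecordingFakeProductCache();
        fake.getProductDetail(1L);             // first call: miss
        fake.putProductDetail(1L, "detail-1");
        fake.getProductDetail(1L);             // second call: hit
        fake.evictProductDetail(1L);
        fake.getProductDetail(1L);             // miss again after evict
        System.out.println(fake.hitCount + " hit, " + fake.missCount + " miss, "
                + fake.putCount + " put, " + fake.evictCount + " evict");
    }
}
```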
In `@apps/commerce-api/src/test/java/com/loopers/fake/FakeProductRepository.java`:
- Around line 139-148: The fake repository's toComparator currently defaults to
id DESC while production normalizes unknown/default sorts to createdAt DESC;
update FakeProductRepository.toComparator so the null/default and fallback cases
return Comparator.comparing(Product::getCreatedAt).reversed() (and ensure the
switch's default branch uses that same comparator) or delegate to the common
sort-normalization helper used by production; after changing, add a unit test
that seeds products with differing id vs createdAt orders and asserts the fake
repository sorts pages the same as the production repository to prevent
regressions.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/likecountsync/step/LikeCountSyncTasklet.java`:
- Around line 22-25: The SQL uses MySQL-only REPLACE INTO (in
entityManager.createNativeQuery inside LikeCountSyncTasklet) which deletes and
reinserts rows causing side effects; change the statement to use INSERT ... ON
DUPLICATE KEY UPDATE so existing rows are updated instead of
deleted/recreated—locate the native query string passed to
entityManager.createNativeQuery in LikeCountSyncTasklet and replace the REPLACE
INTO ... SELECT ... with an INSERT INTO product_like_stats (product_id,
like_count, synced_at) SELECT ... ON DUPLICATE KEY UPDATE like_count =
VALUES(like_count), synced_at = VALUES(synced_at).
In `@k6/product-list-benchmark.js`:
- Around line 1-4: This file duplicates BASE_URL and response-check logic
instead of reusing common.js; update product-list-benchmark.js to import
BASE_URL and checkResponse from common.js (the same way
product-list-optimized.js and product-detail.js do), remove the local BASE_URL
and any local check/validation code, and rely on the centralized checkResponse
implementation; ensure you first fix the try-catch bug inside common.js's
checkResponse so the imported function behaves correctly before switching this
file to the import.
In `@k6/product-list-no-optimization.js`:
- Around line 14-19: The no-optimization script is missing pagination params
(page/size) intentionally to exercise the "return-all" endpoint; add a brief
comment inside the exported default function (export default function()) above
the URL construction (const url = ...) explaining that this endpoint
deliberately omits page and size for full-result benchmarking and noting that
other scripts (product-list-optimized.js, product-list-no-cache.js,
product-list-benchmark.js) include pagination to avoid confusion when comparing
results; keep the comment short and mention the endpoint path
(/api/v1/products/no-optimization) and checkResponse('no-optimization-list') to
make the intent obvious to future readers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 94a40502-1649-49d1-a8b7-d83ccd8f5948
⛔ Files ignored due to path filters (8)
blog/blog-week5-read-optimization.md is excluded by !**/*.md and included by **
blog/round5-read-optimization.md is excluded by !**/*.md and included by **
docs/images/grafana-10m-error-hikari-jvm.png is excluded by !**/*.png, !**/*.png and included by **
docs/images/grafana-10m-l1l2-error-hikari-jvm.png is excluded by !**/*.png, !**/*.png and included by **
docs/images/grafana-10m-l1l2-response-time-rps.png is excluded by !**/*.png, !**/*.png and included by **
docs/images/grafana-10m-response-time-rps.png is excluded by !**/*.png, !**/*.png and included by **
docs/images/grafana-error-hikari-jvm.png is excluded by !**/*.png, !**/*.png and included by **
docs/images/grafana-response-time-rps.png is excluded by !**/*.png, !**/*.png and included by **
📒 Files selected for processing (37)
.gitignore
apps/commerce-api/build.gradle.kts
apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
apps/commerce-api/src/main/java/com/loopers/application/product/ProductCachePort.java
apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
apps/commerce-api/src/main/java/com/loopers/domain/like/Like.java
apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductLikeStats.java
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductLikeStatsRepository.java
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductRepository.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/CaffeineProductCacheAdapter.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/MultiLayerProductCacheAdapter.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductLikeStatsJpaRepository.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductLikeStatsRepositoryImpl.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/RedisProductCacheAdapter.java
apps/commerce-api/src/main/java/com/loopers/interfaces/api/like/LikeController.java
apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductBenchmarkController.java
apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductController.java
apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductDto.java
apps/commerce-api/src/test/java/com/loopers/application/like/LikeFacadeTest.java
apps/commerce-api/src/test/java/com/loopers/application/product/ProductFacadeTest.java
apps/commerce-api/src/test/java/com/loopers/concurrency/LikeConcurrencyTest.java
apps/commerce-api/src/test/java/com/loopers/fake/FakeProductCachePort.java
apps/commerce-api/src/test/java/com/loopers/fake/FakeProductRepository.java
apps/commerce-api/src/test/java/com/loopers/infrastructure/product/CaffeineProductCacheAdapterTest.java
apps/commerce-api/src/test/java/com/loopers/infrastructure/product/MultiLayerProductCacheAdapterTest.java
apps/commerce-api/src/test/java/com/loopers/performance/ProductPerformanceTest.java
apps/commerce-batch/src/main/java/com/loopers/batch/job/likecountsync/LikeCountSyncJobConfig.java
apps/commerce-batch/src/main/java/com/loopers/batch/job/likecountsync/step/LikeCountSyncTasklet.java
k6/common.js
k6/product-detail.js
k6/product-list-benchmark.js
k6/product-list-no-cache.js
k6/product-list-no-optimization.js
k6/product-list-optimized.js
// cache
implementation("com.github.ben-manes.caffeine:caffeine")
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Purpose: check whether the caffeine version is pinned directly or managed via a BOM/version catalog.
# Expected result: one of a direct version, a version catalog entry, or a platform/BOM declaration should appear.
rg -n --hidden -g 'build.gradle.kts' -g 'settings.gradle.kts' -g 'libs.versions.toml' \
'com\.github\.ben-manes\.caffeine:caffeine|spring-boot-dependencies|platform\(|enforcedPlatform\(|versionCatalogs|caffeine'
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 183
🏁 Script executed:
# Check the root-level build.gradle.kts
cat -n build.gradle.kts 2>/dev/null | head -50 || echo "Root build.gradle.kts not found"
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1713
🏁 Script executed:
# Check version catalog configuration in settings.gradle.kts
cat -n settings.gradle.kts 2>/dev/null | head -100 || echo "settings.gradle.kts not found"
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1230
🏁 Script executed:
# Locate libs.versions.toml and print its contents
find . -name "libs.versions.toml" -type f -exec echo "Found: {}" \; -exec cat -n {} \;
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 67
🏁 Script executed:
# Check version-management files across the whole project
git ls-files | grep -E '(gradle|version|catalog)' | head -20
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 542
🏁 Script executed:
cat -n gradle.properties
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 747
🏁 Script executed:
cat -n apps/commerce-api/build.gradle.kts
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1477
🏁 Script executed:
rg -n "caffeine" modules/*/build.gradle.kts apps/*/build.gradle.kts supports/*/build.gradle.kts
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 183
The Caffeine dependency version must be declared in gradle.properties or pinned directly.
The caffeine dependency is currently not version-managed at all. The Spring Cloud Dependencies BOM does not cover caffeine, so the library resolves to a floating version under Gradle's default rules. Depending on when the build runs, an unintended version may be installed, and changes in cache memory behavior or eviction policy would make production incidents harder to diagnose.
Suggested fix:
- Declare the version in gradle.properties as `caffeineVersion=3.1.x`
- Change line 10 of apps/commerce-api/build.gradle.kts to `implementation("com.github.ben-manes.caffeine:caffeine:${project.properties["caffeineVersion"]}")`
- Or pin the version directly:
implementation("com.github.ben-manes.caffeine:caffeine:3.1.x")
Additionally, a CI task that verifies dependency versions (dependencyInsight, or a lockfile check) would prevent version drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/commerce-api/build.gradle.kts` around lines 9 - 10, The caffeine
dependency is currently floating; add a fixed version key in gradle.properties
(e.g., caffeineVersion=3.1.x) and update the implementation declaration in the
build script from "com.github.ben-manes.caffeine:caffeine" to reference that
property (use project.properties["caffeineVersion"]) or hard-code
"com.github.ben-manes.caffeine:caffeine:3.1.x"; optionally add a CI dependency
verification step (dependencyInsight or a lockfile check task) to prevent future
drift.
likeRepository.save(new Like(memberId, productId));
productRepository.incrementLikeCount(productId);
The like-row change and the likeCount update result must be validated together.
incrementLikeCount and decrementLikeCount return the number of affected rows, but it is currently ignored. In production, a soft-deleted product or a race condition can yield a zero-row update while the transaction still completes normally, letting the Like record and the denormalized counter diverge — after which list ordering and cached values drift.
If the affected row count is not 1, throw a CoreException so the entire transaction rolls back.
Additionally, with a stub repository that returns 0, test that both the add and remove paths roll back and leave no counter drift.
Based on learnings, "In the loop-pack-be-l2-vol3-java project, enforce unified error handling by routing errors through CoreException to ApiControllerAdvice to ensure a consistent response format."
Also applies to: 43-44
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`
around lines 32 - 33, When saving/removing likes in LikeFacade, check the return
value of productRepository.incrementLikeCount(productId) and
decrementLikeCount(productId) and if the returned affected rows != 1 throw a
CoreException so the surrounding transaction rolls back; update the LikeFacade
methods that call likeRepository.save(...) and likeRepository.delete(...) to
validate the update count from productRepository and throw CoreException on any
non-1 result, and add unit/integration tests that stub productRepository to
return 0 for both add and remove paths to assert the transaction is rolled back
and no counter drift occurs.
public ProductDto.PagedProductResponse getAllProductsCached(Long brandId, String sort, int page, int size) {
    ProductDto.PagedProductResponse cached = productCachePort.getProductList(brandId, sort, page, size);
    if (cached != null) {
        return cached;
    }

    Pageable pageable = PageRequest.of(page, size);
    Page<ProductWithBrand> result;
    if (brandId != null) {
        result = getProductsByBrandId(brandId, sort, pageable);
    } else {
        result = getAllProducts(sort, pageable);
    }

    ProductDto.PagedProductResponse response = ProductDto.PagedProductResponse.from(result);
    productCachePort.putProductList(brandId, sort, page, size, response);
    return response;
Validate the page/sort parameters before the cache lookup.

The current implementation passes `page`, `size`, and `sort` straight into the cache key and into `PageRequest` creation. Without validation, a negative `page`, a `size` of zero or less, or an unsupported `sort` string reaches the cache port, creating useless cache entries and adding DB load. `PageRequest.of()` throws a framework exception for invalid values, and unless that is converted into unified error handling, response formats become inconsistent.

Suggested fix: validate the request parameters before the cache lookup and convert failures into `CoreException(ErrorType.BAD_REQUEST)`. Concretely, require `page >= 0`, `1 <= size <= MAX_PAGE_SIZE` (a constant to be defined), and allow only whitelisted `sort` values.

Additional tests: for `page = -1`, `size = 0`, `size < 0`, `size > MAX_PAGE_SIZE`, and an unsupported `sort`, verify that a BAD_REQUEST response is returned and that the cache port is never invoked.
```java
@Table(name = "product", indexes = {
    @Index(name = "idx_product_brand_id", columnList = "brand_id"),
    @Index(name = "idx_product_like_count", columnList = "like_count DESC, id DESC"),
    @Index(name = "idx_product_brand_like_count", columnList = "brand_id, like_count DESC, id DESC"),
    @Index(name = "idx_product_brand_price", columnList = "brand_id, price ASC, id ASC")
})
```
The like_count column cannot be deployed without an explicit DDL script.

At the code level the design is safe (`nullable = false`, atomic UPDATEs, batch drift correction). But production runs with `ddl-auto=none`, so the DDL must be applied beforehand. The repository currently has no migration tool (Flyway/Liquibase) and no SQL files, so either a manual DDL script or a Flyway migration file needs to be added.

The deployment order should be:

- Add a `like_count INT NOT NULL DEFAULT 0` column to the DB
- Backfill existing product rows (if needed, `UPDATE product SET like_count = 0`)
- Create the indexes
- Deploy the application code
- Run the batch synchronization job

It is also recommended to add a schema-validation test that runs under `ddl-auto=none`.
```java
@Override
public void evictProductList() {
    listCache.invalidateAll();
}

private String detailKey(Long productId) {
    return "detail:" + productId;
}

private String listKey(Long brandId, String sort, int page, int size) {
    String brandPart = brandId != null ? String.valueOf(brandId) : "all";
    return "list:brand:" + brandPart + ":sort:" + sort + ":page:" + page + ":size:" + size;
}
```
The L1 list cache also needs generation-based invalidation.

The L1 list key currently carries no version, and `evictProductList()` only does a blanket `invalidateAll()`. In production, every burst of like events wipes unrelated brand/page entries as well, so the Caffeine hit ratio collapses and read amplification hits Redis and the DB. Suggested fix: lift a list generation (version) into the `ProductCachePort` contract so L1 and L2 build keys from the same generation, or synchronize the L1 generation via pub/sub and reserve `invalidateAll()` for product CUD only. Additional test: pre-load lists for several brand/page combinations, repeat like-style invalidations, and verify that no full clear occurs — only entries from older generations become unreachable.
```java
@RestController
@RequiredArgsConstructor
@RequestMapping("/api/v1/products")
public class ProductBenchmarkController {
```
Exposing the benchmark endpoints in the default runtime is a self-DoS path.

This controller sits in main sources under the public `/api/v1/products` prefix, and `/no-optimization` deliberately opens the full-scan + in-memory-sort path. In production, any external caller could reproduce the worst-case path measured in this PR, making it an outage vector. Suggested fix: keep the bean out of the default profile via `@Profile("benchmark")`, a separate module, or strong admin authentication/network isolation. Additional test: verify the mapping returns 404 (or the bean is absent) under the default profile and is active only under the benchmark profile.

Suggested change:

```diff
+import org.springframework.context.annotation.Profile;
+
+@Profile("benchmark")
 @RestController
 @RequiredArgsConstructor
 @RequestMapping("/api/v1/products")
 public class ProductBenchmarkController {
```

Also applies to: 20-45
```java
@DisplayName("put 후 get하면 저장된 값이 반환된다")
@Test
void putAndGet() {
    ProductDto.PagedProductResponse response = new ProductDto.PagedProductResponse(
        List.of(), 0, 0, 0, 20);

    cache.putProductList(null, "latest", 0, 20, response);

    ProductDto.PagedProductResponse cached = cache.getProductList(null, "latest", 0, 20);
    assertThat(cached).isEqualTo(response);
}

@DisplayName("brandId가 다르면 별도 캐시 엔트리이다")
@Test
void differentBrandIdIsSeparateEntry() {
    ProductDto.PagedProductResponse allBrands = new ProductDto.PagedProductResponse(
        List.of(), 100, 5, 0, 20);
    ProductDto.PagedProductResponse brand1 = new ProductDto.PagedProductResponse(
        List.of(), 10, 1, 0, 20);

    cache.putProductList(null, "latest", 0, 20, allBrands);
    cache.putProductList(1L, "latest", 0, 20, brand1);

    assertThat(cache.getProductList(null, "latest", 0, 20).totalElements()).isEqualTo(100);
    assertThat(cache.getProductList(1L, "latest", 0, 20).totalElements()).isEqualTo(10);
}

@DisplayName("evictProductList는 모든 목록 캐시를 무효화한다")
@Test
void evictClearsAllListEntries() {
    cache.putProductList(null, "latest", 0, 20, new ProductDto.PagedProductResponse(
        List.of(), 0, 0, 0, 20));
    cache.putProductList(1L, "likes_desc", 0, 10, new ProductDto.PagedProductResponse(
        List.of(), 0, 0, 0, 10));

    cache.evictProductList();

    assertThat(cache.getProductList(null, "latest", 0, 20)).isNull();
    assertThat(cache.getProductList(1L, "likes_desc", 0, 10)).isNull();
}
```
Boundary tests for list-cache key collisions are missing.

Only brandId separation is verified, so the tests would still pass if `sort`, `page`, or `size` were dropped from key generation. In production, that would let different sorts or pages share one cache entry and return the wrong list. Suggested fix: add a test asserting that, for the same brandId, varying `sort`, `page`, and `size` each produces a separate entry. Also worth covering: null brandId, page 0/1, and size 10/20 combinations. As per coding guidelines, "unit tests should cover boundary values, failure cases, and exception flows."

🔧 Suggested additional test

```java
@DisplayName("sort/page/size가 다르면 별도 캐시 엔트리이다")
@Test
void differentSortPageAndSizeAreSeparateEntries() {
    ProductDto.PagedProductResponse latestPage0 = new ProductDto.PagedProductResponse(
        List.of(), 100, 5, 0, 20);
    ProductDto.PagedProductResponse likesPage1 = new ProductDto.PagedProductResponse(
        List.of(), 100, 10, 1, 10);

    cache.putProductList(1L, "latest", 0, 20, latestPage0);
    cache.putProductList(1L, "likes_desc", 1, 10, likesPage1);

    assertThat(cache.getProductList(1L, "latest", 0, 20)).isEqualTo(latestPage0);
    assertThat(cache.getProductList(1L, "likes_desc", 1, 10)).isEqualTo(likesPage1);
    assertThat(cache.getProductList(1L, "latest", 1, 20)).isNull();
}
```
```java
// 상품 10만 개 (브랜드당 ~1,000개)
List<Product> products = new ArrayList<>();
for (int i = 0; i < productCount; i++) {
    Brand brand = brands.get(i / productPerBrand);
    int price = 1000 + random.nextInt(499_000); // 1,000 ~ 500,000
    Product product = productRepository.save(
        new Product(brand.getId(), "상품" + i, new Price(price), new Stock(random.nextInt(100))));
    products.add(product);

    if ((i + 1) % 10_000 == 0) {
        log.info("상품 {} 개 생성 완료", i + 1);
    }
}

// likeCount 설정 (멱법칙 분포 — 소수 상품이 높은 좋아요)
for (int i = 0; i < productCount; i++) {
    int likes = (int) Math.round(Math.pow(random.nextDouble(), 3) * 10_000);
    if (likes > 0) {
        Product p = products.get(i);
        for (int j = 0; j < likes && j < 50; j++) { // 실제 Like 레코드는 최대 50개만
            try {
                likeRepository.save(new Like((long) (i * 100 + j + 1), p.getId()));
                productRepository.incrementLikeCount(p.getId());
            } catch (Exception ignored) {
            }
        }
    }
}
```
Even for a manually run test, swallowing seeding failures makes the EXPLAIN results hard to trust.

The test loads bulk data through single-row save loops while ignoring exceptions during like creation. Manual performance verification then becomes dominated by seeding cost, and if some Like rows and the likeCount increments drift apart, the analysis proceeds anyway and can reach wrong conclusions. Suggested fix: move seeding into chunked batch loads or a separate SQL seeder, count failures, and fail immediately if there is even one. After seeding, add a consistency check that `countByProductId` matches `product.likeCount` for a sample of products. As per coding guidelines, "integration tests should be checked for isolation level, flakiness risk, and test-data setup/teardown strategy."
```java
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
    log.info("[LikeCountSync] 1단계: likes 테이블 → product_like_stats 동기화 시작");
    int synced = entityManager.createNativeQuery(
        "REPLACE INTO product_like_stats (product_id, like_count, synced_at) "
            + "SELECT l.product_id, COUNT(*), NOW() FROM likes l GROUP BY l.product_id"
    ).executeUpdate();
    log.info("[LikeCountSync] 1단계 완료 — 동기화 행 수: {}", synced);

    log.info("[LikeCountSync] 2단계: product.like_count 드리프트 보정 시작");
    int corrected = entityManager.createNativeQuery(
        "UPDATE product p JOIN product_like_stats pls ON p.id = pls.product_id "
            + "SET p.like_count = pls.like_count "
            + "WHERE p.like_count != pls.like_count AND p.deleted_at IS NULL"
    ).executeUpdate();
    log.info("[LikeCountSync] 2단계 완료 — 보정된 상품 수: {}", corrected);

    return RepeatStatus.FINISHED;
}
```
Bulk processing here risks transaction timeouts and lock contention.

Aggregating the entire likes table (tens of millions of rows) and then running `REPLACE INTO` can hold locks for a long time inside a single transaction. That can contend with concurrent like increment/decrement requests or fail the batch on transaction timeout.

Additionally, products with zero likes have no rows in the likes table, so they never appear in `product_like_stats` and are excluded from correction.

Recommendations:

- Split the work into chunks, or restrict the batch to low-traffic hours.
- Add separate logic if zero-like products must also be handled.
- Add exception logging so failure causes can be diagnosed.
🛠️ Example: adding exception logging

```diff
 @Override
 public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
+    try {
         log.info("[LikeCountSync] 1단계: likes 테이블 → product_like_stats 동기화 시작");
         int synced = entityManager.createNativeQuery(
             "REPLACE INTO product_like_stats (product_id, like_count, synced_at) "
                 + "SELECT l.product_id, COUNT(*), NOW() FROM likes l GROUP BY l.product_id"
         ).executeUpdate();
         log.info("[LikeCountSync] 1단계 완료 — 동기화 행 수: {}", synced);

         log.info("[LikeCountSync] 2단계: product.like_count 드리프트 보정 시작");
         int corrected = entityManager.createNativeQuery(
             "UPDATE product p JOIN product_like_stats pls ON p.id = pls.product_id "
                 + "SET p.like_count = pls.like_count "
                 + "WHERE p.like_count != pls.like_count AND p.deleted_at IS NULL"
         ).executeUpdate();
         log.info("[LikeCountSync] 2단계 완료 — 보정된 상품 수: {}", corrected);

         return RepeatStatus.FINISHED;
+    } catch (Exception e) {
+        log.error("[LikeCountSync] 동기화 실패", e);
+        throw e;
+    }
 }
```
```javascript
export function checkResponse(res, name) {
  check(res, {
    [`${name} status 200`]: (r) => r.status === 200,
    [`${name} has data`]: (r) => {
      const body = JSON.parse(r.body);
      return body.meta && body.meta.result === 'SUCCESS';
    },
  });
}
```
checkResponse is missing JSON parse error handling.

Operationally, a server in an error state or a timed-out request can return malformed JSON. `JSON.parse` then throws, the k6 iteration aborts abnormally, and the test results become skewed.

`product-list-benchmark.js` wraps this in try-catch, but this shared utility does not.

🛡️ Suggested fix: add try-catch

```diff
 export function checkResponse(res, name) {
   check(res, {
     [`${name} status 200`]: (r) => r.status === 200,
     [`${name} has data`]: (r) => {
-      const body = JSON.parse(r.body);
-      return body.meta && body.meta.result === 'SUCCESS';
+      try {
+        const body = JSON.parse(r.body);
+        return body.meta && body.meta.result === 'SUCCESS';
+      } catch (e) {
+        return false;
+      }
     },
   });
 }
```
- Rewrote the blog post around "why we judged it that way" — added a TL;DR, strengthened the AS-IS/TO-BE comparison, tuned the tone so the decision flow reads through - Fixed the Grafana Total Requests query from [15m] to [$__range], resolving the no-cache endpoint showing 0 (now correctly shows 3.10K) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Stated "paginated lookups (20 items/page)" in the TL;DR - Explained in the AS-IS section why "all data must first be loaded into memory" - Noted the "15 randomized request combinations" in the K6 test-environment section Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📌 Summary
In Week 4, `Product.likeCount` was removed in favor of deriving counts via `COUNT(*)`; but beyond roughly 100K products, loading everything + in-memory sort + COUNT aggregation creates a read bottleneck.

🧭 Context & Decision
Problem definition
- Lookup via `COUNT(*)` + GROUP BY; sorting via a Java `Comparator` in memory; full result set returned (no pagination); no cache
- Consistency checked via `likeCount == COUNT(*)`

Choice 1. Like-count aggregation — whether to reintroduce denormalization
Should `Product.likeCount`, removed in Week 4, be reintroduced?
- A. Keep as-is (derive via `COUNT(*)`) — stays normalized, no write contention; but Full Table Scan + filesort at 100K rows, effectively unusable at 10M
- B. Reintroduce the `likeCount` column (atomic SQL) — index-friendly, DB-side sorting possible; one extra UPDATE per write, but `like_count = like_count + 1` holds its lock only for microseconds
product_like_stats+ 배치 Job)으로 드리프트 보정SELECT FOR UPDATE→ 트랜잭션 전체)과는 본질적으로 다르다선택 2. 인덱스 설계
(like_count),(brand_id)등 개별 생성. 복합 조건에서 filesort 발생 가능(brand_id, like_count DESC, id DESC)등 정렬 컬럼까지 포함최종 결정: B — 복합 인덱스 4개
idx_product_like_count(like_count DESC, id DESC)idx_product_brand_like_count(brand_id, like_count DESC, id DESC)idx_product_brand_price(brand_id, price ASC, id ASC)idx_likes_product_id(product_id)선택 3. 캐시 전략
@Cacheable(Spring Cache Abstraction) — 간결하지만 AOP로 캐시 흐름이 감춰짐. TTL 세밀 제어 어려움RedisTemplate직접 사용 — 캐시 흐름이 명확히 보임. 이미 구축된 Master/Replica 토폴로지 활용최종 결정: B — RedisTemplate 직접 사용
redisTemplateMaster)로 쓰기/무효화, Replica(defaultRedisTemplate)로 읽기INCR product:list:version으로 O(1) 무효화.SCAN/KEYS패턴 삭제 회피)선택 4. 캐시 무효화 단위
최종 결정: B — 이벤트 성격에 따라 분리
선택 5. 캐시 아키텍처 — DIP + 멀티 레이어 (L1 Caffeine + L2 Redis)
문제
ProductCacheService가 application 레이어의 concrete class로RedisTemplate,@Qualifier("redisTemplateMaster")에 직결되어 있었다. Repository는 DIP를 잘 지키고 있으나 캐시만 위반하는 비대칭 구조였다.이로 인한 실무 문제:
FakeProductCacheService extends ProductCacheService+super(null, null, null)— 생성자 시그니처가 바뀌면 모든 Fake가 깨짐RedisTemplate을 직접 다루는 코드선택지
최종 결정: C — DIP 기반 멀티 레이어 캐시
설계 구조
인터페이스 위치 판단 —
application에 둔 이유: 캐시는 도메인 규칙("상품을 저장/조회한다")이 아니라 유스케이스 최적화 전략("상품 응답 DTO를 빠르게 재사용한다")이다.ProductRepository가 domain에 있는 것과 관심사의 성격이 다르다.L1/L2 설정 근거
MultiLayerAdapter.evict*에 Pub/Sub 발행만 추가하면 됨성능 검증 — L2-only vs L1+L2 비교 (1000만 건)
L1(Caffeine) 도입 효과를 검증하기 위해, Caffeine의
get메서드를null반환으로 우회하여 L2-only 조건을 만들고 동일 환경에서 비교했다.트레이드오프
🏗️ Design Overview
변경 범위
ProductCachePort,CaffeineProductCacheAdapter,RedisProductCacheAdapter,MultiLayerProductCacheAdapter,ProductLikeStats+ Repository,ProductBenchmarkController,LikeCountSyncJob, K6 스크립트 5개ProductCacheService(인터페이스 + 3개 Adapter로 분리),FakeProductCacheService→FakeProductCachePort,enrichWithLikeCount(), in-memoryComparator정렬주요 컴포넌트 책임
ProductCachePortCaffeineProductCacheAdapterRedisProductCacheAdapterProductCacheService로직 100% 보존 (Master/Replica, 버전 기반 무효화, try-catch)MultiLayerProductCacheAdapter@Primary. L1+L2 Look-Aside 조합. backfill/evict 순서 제어ProductFacadeProductRepository.incrementLikeCount()LikeCountSyncTaskletproduct_like_stats→product.like_count드리프트 보정🔁 Flow Diagram
Look-Aside 멀티 레이어 캐시 흐름
MultiLayerProductCacheAdapter가 L1(Caffeine)과 L2(Redis)를 조합하는 핵심 흐름.flowchart TD subgraph GET["GET — 조회"] G1[Facade.getProductDetailCached] --> G2{L1 Caffeine} G2 -->|HIT| G3[즉시 반환 ⚡ μs] G2 -->|MISS| G4{L2 Redis} G4 -->|HIT| G5[L1에 backfill] --> G3 G4 -->|MISS| G6[null → Facade가 DB 조회 후 PUT] end subgraph PUT["PUT — 저장"] P1[Facade → putProductDetail] --> P2[L2 Redis 먼저 저장] P2 --> P3[L1 Caffeine 저장] P2 -. "L2가 truth source<br/>L2 실패 시 L1만 있는<br/>불일치 방지" .-> P2 end subgraph EVICT["EVICT — 무효화"] E1[CUD/좋아요 변경] --> E2[L1 Caffeine 먼저 삭제] E2 --> E3[L2 Redis 삭제] E2 -. "L1 stale 서빙 시간<br/>최소화" .-> E2 end순서가 다른 이유:
상품 목록 조회 (멀티 레이어 캐시 적용)
sequenceDiagram autonumber participant Client participant Controller participant Facade participant MultiLayer as MultiLayerAdapter participant L1 as L1 Caffeine participant L2 as L2 Redis participant Repository participant DB Client->>Controller: GET /products?sort=likes_desc&page=0 Controller->>Facade: getAllProductsCached(sort, page, size) Facade->>MultiLayer: getProductList(cacheKey) MultiLayer->>L1: get(key) alt L1 HIT L1-->>MultiLayer: PagedProductResponse ⚡ MultiLayer-->>Facade: 캐시 응답 else L1 MISS L1-->>MultiLayer: null MultiLayer->>L2: get(key) alt L2 HIT L2-->>MultiLayer: PagedProductResponse MultiLayer->>L1: backfill(key, data) MultiLayer-->>Facade: 캐시 응답 else L2 MISS L2-->>MultiLayer: null MultiLayer-->>Facade: null Facade->>Repository: findAllWithBrand(pageable, sort) Repository->>DB: SELECT ... ORDER BY like_count DESC LIMIT 20 DB-->>Repository: Page<Product> Repository-->>Facade: Page<ProductWithBrand> Facade->>MultiLayer: putProductList(key, data) MultiLayer->>L2: set(key, json, TTL 5분) MultiLayer->>L1: put(key, object, TTL 15초) end end Facade-->>Controller: PagedProductResponse Controller-->>Client: 200 OK좋아요 등록 + 멀티 레이어 캐시 무효화
sequenceDiagram autonumber participant Client participant LikeController participant LikeFacade participant LikeRepo participant ProductRepo participant MultiLayer as MultiLayerAdapter participant L1 as L1 Caffeine participant L2 as L2 Redis participant DB Client->>LikeController: POST /likes (productId) LikeController->>LikeFacade: addLike(memberId, productId) rect rgb(50, 50, 80) Note over LikeFacade,DB: @Transactional 경계 LikeFacade->>LikeRepo: save(like) LikeRepo->>DB: INSERT INTO likes LikeFacade->>ProductRepo: incrementLikeCount(productId) ProductRepo->>DB: UPDATE product SET like_count = like_count + 1 end LikeController->>MultiLayer: evictProductDetail(productId) MultiLayer->>L1: invalidate(detail:productId) MultiLayer->>L2: DEL product:detail:{id} LikeController->>MultiLayer: evictProductList() MultiLayer->>L1: invalidateAll(listCache) MultiLayer->>L2: INCR product:list:version LikeController-->>Client: 200 OK📊 EXPLAIN Before / After
**Sort by like count (the most expensive query)**
`key = idx_product_like_count` / `rows = 20` / `Using where`

**Brand filter + sort by like count**
`type = ref` / `key = idx_product_brand_like_count` / `rows = 34,704`

### 📊 K6 Load Test: Per-Stage Comparison (10M-row table, 100 RPS, 1 min)
To verify the contribution of each read-optimization stage, four scenarios were benchmarked under identical conditions: a 10M-row table, paginated reads of 20 items per page, and a 100 rps constant-arrival-rate for 1 minute. Requests were randomized across 15 combinations (pages 0–4 × 3 sort orders: likes, price, newest). The MySQL container was restarted between tests to rule out carry-over load effects.
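The 15-combination request mix above can be sketched as follows. This is an illustrative Java sketch, not the actual K6 script; the URL shape and sort parameter values are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Enumerates pages 0-4 crossed with 3 sort orders (= 15 combinations),
// then picks one at random, mirroring what each benchmark iteration does.
public class BenchmarkRequestMix {
    static List<String> combinations() {
        List<String> combos = new ArrayList<>();
        // Sort parameter names are assumptions for illustration.
        for (String sort : new String[]{"likes_desc", "price_asc", "latest"}) {
            for (int page = 0; page <= 4; page++) {
                combos.add("/products?sort=" + sort + "&page=" + page + "&size=20");
            }
        }
        return combos;
    }

    public static void main(String[] args) {
        List<String> combos = combinations();
        System.out.println(combos.size()); // 15
        // Each iteration would issue one randomly chosen request from the mix.
        String pick = combos.get(new Random().nextInt(combos.size()));
        System.out.println(pick.startsWith("/products?sort=")); // true
    }
}
```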
### 📈 Grafana Monitoring (10M rows, 4-stage comparison)
What the dashboard shows:
Without a cache, the server handled only 51 rps (the rest were lost to timeouts); with the cache applied, it sustained a stable 100 rps. On the same server spec, the presence or absence of a cache makes a 23× difference in the traffic that can be served.

### 🏗️ Changed Files
14 modified + 20 new files
**Modified files**
- `Product.java`: `likeCount` field + 3 indexes
- `ProductRepository.java`: `incrementLikeCount`/`decrementLikeCount`, pagination
- `Like.java`: `product_id` index
- `ProductJpaRepository.java`: `@Modifying` increment/decrement queries
- `ProductRepositoryImpl.java`: `toSort`/`toProductWithBrand` updates
- `LikeFacade.java`, `ProductFacade.java`: dependency switched from `ProductCacheService` to `ProductCachePort`; caching/pagination
- `ProductController.java`: `page`/`size` parameters
- `ProductDto.java`: `PagedProductResponse`
- `LikeController.java`: dependency switched from `ProductCacheService` to `ProductCachePort`; cache invalidation
- `build.gradle.kts`
- `FakeProductRepository.java`
- `ProductFacadeTest.java`: `FakeProductCacheService` → `FakeProductCachePort`
- `LikeFacadeTest.java`
- `LikeConcurrencyTest.java`

**New files**
- `ProductCachePort.java`
- `CaffeineProductCacheAdapter.java`
- `RedisProductCacheAdapter.java`
- `MultiLayerProductCacheAdapter.java` (`@Primary`)
- `FakeProductCachePort.java`
- `CaffeineProductCacheAdapterTest.java`
- `MultiLayerProductCacheAdapterTest.java`
- `ProductLikeStats.java`
- `ProductLikeStatsRepository.java`
- `ProductLikeStatsJpaRepository.java`
- `ProductLikeStatsRepositoryImpl.java`
- `ProductBenchmarkController.java`
- `ProductPerformanceTest.java`
- `LikeCountSyncJobConfig.java`
- `LikeCountSyncTasklet.java`
- `k6/common.js`
- `k6/product-list-optimized.js`
- `k6/product-list-no-cache.js`
- `k6/product-list-no-optimization.js`
- `k6/product-detail.js`

**Deleted files**
- `ProductCacheService.java`: replaced by `ProductCachePort` + 3 adapters
- `FakeProductCacheService.java`: replaced by `FakeProductCachePort`

### ✅ Assignment Checklist
#### 🔖 Index
- `idx_product_brand_like_count` `(brand_id, like_count DESC, id DESC)`: brand filter + like-count sort
- `idx_product_brand_price` `(brand_id, price ASC, id ASC)`: brand filter + price sort
- `idx_product_like_count` `(like_count DESC, id DESC)`: all products + like-count sort
- `idx_likes_product_id` `(product_id)`: covering index for counting likes

#### ❤️ Structure
- `Product.likeCount` denormalized column added; sorting moved to DB `ORDER BY like_count DESC`
- In-memory `Comparator` sorting fully removed; pagination applied (`Page<ProductWithBrand>`)
- `product_like_stats` table + commerce-batch `LikeCountSyncJob` simulating a materialized view (nice-to-have)
- Real-time sync via `like_count = like_count + 1` / `like_count - 1`
- Inside the `@Transactional` boundary, so a rollback restores the count automatically
- `Product.likeCount == COUNT(*)` consistency verified

#### ⚡ Cache
- `product:detail:{id}` / TTL 10 min / Cache-Aside
- `product:list:v{ver}:...` / TTL 5 min / version-based invalidation (O(1) via `INCR`)
- `ProductCachePort` interface extracted into the application layer: test Fakes now implement the interface and are immune to constructor changes
- `@Primary` MultiLayerAdapter pattern minimizes call-site changes (only the type and variable name were swapped)

Purpose of the change: eliminate the read bottleneck caused by `COUNT(*)`-based like-count lookups and Java in-memory sorting at the 100K/10M-row scale, replacing them step by step with DB sorting, pagination, and multi-layer caching.
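A minimal sketch of the L1 (local) → L2 (distributed) look-aside flow with backfill described above. Plain `HashMap`s stand in for Caffeine and Redis, TTL handling is omitted, and the class and method names are illustrative, not the PR's actual identifiers.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class MultiLayerLookAside {
    private final Map<String, String> l1 = new HashMap<>(); // stands in for Caffeine
    private final Map<String, String> l2 = new HashMap<>(); // stands in for Redis

    public Optional<String> get(String key) {
        String v = l1.get(key);
        if (v != null) return Optional.of(v);  // L1 hit: fastest path
        v = l2.get(key);
        if (v != null) {
            l1.put(key, v);                    // L2 hit: backfill L1
            return Optional.of(v);
        }
        return Optional.empty();               // full miss: caller loads from the DB
    }

    public void put(String key, String value) { // called after a DB load
        l2.put(key, value);                     // L2 first (longer TTL in the PR)
        l1.put(key, value);
    }

    public static void main(String[] args) {
        MultiLayerLookAside cache = new MultiLayerLookAside();
        cache.l2.put("product:detail:1", "{...}"); // entry written by another node
        System.out.println(cache.get("product:detail:1").isPresent()); // true (L2 hit)
        System.out.println(cache.l1.containsKey("product:detail:1"));  // true (backfilled)
    }
}
```

The L2-first write order means another node can never see an L1-only entry that L2 has already lost.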
Key changes: (1) reintroduce the `Product.likeCount` field (default 0) with atomic increment/decrement SQL on Like creation/deletion (`incrementLikeCount`/`decrementLikeCount`); (2) add four composite indexes (`idx_product_like_count`, `idx_product_brand_like_count`, `idx_product_brand_price`, `idx_likes_product_id`); (3) a DIP multi-layer cache behind the `ProductCachePort` interface (Caffeine L1 + Redis L2, `MultiLayerProductCacheAdapter` marked `@Primary`); (4) on product update/delete, evict the detail entry and invalidate lists via a version `INCR` (`product:list:version`); (5) the `ProductLikeStats` entity and the `LikeCountSyncTasklet` batch (two phases: `syncAllFromLikes` → `correctProductLikeCounts`) plus Redis Master/Replica read splitting (`readTemplate` prefers the replica, `writeTemplate` is master-only).
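The version-based list invalidation of change (4) can be sketched as follows: list cache keys embed a version number, so invalidation is a single O(1) increment rather than a key scan plus bulk delete. A `HashMap` and `AtomicLong` stand in for Redis here; all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class VersionedListCache {
    private final Map<String, String> store = new HashMap<>();
    private final AtomicLong version = new AtomicLong(0); // stands in for product:list:version

    private String key(String sort, int page) {
        return "product:list:v" + version.get() + ":" + sort + ":" + page;
    }

    public void put(String sort, int page, String payload) { store.put(key(sort, page), payload); }
    public String get(String sort, int page) { return store.get(key(sort, page)); }

    // "Evicts" every cached list page at once: old keys become unreachable,
    // and in the real Redis implementation they simply expire via TTL.
    public void evictAll() { version.incrementAndGet(); }
}
```

The trade-off: nothing is deleted eagerly, so memory is reclaimed only by TTL, in exchange for constant-time invalidation regardless of how many list pages are cached.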
Performance verification: EXPLAIN scanned rows fell from 9,955,217 to 20 (roughly a 500,000× reduction); in the 10M-row, 100 RPS load test, P95 latency dropped from 3 s to 8.04 ms (L1+L2) and the error rate from 100% to 0%; the four-stage K6 benchmark scripts (no-optimization / no-cache / L2 / L1+L2) verify the contribution of each stage.
Tests/verification: (1) a concurrency test (100 threads, `LikeConcurrencyTest`) confirms that the number of Like records equals `Product.likeCount`; (2) unit tests for the Caffeine/MultiLayer cache adapters (detail/list caching, L2→L1 backfill, evicting all layers); (3) `ProductPerformanceTest` analyzes EXPLAIN query plans; (4) five K6 load-test scenarios (detail, benchmark, no-cache, no-optimization, optimized).
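The invariant that `LikeConcurrencyTest` checks can be sketched in plain Java: when N concurrent "like" requests each perform an atomic increment (the `like_count = like_count + 1` SQL pattern), the final counter equals the number of like records, with no lost updates. An `AtomicLong` stands in for the database's atomic `UPDATE`; the class name is illustrative.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class LikeCountConsistencySketch {
    public static long run(int threads) {
        AtomicLong likeCount = new AtomicLong(0);
        ExecutorService pool = Executors.newFixedThreadPool(16);
        CountDownLatch done = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                likeCount.incrementAndGet(); // atomic, like the @Modifying UPDATE
                done.countDown();
            });
        }
        try {
            done.await(); // wait until every "like" has been applied
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdown();
        return likeCount.get();
    }

    public static void main(String[] args) {
        System.out.println(run(100)); // 100: no lost updates
    }
}
```

A read-modify-write version (`SELECT` then `UPDATE ... SET like_count = ?`) would lose increments under the same load, which is exactly what the atomic SQL avoids.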
Caveats: ① until the first `LikeCountSyncTasklet` batch run, existing rows keep `likeCount = 0` (verify when the initial batch executes); ② the version-`INCR` list invalidation deliberately avoids cache thrashing under high-frequency like events, but the cache hit rate should still be monitored in production; ③ since the Redis `readTemplate` prefers replicas (`ReadFrom.REPLICA_PREFERRED`), master/replica replication lag can surface stale cache reads, so this freshness risk should be reviewed.
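The replica-preferred read split behind caveat ③ can be configured roughly as follows in Spring Data Redis with Lettuce. This is a configuration sketch under assumptions, not the PR's actual code; hostnames, ports, and the bean name are placeholders.

```java
import io.lettuce.core.ReadFrom;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStaticMasterReplicaConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
class RedisReadFromConfig {
    @Bean
    LettuceConnectionFactory redisConnectionFactory() {
        // Static master/replica topology (placeholder hosts).
        RedisStaticMasterReplicaConfiguration topology =
                new RedisStaticMasterReplicaConfiguration("redis-master", 6379);
        topology.addNode("redis-replica", 6379);

        // Reads go to a replica when one is available; writes always hit the master.
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.REPLICA_PREFERRED)
                .build();
        return new LettuceConnectionFactory(topology, clientConfig);
    }
}
```

With `REPLICA_PREFERRED`, a value written through the master may briefly be invisible on the replica, which is the replication-lag staleness window the caveat asks to monitor.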