📝 Walkthrough

A Redis-based caching layer was introduced for the single-product lookup (getActiveProduct) and product list lookup (getActiveProducts) methods. A new PageResult record standardizes pagination results, and cache-eviction annotations on delete and update operations preserve data consistency.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Controller as Controller<br/>(ProductV1Controller)
    participant Facade as Facade<br/>(ProductFacade)
    participant Redis as Redis<br/>Cache
    participant DB as Database

    rect rgba(200, 150, 100, 0.5)
    Note over Client,DB: Cache miss scenario
    Client->>Controller: GET /api/v1/products?page=0
    Controller->>Facade: getActiveProducts(brandId, sort, pageable)
    Facade->>Redis: Cache lookup (PRODUCTS)
    Redis-->>Facade: Not cached (miss)
    Facade->>DB: Execute query (brandId, sort, pageable)
    DB-->>Facade: Page<ProductInfo>
    Facade->>Facade: Convert via PageResult.from(page)
    Facade->>Redis: Store in cache (PRODUCTS, 10-minute TTL)
    Facade-->>Controller: PageResult<ProductInfo>
    Controller-->>Client: 200 OK + PageResponse
    end

    rect rgba(100, 200, 100, 0.5)
    Note over Client,Redis: Cache hit scenario
    Client->>Controller: GET /api/v1/products?page=0
    Controller->>Facade: getActiveProducts(brandId, sort, pageable)
    Facade->>Redis: Cache lookup (PRODUCTS)
    Redis-->>Facade: PageResult<ProductInfo> (cache hit)
    Facade-->>Controller: PageResult<ProductInfo>
    Controller-->>Client: 200 OK + PageResponse
    Note over Redis,DB: No DB access
    end

    rect rgba(200, 100, 100, 0.5)
    Note over Client,DB: Cache eviction scenario
    Client->>Controller: DELETE /api-admin/v1/products/{id}
    Controller->>Facade: delete(id)
    Facade->>DB: Execute soft delete
    Facade->>Redis: Evict cache (PRODUCT, key=id)
    Facade->>Redis: Evict cache (PRODUCTS, allEntries=true)
    Facade-->>Controller: 200 OK
    Controller-->>Client: Deletion complete
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Operational review notes

- Risky cache-key strategy:
- Overly broad cache-eviction scope: on delete and update, the PRODUCTS cache
- No defined behavior on Redis connection failure: CacheConfig only configures the Redis cache manager, with no fallback logic or error handling. If the Redis server goes down, the application may stop responding. Suggested fix: implement a CacheErrorHandler that falls back to reading the source data directly when a cache operation fails, and add an integration test that simulates Redis being unavailable.
- Inconsistent TTLs: PRODUCT uses a 30-minute TTL while PRODUCTS uses 10 minutes. Because the refresh cycles differ, the PRODUCT cache can be up to date while the PRODUCTS cache returns stale data. Suggested fix: unify the TTLs to match business requirements, or document the rationale for the different TTLs and add monitoring alerts (cache hit rate, re-fetch latency after TTL expiry).

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

🧪 Generate unit tests (beta)
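The CacheErrorHandler fallback raised in the operational review above can be illustrated with a plain-Java sketch. This is not the Spring `CacheErrorHandler` wiring itself — the class and field names below are invented for illustration — but it shows the intended behavior: any cache failure is treated as a miss and the read path degrades to the source-of-truth loader instead of propagating the error.

```java
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: a read-through cache that never lets a cache outage
// break reads. The Map stands in for Redis; the Function stands in for the DB
// query; cacheBroken simulates a Redis outage.
class FallbackReadThroughCache<K, V> {
    private final Map<K, V> cache;
    private final Function<K, V> loader;
    private final boolean cacheBroken;

    FallbackReadThroughCache(Map<K, V> cache, Function<K, V> loader, boolean cacheBroken) {
        this.cache = cache;
        this.loader = loader;
        this.cacheBroken = cacheBroken;
    }

    V get(K key) {
        if (!cacheBroken) {
            try {
                V cached = cache.get(key);
                if (cached != null) return cached;   // cache hit
            } catch (RuntimeException ignored) {
                // cache read error -> treat as a miss and fall through
            }
        }
        V value = loader.apply(key);                 // fallback: source of truth
        if (!cacheBroken) {
            try { cache.put(key, value); } catch (RuntimeException ignored) { }
        }
        return value;
    }
}
```

In the real application the same degradation would be achieved by registering a `CacheErrorHandler` via Spring's `CachingConfigurer`, so that `@Cacheable` reads survive a Redis outage.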
📝 Coding Plan
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Tip: CodeRabbit can generate a title for your PR based on the changes, with custom instructions.
Actionable comments posted: 5
🧹 Nitpick comments (6)
apps/commerce-api/src/test/java/com/loopers/DataInitializer.java (2)
70-83: The final batch may not be flushed.

The `i % 500 == 499` condition flushes only on the 500th, 1000th, etc. iteration. If `count` is not a multiple of 500, the last batch could be left unflushed (the current values such as 2,500 and 10,000 happen to be multiples of 500), so a future change to `productCount` could turn this into a bug.

♻️ Add an explicit flush after the loop

```diff
 void insertProducts(Long brandId, int count) {
     transactionTemplate.executeWithoutResult(status -> {
         for (int i = 0; i < count; i++) {
             int price = ThreadLocalRandom.current().nextInt(1_000, 100_001);
             entityManager.persist(Product.create(brandId,
                     "상품_" + UUID.randomUUID().toString().substring(0, 8),
                     null, price, 100));
             if (i % 500 == 499) {
                 entityManager.flush();
                 entityManager.clear();
             }
         }
+        entityManager.flush();
+        entityManager.clear();
     });
 }
```

Since the transaction commit flushes automatically, current behavior is unaffected, but the explicit call makes the intent of the code clear.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/DataInitializer.java` around lines 70 - 83, The batch flush condition in insertProducts currently only flushes when i % 500 == 499, so the final partial batch may not be flushed; after the for-loop inside transactionTemplate.executeWithoutResult in insertProducts, add an explicit entityManager.flush() and entityManager.clear() to ensure any remaining persisted entities are flushed/cleared before transaction ends (keep the existing periodic flush/clear logic intact).
56-68: Calling `flush()` for every entity in `saveBrands()` is inefficient.

Calling `flush()` on each of the 50 brands causes 50 round-trips to the DB. It is more efficient to flush in batches as `insertProducts()` does, or, if the goal is only to obtain IDs, to flush once at the end of the transaction.

♻️ Suggested batch-flush change

```diff
 List<Long> saveBrands(int count) {
     return transactionTemplate.execute(status -> {
         List<Long> ids = new ArrayList<>();
         for (int i = 1; i <= count; i++) {
             Brand brand = Brand.create("브랜드-" + i, null);
             entityManager.persist(brand);
-            entityManager.flush();
             ids.add(brand.getId());
         }
+        entityManager.flush();
         return ids;
     });
 }
```

Note: since the `GenerationType.IDENTITY` strategy is used, IDs are assigned at persist time, so they are still obtainable even when flushing only before the transaction ends.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/DataInitializer.java` around lines 56 - 68, The saveBrands method is flushing the EntityManager on every iteration which causes N DB round-trips; in transactionTemplate.execute (method saveBrands) remove the per-entity entityManager.flush() inside the loop and instead call entityManager.flush() once after the loop (or every N items for batching, e.g., if you want batch sizes similar to insertProducts) so persistence remains within the transaction and IDs (GenerationType.IDENTITY) are still obtainable after persist; keep entityManager.persist(brand) inside the loop and only flush/clear in batch boundaries or at the end to improve performance.

apps/commerce-api/src/main/java/com/loopers/application/PageResult.java (1)
8-14: `PageResult`'s `items` is open to external mutation.

Although wrapped in a record, the `List<T>` is stored as-is, so modifying the original collection can leak directly into the response payload or the cached value. In production, this kind of state corruption is hard to reproduce and expensive to debug. It is safer to enforce a defensive copy with `List.copyOf(items)` in the compact constructor, and to add a test asserting that `items()` does not change when the original list is modified after construction.

Suggested change

```diff
 public record PageResult<T>(
         List<T> items,
         int page,
         int size,
         long totalElements,
         int totalPages
 ) {
+    public PageResult {
+        items = List.copyOf(items);
+    }
+
     public static <T> PageResult<T> from(Page<T> page) {
         return new PageResult<>(
                 page.getContent(),
                 page.getNumber(),
                 page.getSize(),
```

As per coding guidelines: `**/*.java`: check null handling, defensive copies, immutability, and the soundness of equals/hashCode/toString implementations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/PageResult.java` around lines 8 - 14, The PageResult<T> record currently stores the mutable List<T> passed into its items component, exposing it to external mutation; modify the PageResult compact constructor to perform a defensive copy via List.copyOf(items) (assigning the copied list back to the record component), and add a unit test that mutates the original source list after constructing PageResult and asserts that pageResult.items() is unchanged; target the PageResult record and its compact constructor for this change and add a test verifying immutability of items().

apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductV1ApiE2ETest.java (1)
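The defensive-copy suggestion can be demonstrated with a runnable, trimmed stand-in for the PR's PageResult (the Page-based factory is omitted here): `List.copyOf` in the compact constructor freezes the items, so later mutation of the source list cannot leak into the record.

```java
import java.util.List;

// Minimal sketch of the suggested fix; component names mirror the PR's
// PageResult, but this is a standalone illustration, not the actual class.
record PageResult<T>(List<T> items, int page, int size, long totalElements, int totalPages) {
    PageResult {
        items = List.copyOf(items); // defensive, immutable copy
    }
}
```

With this in place, adding elements to the original list after construction leaves `items()` unchanged, which is exactly the unit test the review asks for.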
57-60: With the current configuration parallel execution is disabled, so the flaky-test risk is low, but narrowing the scope of `flushAll()` is recommended.

Because `maxParallelForks = 1` in build.gradle.kts runs tests sequentially, the originally feared Redis key-deletion conflict between parallel tests cannot occur. However, `flushAll()` unconditionally empties the entire Redis database, so if parallel execution is ever enabled, cross-test interference would resurface.

To raise test isolation, consider one of the following: (1) a separate Redis database index per test, (2) cleaning only keys under a test-specific prefix, or (3) tracking and deleting only the keys created by each test. The current setup works as long as sequential execution is guaranteed, but from a code standpoint more explicit isolation is needed.
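Option (2) above — prefix-scoped cleanup — can be sketched without a Redis connection. The real helper would drive Redis SCAN/DEL via RedisTemplate; here a plain key set stands in for the keyspace, and the class and method names are illustrative, not from the PR.

```java
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch: delete only keys under a test-specific prefix instead
// of FLUSHALL. Against Redis this would be a SCAN with MATCH prefix* followed
// by DEL on the returned keys.
class PrefixCleanUp {
    static Set<String> deleteKeysByPrefix(Set<String> keyspace, String prefix) {
        Set<String> toDelete = keyspace.stream()
                .filter(k -> k.startsWith(prefix))
                .collect(Collectors.toSet());
        keyspace.removeAll(toDelete); // in Redis: DEL on the scanned keys
        return toDelete;
    }
}
```

Using SCAN rather than KEYS matters in the real implementation, since KEYS blocks the Redis event loop on large keyspaces.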
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductV1ApiE2ETest.java` around lines 57 - 60, The tearDown currently calls redisCleanUp.truncateAll() (which likely issues a Redis FLUSHALL) and should be narrowed to avoid wiping unrelated data; update the cleanup strategy used by the tearDown method to either use a dedicated Redis DB index and call FLUSHDB for that index, or only delete keys with a test-specific prefix, or track and delete keys created during the test. Concretely, replace or overload redisCleanUp.truncateAll() with a method such as redisCleanUp.flushDbIndex(int testDbIndex) or redisCleanUp.deleteKeysByPrefix(String prefix) (or a tracked-key cleanup like redisCleanUp.deleteTrackedKeys()) and call that from ProductV1ApiE2ETest.tearDown to ensure isolation without using FLUSHALL.

apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductCacheE2ETest.java (2)
163-167: The list-cache verification covers only the default query and can miss cache-key collision risks.

Currently only the default `/api/v1/products` request is verified. If page/sort/filter parameters were missing from the cache key, different queries could reuse the same cache entry and expose the wrong list. It would be better to add scenarios that verify cache separation per query combination. Additional tests should make consecutive calls with `page=0,size=1` vs `page=1,size=1`, and with and without the `brandId` filter, verifying both response separation and invalidation behavior after deletion.

As per coding guidelines: `**/*Test*.java`: check that unit tests cover boundary values, failure cases, and exception flows.

Also applies to: 175-179, 200-204, 216-220
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductCacheE2ETest.java` around lines 163 - 167, The test only calls PRODUCT_ENDPOINT with default query parameters via testRestTemplate.exchange and therefore misses cache-key collisions caused by pagination/sort/filter parameters; update ProductCacheE2ETest to add scenarios that call PRODUCT_ENDPOINT with distinct query combinations (e.g., "?page=0&size=1" and "?page=1&size=1", and calls with and without "brandId" filter) using the same testRestTemplate.exchange flow, assert that responses differ as expected, then perform a delete/invalidate operation and re-call each query to assert cache invalidation/refresh; focus changes around the existing testRestTemplate.exchange usages and add assertions comparing responses and after-delete re-fetches to verify proper cache separation and invalidation.
90-94: Intermediate responses of the warm-up/delete calls are not asserted, delaying root-cause analysis of failures.

Discarding the results of multiple `exchange` calls makes it hard to tell whether a failure is a cache problem or an API-contract problem, which lengthens incident analysis in production. It is better to assert the status code immediately on each warm-up/delete call and only then assert cache behavior. Additional tests should include the failure flow where the delete API returns 401/403 when the admin header is missing or invalid.

Suggested change

```diff
-        testRestTemplate.exchange(
+        ResponseEntity<ApiResponse<ProductV1Dto.ProductResponse>> warmup =
+                testRestTemplate.exchange(
                 PRODUCT_ENDPOINT + "/" + saved.getId(),
                 HttpMethod.GET,
                 null,
                 new ParameterizedTypeReference<ApiResponse<ProductV1Dto.ProductResponse>>() {}
         );
+        assertThat(warmup.getStatusCode()).isEqualTo(HttpStatus.OK);
@@
-        testRestTemplate.exchange(
+        ResponseEntity<ApiResponse<Void>> deleteResponse = testRestTemplate.exchange(
                 ADMIN_PRODUCT_ENDPOINT + "/" + saved.getId(),
                 HttpMethod.DELETE,
                 new HttpEntity<>(adminHeaders()),
                 new ParameterizedTypeReference<ApiResponse<Void>>() {}
         );
+        assertThat(deleteResponse.getStatusCode().is2xxSuccessful()).isTrue();
```

Also applies to: 124-128, 131-136, 163-167, 200-204, 207-212
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductCacheE2ETest.java` around lines 90 - 94, The test currently discards responses from multiple testRestTemplate.exchange calls (e.g., calls against PRODUCT_ENDPOINT + "/" + saved.getId() returning ApiResponse<ProductV1Dto.ProductResponse>), which obscures whether failures are from HTTP contract vs cache behavior; update each exchange invocation in ProductCacheE2ETest so you capture the ResponseEntity and immediately assert the HTTP status (e.g., 200/204/401/403 as appropriate) before performing any cache-related assertions, split warmup/delete flows into separate assertions (warmup status assertion then cache state assertion; delete status assertion then cache state assertion), and add focused negative tests that call the delete API without the admin header to assert 401/403 responses so failures are traceable to auth vs caching.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`:
- Around line 42-49: getActiveProducts is cached under CacheConfig.PRODUCTS but
register() does not evict that cache, so newly registered products can be hidden
until cache TTL expires; add `@CacheEvict`(cacheNames = CacheConfig.PRODUCTS,
allEntries = true) to the register(...) method in ProductFacade (matching
delete() and update()), and add an integration test that: 1) calls
getActiveProducts to prime the cache, 2) calls register(...) to create a new
product, and 3) calls getActiveProducts again asserting the new product is
present to verify the cache was invalidated.
In `@apps/commerce-api/src/main/java/com/loopers/config/CacheConfig.java`:
- Around line 54-57: In CacheConfig (RedisCacheManager.builder usage), make the
RedisCacheManager transaction-aware so evictions occur after commit: update the
builder call that constructs the manager (the call that
.withCacheConfiguration(PRODUCT, productConfig).withCacheConfiguration(PRODUCTS,
productsConfig).build()) to include .transactionAware() before build(); if that
is not feasible in your environment, wrap the produced CacheManager in a
TransactionAwareCacheManagerProxy to ensure `@CacheEvict` happens post-commit.
After this change add an integration test that targets the PRODUCT and PRODUCTS
caches: run two parallel threads where one thread performs update/delete (inside
a transaction) and the other performs concurrent reads, then assert no stale
entries remain (cache entries are evicted only after commit).
In `@apps/commerce-api/src/test/java/com/loopers/DataInitializer.java`:
- Around line 27-31: The overrideDataSource method currently hardcodes DB
credentials; change it to read JDBC URL, username, and password from environment
variables or system properties (e.g., System.getenv("TEST_DB_URL") /
System.getProperty("test.db.url")) with sensible defaults or fallbacks, and
register those values into the DynamicPropertyRegistry inside
overrideDataSource; alternatively make overrideDataSource load from a test
profile (application-local.yml) by referencing spring test properties so
credentials aren't stored in code (update references in overrideDataSource and
any tests that rely on it).
- Around line 39-40: Mark this performance-data seeding test so it won't run in
CI by adding the JUnit disable annotation to DataInitializer (either on the
class or the initialize() method); import org.junit.jupiter.api.Disabled and
annotate the DataInitializer class or the initialize() method with `@Disabled` and
include a short comment explaining it's a manual/perf-only seeder, then verify
CI excludes it from test runs.
In
`@apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductCacheE2ETest.java`:
- Around line 58-62: Add a `@BeforeEach` setUp that performs the same initial
cleanup as tearDown (call databaseCleanUp.truncateAllTables() and
redisCleanUp.truncateAll()) to guarantee isolation before the first test; change
the existing tearDown method so Redis cleanup is executed inside a try/finally
(perform databaseCleanUp.truncateAllTables() in try and always call
redisCleanUp.truncateAll() in finally) to ensure Redis is cleared even if
truncation throws, and add an early assertion at the start of each test (or in
setUp) to confirm repository state (e.g., assert productRepository.count() == 0
or equivalent) so isolation is verifiable.
---
Nitpick comments:
In `@apps/commerce-api/src/main/java/com/loopers/application/PageResult.java`:
- Around line 8-14: The PageResult<T> record currently stores the mutable
List<T> passed into its items component, exposing it to external mutation;
modify the PageResult compact constructor to perform a defensive copy via
List.copyOf(items) (assigning the copied list back to the record component), and
add a unit test that mutates the original source list after constructing
PageResult and asserts that pageResult.items() is unchanged; target the
PageResult record and its compact constructor for this change and add a test
verifying immutability of items().
In `@apps/commerce-api/src/test/java/com/loopers/DataInitializer.java`:
- Around line 70-83: The batch flush condition in insertProducts currently only
flushes when i % 500 == 499, so the final partial batch may not be flushed;
after the for-loop inside transactionTemplate.executeWithoutResult in
insertProducts, add an explicit entityManager.flush() and entityManager.clear()
to ensure any remaining persisted entities are flushed/cleared before
transaction ends (keep the existing periodic flush/clear logic intact).
- Around line 56-68: The saveBrands method is flushing the EntityManager on
every iteration which causes N DB round-trips; in transactionTemplate.execute
(method saveBrands) remove the per-entity entityManager.flush() inside the loop
and instead call entityManager.flush() once after the loop (or every N items for
batching, e.g., if you want batch sizes similar to insertProducts) so
persistence remains within the transaction and IDs (GenerationType.IDENTITY) are
still obtainable after persist; keep entityManager.persist(brand) inside the
loop and only flush/clear in batch boundaries or at the end to improve
performance.
In
`@apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductCacheE2ETest.java`:
- Around line 163-167: The test only calls PRODUCT_ENDPOINT with default query
parameters via testRestTemplate.exchange and therefore misses cache-key
collisions caused by pagination/sort/filter parameters; update
ProductCacheE2ETest to add scenarios that call PRODUCT_ENDPOINT with distinct
query combinations (e.g., "?page=0&size=1" and "?page=1&size=1", and calls with
and without "brandId" filter) using the same testRestTemplate.exchange flow,
assert that responses differ as expected, then perform a delete/invalidate
operation and re-call each query to assert cache invalidation/refresh; focus
changes around the existing testRestTemplate.exchange usages and add assertions
comparing responses and after-delete re-fetches to verify proper cache
separation and invalidation.
- Around line 90-94: The test currently discards responses from multiple
testRestTemplate.exchange calls (e.g., calls against PRODUCT_ENDPOINT + "/" +
saved.getId() returning ApiResponse<ProductV1Dto.ProductResponse>), which
obscures whether failures are from HTTP contract vs cache behavior; update each
exchange invocation in ProductCacheE2ETest so you capture the ResponseEntity and
immediately assert the HTTP status (e.g., 200/204/401/403 as appropriate) before
performing any cache-related assertions, split warmup/delete flows into separate
assertions (warmup status assertion then cache state assertion; delete status
assertion then cache state assertion), and add focused negative tests that call
the delete API without the admin header to assert 401/403 responses so failures
are traceable to auth vs caching.
In
`@apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductV1ApiE2ETest.java`:
- Around line 57-60: The tearDown currently calls redisCleanUp.truncateAll()
(which likely issues a Redis FLUSHALL) and should be narrowed to avoid wiping
unrelated data; update the cleanup strategy used by the tearDown method to
either use a dedicated Redis DB index and call FLUSHDB for that index, or only
delete keys with a test-specific prefix, or track and delete keys created during
the test. Concretely, replace or overload redisCleanUp.truncateAll() with a
method such as redisCleanUp.flushDbIndex(int testDbIndex) or
redisCleanUp.deleteKeysByPrefix(String prefix) (or a tracked-key cleanup like
redisCleanUp.deleteTrackedKeys()) and call that from
ProductV1ApiE2ETest.tearDown to ensure isolation without using FLUSHALL.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c2b29fc1-42e1-43eb-98ca-617356004974
⛔ Files ignored due to path filters (1)
docs/data-initializer.md is excluded by !**/*.md and included by **
📒 Files selected for processing (10)
- apps/commerce-api/src/main/java/com/loopers/application/PageResult.java
- apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
- apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java
- apps/commerce-api/src/main/java/com/loopers/config/CacheConfig.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/PageResponse.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductV1Controller.java
- apps/commerce-api/src/test/java/com/loopers/DataInitializer.java
- apps/commerce-api/src/test/java/com/loopers/application/product/ProductFacadeTest.java
- apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductCacheE2ETest.java
- apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductV1ApiE2ETest.java
```diff
 @Cacheable(cacheNames = CacheConfig.PRODUCTS, key = "#brandId + ':' + #sort + ':' + #pageable.pageNumber + ':' + #pageable.pageSize")
 public PageResult<ProductInfo> getActiveProducts(Long brandId, ProductSort sort, Pageable pageable) {
     Page<ProductInfo> products = productService.getActiveProducts(brandId, sort, pageable);
     Set<Long> brandIds = products.stream().map(p -> p.brand().id()).collect(Collectors.toSet());
     Map<Long, String> brandNameMap = brandService.getBrandNameMap(brandIds);
-    return products.map(p -> p.withBrand(
+    return PageResult.from(products.map(p -> p.withBrand(
         new ProductInfo.BrandSummary(p.brand().id(), brandNameMap.getOrDefault(p.brand().id(), null))
-    ));
+    )));
```
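For reference, the SpEL key on the cached method above concatenates four parameters. A plain-Java equivalent shows the shape of the resulting keys and why dropping any one parameter would make distinct queries collide on the same cache entry (the `LATEST` sort value below is just an assumed example, not necessarily a real enum constant in the PR):

```java
// Plain-Java rendering of the SpEL cache key
// "#brandId + ':' + #sort + ':' + #pageable.pageNumber + ':' + #pageable.pageSize".
class CacheKeyDemo {
    static String productsKey(Long brandId, String sort, int pageNumber, int pageSize) {
        // String concatenation mirrors SpEL's behavior (a null brandId becomes "null")
        return brandId + ":" + sort + ":" + pageNumber + ":" + pageSize;
    }
}
```

Because every query dimension appears in the key, `page=0` and `page=1` (or different brands) map to different cache entries, which is what the E2E tests for key separation should assert.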
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Locate ProductFacade.java
find . -name "ProductFacade.java" -type f
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 169

🏁 Script executed:

```shell
#!/bin/bash
# Locate CacheConfig.java
find . -name "CacheConfig.java" -type f
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 154

🏁 Script executed:

```shell
#!/bin/bash
# Locate ProductService.java
find . -name "ProductService.java" -type f
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 170

🏁 Script executed:

```shell
cat -n ./apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 3123

🏁 Script executed:

```shell
cat -n ./apps/commerce-api/src/main/java/com/loopers/config/CacheConfig.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 3372

🏁 Script executed:

```shell
cat -n ./apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 7213

🏁 Script executed:

```shell
find . -path "*/test*" -name "*ProductFacade*Test*.java" -o -path "*/test*" -name "*Product*Integration*Test*.java" | grep -i test
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 173

🏁 Script executed:

```shell
find . -path "*/test*" -type f -name "*.java" | grep -i product | head -20
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 696

🏁 Script executed:

```shell
cat -n ./apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductCacheE2ETest.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 11092

🏁 Script executed:

```shell
wc -l ./apps/commerce-api/src/test/java/com/loopers/application/product/ProductFacadeTest.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 177

🏁 Script executed:

```shell
cat -n ./apps/commerce-api/src/test/java/com/loopers/application/product/ProductFacadeTest.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 7796
The list cache must be evicted after registering a new product.

The `register()` path currently does not invalidate the `PRODUCTS` cache, so an already-cached list omits the new product for up to 10 minutes. Operationally, a newly registered product should be visible to users immediately after an admin creates it. Consistent with `delete()` and `update()` in the same class, which already use `@CacheEvict(cacheNames = CacheConfig.PRODUCTS, allEntries = true)`, please add the same annotation to `register()`, and add an integration test that first primes the list cache, then registers a product, and verifies that the next query immediately includes the new product.

Suggested change

```diff
+@CacheEvict(cacheNames = CacheConfig.PRODUCTS, allEntries = true)
 @Transactional
 public ProductInfo register(ProductCreateCommand command) {
     brandService.getBrand(command.brandId());
     return productService.register(command);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`
around lines 42 - 49, getActiveProducts is cached under CacheConfig.PRODUCTS but
register() does not evict that cache, so newly registered products can be hidden
until cache TTL expires; add `@CacheEvict`(cacheNames = CacheConfig.PRODUCTS,
allEntries = true) to the register(...) method in ProductFacade (matching
delete() and update()), and add an integration test that: 1) calls
getActiveProducts to prime the cache, 2) calls register(...) to create a new
product, and 3) calls getActiveProducts again asserting the new product is
present to verify the cache was invalidated.
```java
return RedisCacheManager.builder(connectionFactory)
        .withCacheConfiguration(PRODUCT, productConfig)
        .withCacheConfiguration(PRODUCTS, productsConfig)
        .build();
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
rg -n -C2 'TransactionAwareCacheManagerProxy|transactionAware\s*\(' --type java
rg -n -C2 '@EnableCaching|@EnableTransactionManagement|order\s*=' --type java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67

🏁 Script executed:

```shell
cat -n apps/commerce-api/src/main/java/com/loopers/config/CacheConfig.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 3372

🏁 Script executed:

```shell
rg -n 'evictProductListAfterCommit|@EventListener|TransactionalEventListener' --type java -A 5
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67

🏁 Script executed:

```shell
rg -n 'spring-data-redis' pom.xml build.gradle.kts
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 132

🏁 Script executed:

```shell
rg -n 'class ProductService' --type java -A 30 | head -60
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 7480

🏁 Script executed:

```shell
wc -l apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 172

🏁 Script executed:

```shell
cat apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 6093

🏁 Script executed:

```shell
find . -name build.gradle.kts -o -name build.gradle | head -5
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 252

🏁 Script executed:

```shell
rg -n 'spring-data-redis' --type gradle --type kotlin
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67

🏁 Script executed:

```shell
rg -n '@Cacheable|@CacheEvict|@CachePut' --type java -B 2 -A 3
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 5069

🏁 Script executed:

```shell
cat apps/commerce-api/build.gradle.kts | grep -A 50 "dependencies"
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1077

🏁 Script executed:

```shell
cat apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductCacheE2ETest.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 9461

🏁 Script executed:

```shell
cat modules/redis/build.gradle.kts | grep -i "spring-data-redis"
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67

🏁 Script executed:

```shell
cat build.gradle.kts | grep -i "spring"
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 957

🏁 Script executed:

```shell
rg -n 'spring.version|springBootVersion|redisVersion' build.gradle.kts
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67

🏁 Script executed:

```shell
grep -r "springBootVersion\|spring-data-redis\|redisVersion" . --include="*.kts" --include="*.gradle" --include="*.properties" 2>/dev/null | head -20
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 294

🏁 Script executed:

```shell
cat gradle.properties
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 621

🏁 Script executed:

```shell
rg -n 'transactionAware\(\)' --type kotlin --type java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67

🏁 Script executed:

```shell
rg -n '@EventListener.*afterCommit|TransactionalEventListener' --type java -B 2 -A 5
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67

🏁 Script executed:

```shell
cat modules/redis/build.gradle.kts
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 291

🏁 Script executed:

```shell
rg -n 'class ProductFacade' --type java -A 70 | head -100
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 13033
The RedisCacheManager needs a transaction-aware configuration.

Currently `@CacheEvict` removes cache entries before the transaction commits, so a concurrent read arriving in the window while the commit is in progress can repopulate the evicted entry. Because the TTLs for PRODUCT (30 minutes) and PRODUCTS (10 minutes) are long, such stale data would persist for a long time. Add `.transactionAware()` to the `RedisCacheManager.builder()` chain so that eviction fires only after the transaction commits, or, if that is not possible, wrap the manager in a `TransactionAwareCacheManagerProxy` to provide the same guarantee.

Afterwards, please also add an integration test verifying that no stale cache entry remains in a scenario where two parallel threads read concurrently during an update/delete.

Suggested change

```diff
 return RedisCacheManager.builder(connectionFactory)
+        .transactionAware()
         .withCacheConfiguration(PRODUCT, productConfig)
         .withCacheConfiguration(PRODUCTS, productsConfig)
         .build();
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```java
return RedisCacheManager.builder(connectionFactory)
        .transactionAware()
        .withCacheConfiguration(PRODUCT, productConfig)
        .withCacheConfiguration(PRODUCTS, productsConfig)
        .build();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/commerce-api/src/main/java/com/loopers/config/CacheConfig.java` around
lines 54 - 57, In CacheConfig (RedisCacheManager.builder usage), make the
RedisCacheManager transaction-aware so evictions occur after commit: update the
builder call that constructs the manager (the call that
.withCacheConfiguration(PRODUCT, productConfig).withCacheConfiguration(PRODUCTS,
productsConfig).build()) to include .transactionAware() before build(); if that
is not feasible in your environment, wrap the produced CacheManager in a
TransactionAwareCacheManagerProxy to ensure `@CacheEvict` happens post-commit.
After this change add an integration test that targets the PRODUCT and PRODUCTS
caches: run two parallel threads where one thread performs update/delete (inside
a transaction) and the other performs concurrent reads, then assert no stale
entries remain (cache entries are evicted only after commit).
```java
static void overrideDataSource(DynamicPropertyRegistry registry) {
    registry.add("datasource.mysql-jpa.main.jdbc-url", () -> "jdbc:mysql://localhost:3306/loopers");
    registry.add("datasource.mysql-jpa.main.username", () -> "application");
    registry.add("datasource.mysql-jpa.main.password", () -> "application");
}
```
Hardcoded DB credentials cause security and environment-portability problems.

The `localhost:3306` endpoint and the `application`/`application` credentials are exposed directly in the code. Even in test code they remain in the Git history, and developers with different local setups must edit them every time.

🛠️ Suggested fix using environment variables or system properties

```diff
 @DynamicPropertySource
 static void overrideDataSource(DynamicPropertyRegistry registry) {
-    registry.add("datasource.mysql-jpa.main.jdbc-url", () -> "jdbc:mysql://localhost:3306/loopers");
-    registry.add("datasource.mysql-jpa.main.username", () -> "application");
-    registry.add("datasource.mysql-jpa.main.password", () -> "application");
+    registry.add("datasource.mysql-jpa.main.jdbc-url",
+            () -> System.getenv().getOrDefault("TEST_DB_URL", "jdbc:mysql://localhost:3306/loopers"));
+    registry.add("datasource.mysql-jpa.main.username",
+            () -> System.getenv().getOrDefault("TEST_DB_USERNAME", "application"));
+    registry.add("datasource.mysql-jpa.main.password",
+            () -> System.getenv().getOrDefault("TEST_DB_PASSWORD", "application"));
 }
```

Alternatively, a test-only profile configuration in application-local.yml could be referenced instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/commerce-api/src/test/java/com/loopers/DataInitializer.java` around
lines 27 - 31, The overrideDataSource method currently hardcodes DB credentials;
change it to read JDBC URL, username, and password from environment variables or
system properties (e.g., System.getenv("TEST_DB_URL") /
System.getProperty("test.db.url")) with sensible defaults or fallbacks, and
register those values into the DynamicPropertyRegistry inside
overrideDataSource; alternatively make overrideDataSource load from a test
profile (application-local.yml) by referencing spring test properties so
credentials aren't stored in code (update references in overrideDataSource and
any tests that rely on it).
```java
@Test
void initialize() {
```
A missing @Disabled annotation risks unintended data loading whenever the test suite runs.
This class is a data-seeding utility for performance tests, but it carries only @Test with no @Disabled, so it runs automatically whenever the full suite executes via mvn test or the CI pipeline. Roughly 200k product rows could be loaded unexpectedly, affecting other test results or polluting the local DB state.
🛠️ Suggested fix
```diff
 import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.Disabled;
 import org.springframework.beans.factory.annotation.Autowired;

+@Disabled("For performance-test data seeding only - run manually")
 @Test
 void initialize() {
```

Additional verification: confirm that the CI pipeline does not execute this test.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```java
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;

@Disabled("For performance-test data seeding only - run manually")
@Test
void initialize() {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/commerce-api/src/test/java/com/loopers/DataInitializer.java` around
lines 39 - 40, Mark this performance-data seeding test so it won't run in CI by
adding the JUnit disable annotation to DataInitializer (either on the class or
the initialize() method); import org.junit.jupiter.api.Disabled and annotate the
DataInitializer class or the initialize() method with `@Disabled` and include a
short comment explaining it's a manual/perf-only seeder, then verify CI excludes
it from test runs.
```java
@AfterEach
void tearDown() {
    databaseCleanUp.truncateAllTables();
    redisCleanUp.truncateAll();
}
```
Test isolation is not guaranteed for the first run
@AfterEach alone cannot clear state before the first test starts, so results may vary depending on whether an initial data loader has run. Operationally, this leads to CI flakiness and hard-to-reproduce issues. Add the same cleanup to a @BeforeEach, and restructure the teardown with try/finally so the Redis cleanup always runs. As an additional check, assert the repository state right after each test starts (e.g., zero products) to verify isolation.
Example fix
```diff
 import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.DisplayName;
 import org.junit.jupiter.api.Nested;
 import org.junit.jupiter.api.Test;
@@
+    @BeforeEach
+    void setUp() {
+        resetState();
+    }
+
     @AfterEach
     void tearDown() {
-        databaseCleanUp.truncateAllTables();
-        redisCleanUp.truncateAll();
+        resetState();
+    }
+
+    private void resetState() {
+        try {
+            databaseCleanUp.truncateAllTables();
+        } finally {
+            redisCleanUp.truncateAll();
+        }
     }
```

As per coding guidelines **/*Test*.java: integration tests should be reviewed for isolation level, flakiness risk, and test-data setup/cleanup strategy.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductCacheE2ETest.java`
around lines 58 - 62, Add a `@BeforeEach` setUp that performs the same initial
cleanup as tearDown (call databaseCleanUp.truncateAllTables() and
redisCleanUp.truncateAll()) to guarantee isolation before the first test; change
the existing tearDown method so Redis cleanup is executed inside a try/finally
(perform databaseCleanUp.truncateAllTables() in try and always call
redisCleanUp.truncateAll() in finally) to ensure Redis is cleared even if
truncation throws, and add an early assertion at the start of each test (or in
setUp) to confirm repository state (e.g., assert productRepository.count() == 0
or equivalent) so isolation is verifiable.
@coderabbitai Check whether the implementation satisfies the checklist below.
🔖 Index
❤️ Structure
⚡ Cache
🔖 Index

✅ The product list API handles brandId-based filtering and likes-descending sorting

```java
// brandId filter
if (brandId != null) {
    builder.and(product.brandId.eq(brandId));
}

// LIKES_DESC sort
case LIKES_DESC -> product.likeCount.desc();
```
| Cache | TTL | Invalidation trigger |
|---|---|---|
| `product` (detail) | 30 min | product delete (`ProductFacade.delete`), product update (`ProductService.update`) |
| `products` (list) | 10 min | product delete (`ProductFacade.delete`, `allEntries=true`), product update (`ProductService.update`, `allEntries=true`) |

However, like/unlike does not evict the cache, so the cached likeCount can diverge from the actual value for up to the TTL (10 minutes). The PR description marks this as a deliberate trade-off, but it should be revisited against the data-consistency expectations of the production environment.
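For reference, the per-cache TTLs above, together with the explicitly-typed serializer this PR adopts, would typically be wired in `CacheConfig` along these lines. This is a minimal sketch assuming Spring Data Redis 3.x; the cache-name constants and the `ProductDetailDto`/`ProductListDto` value types are placeholders, not the PR's actual classes:

```java
import java.time.Duration;
import java.util.List;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;

import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext.SerializationPair;

public class CacheConfigSketch {

    static final String PRODUCT = "product";   // detail cache, 30 min TTL
    static final String PRODUCTS = "products"; // list cache, 10 min TTL

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        return RedisCacheManager.builder(connectionFactory)
                .withCacheConfiguration(PRODUCT,
                        typedConfig(ProductDetailDto.class, Duration.ofMinutes(30)))
                .withCacheConfiguration(PRODUCTS,
                        typedConfig(ProductListDto.class, Duration.ofMinutes(10)))
                .build();
    }

    // Binding the serializer to a concrete type removes the need for
    // activateDefaultTyping, which is why final record types deserialize cleanly.
    private static <T> RedisCacheConfiguration typedConfig(Class<T> valueType, Duration ttl) {
        ObjectMapper mapper = new ObjectMapper().registerModule(new JavaTimeModule());
        return RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(ttl)
                .serializeValuesWith(SerializationPair.fromSerializer(
                        new Jackson2JsonRedisSerializer<>(mapper, valueType)));
    }

    // Placeholder value types for the sketch; the real code caches its own DTOs.
    record ProductDetailDto(Long id, String name) {}
    record ProductListDto(List<ProductDetailDto> content) {}
}
```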
⚠️ The service keeps working on a cache miss

The standard `@Cacheable` behavior (cache miss → DB lookup → cache write) is implemented correctly and is verified by ProductCacheE2ETest. However, there is no fallback for a Redis connection failure: with the current configuration, if Redis goes down, a RedisConnectionFailureException is raised and can turn into a full service outage. From an operations standpoint, measures like the following should be considered.

```java
// Example: register a CacheErrorHandler so cache errors are logged and skipped.
// Note: Spring's default SimpleCacheErrorHandler rethrows, so a handler such as
// LoggingCacheErrorHandler (or a custom one) is needed to degrade gracefully.
@Override
public CacheErrorHandler errorHandler() {
    return new LoggingCacheErrorHandler();
}
```

Alternatively, consider applying `.transactionAware()` on the RedisCacheManager builder and a separate circuit breaker.
Overall

| Item | Result |
|---|---|
| brandId filtering, likes-descending sort | ✅ Met |
| Index applied with before/after performance comparison | |
| Improved like-count lookup and sort structure | ✅ Met |
| Like-count synchronization | ✅ Met |
| Redis cache TTL/invalidation strategy | ✅ Met (like events are intentionally eventually consistent) |
| Normal operation on cache miss | |
🧠 Learnings used
Learnt from: ukukdin
Repo: Loopers-dev-lab/loop-pack-be-l2-vol3-java PR: 203
File: apps/commerce-api/src/main/java/com/loopers/application/product/ProductQueryService.java:53-55
Timestamp: 2026-03-13T05:44:03.072Z
Learning: In Loopers-dev-lab/loop-pack-be-l2-vol3-java PR `#203`, the PRODUCT_LIST cache eviction strategy in ProductService.evictProductListAfterCommit() intentionally uses cache.clear() (full cache invalidation) rather than selective eviction by brandId. This is a deliberate trade-off: TTL is 1 minute and admin write frequency is currently low, so full invalidation is considered acceptable. Selective eviction by brandId is deferred until admin modifications become frequent or cache hit rates are measured to be low via Micrometer metrics.
Round 5 - Product query performance improvements (indexes + Redis cache)
📌 Summary
Details for this week's assignment are written up on my blog.
[Index post] https://velog.io/@hey-sion/%EC%8A%AC%EB%A1%9C%EC%9A%B0-%EC%BF%BC%EB%A6%AC-%EA%B0%9C%EC%84%A0%ED%95%98%EA%B8%B0-1.-index
[Cache post] https://velog.io/@hey-sion/API-%EC%9D%91%EB%8B%B5-%EC%86%8D%EB%8F%84-%EA%B0%9C%EC%84%A0%ED%95%98%EA%B8%B0-cache
🧭 Context & Decision
① Index strategy

Problem

- The `products` table had only the PK index, so every query was a full table scan
- The `brand_id` filter + `like_count DESC` sort query: `rows = 199,176`, `type = ALL`, `Using filesort`

Options

- `(deleted_at, visibility, brand_id)`
- `(deleted_at, visibility, price)`
- `(deleted_at, visibility, like_count DESC)`

Why no `like_count` index was added

- `like_count` is a write-heavy column that is updated on every like

Index column-order strategy

- `brand_id` is an optional parameter, so putting it first would leave the index completely unusable for queries without the filter
- Placing `deleted_at` and `visibility` first lets a single index cover queries both with and without the brand filter

EXPLAIN comparison (200k rows, `brand_id = 1`, `ORDER BY like_count DESC`)

② Cache strategy
Why Redis (Caffeine vs Redis)

Caffeine reads directly from JVM memory, so each lookup is faster, but in a multi-instance environment every server keeps a separate cache, which leads to cache inconsistency.
In practice this is sometimes mitigated by setting very short local-cache TTLs, but I judged Redis the more reasonable choice because it keeps cache consistency simpler.
Cache layout

| Cache | Example key |
|---|---|
| `product` | `product::99` |
| `products` | `products::null:LATEST:0:10` |

Cache invalidation strategy
- Evict on delete/update (`@CacheEvict`)

Serialization issue and fix
With the `GenericJackson2JsonRedisSerializer` + `activateDefaultTyping(NON_FINAL)` combination, Java `record` types are `final`, so no type information was stored, and deserialization failed on cache hits (500 errors). The existing E2E tests had all been passing only because each one reset Redis in `@AfterEach`, so a cache hit never actually occurred.

Fix: switched to `Jackson2JsonRedisSerializer` with an explicit type per cache, removing `activateDefaultTyping` altogether.

📊 Response-time comparison
⚖️ Trade-offs and future considerations

Cache misses without the index

I chose to depend on the cache alone, without a `like_count` index. A cache miss happens only on the first request for a key after TTL expiry, so the DB query cost is concentrated in that single request and the cache handles everything afterward.
What customers care about is "which products are popular with many people" and "roughly how many people are interested", not the exact number. This is a case where a fast response matters more than perfect real-time accuracy.
Cache Stampede
Right after TTL expiry, many simultaneous requests can all fall through to the DB at once (cache stampede). At the current scale this was accepted, but as traffic grows a dedicated countermeasure (mutex lock, probabilistic early expiration, etc.) will be needed.
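The mutex-lock idea mentioned above can be sketched in plain Java (illustrative only, not code from this PR): on a miss, only the first caller for a given key runs the loader, while concurrent callers block on a per-key lock and then see the freshly cached value.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Per-key mutex guard against cache stampede: on a miss, a single caller loads
// the value (e.g., from the DB); concurrent callers for the same key wait on
// that key's lock instead of issuing duplicate loads.
class StampedeGuard<K, V> {

    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Map<K, Object> locks = new ConcurrentHashMap<>();

    public V get(K key, Supplier<V> loader) {
        V cached = cache.get(key);
        if (cached != null) {
            return cached; // fast path: cache hit, no locking
        }
        Object lock = locks.computeIfAbsent(key, k -> new Object());
        synchronized (lock) {
            // Re-check inside the lock: another thread may have loaded it already.
            return cache.computeIfAbsent(key, k -> loader.get());
        }
    }

    public void invalidate(K key) {
        cache.remove(key);
    }
}
```

A production version would also need TTLs and lock cleanup; libraries such as Caffeine or Redisson provide this pattern out of the box.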
Future review items

- Consider adding a `like_count` index or introducing cache-stampede protection

✅ Scope of changes
- Added two composite indexes to the `Product` entity
- Applied `@Cacheable` and `@CacheEvict` in `ProductFacade`
- `CacheConfig`: Redis cache manager setup, improved serialization (explicit types via `Jackson2JsonRedisSerializer`)
- Introduced `PageResult<T>`: resolves the `PageImpl` deserialization issue
- Applied `RedisCleanUp` in `@AfterEach`: prevents Redis cache pollution between E2E tests
- Added `ProductCacheE2ETest`: cache hit/invalidation E2E tests
- Added `DataInitializer`: loads 200k rows of test data for index/cache performance measurement (`test` directory, `@ActiveProfiles("local")`)

Purpose of the change: improve product query (detail, list) performance on a ~200k-product dataset with Redis caching (30 min TTL for detail, 10 min for list).
Key changes:

- Introduced the `PageResult<T>` record type to replace Spring's `Page<T>`, resolving the Jackson serialization issue
- Applied `@Cacheable` (detail/list reads) and `@CacheEvict` (on delete/update) in `ProductFacade`
- Configured `Jackson2JsonRedisSerializer` and `JavaTimeModule` in `CacheConfig` to handle time-type serialization
- Added the `RedisCleanUp` utility for Redis isolation between E2E tests
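The `PageResult<T>` idea can be illustrated with a plain record (a sketch; the field names here are assumptions, not the PR's exact definition). Unlike `PageImpl`, which has no Jackson-friendly constructor, a record with a fixed shape round-trips through JSON without extra configuration:

```java
import java.util.List;

// A cache-friendly, serialization-stable page container: a record with a fixed
// shape deserializes cleanly from JSON, unlike Spring's PageImpl.
record PageResult<T>(List<T> content, int page, int size, long totalElements) {

    static <T> PageResult<T> of(List<T> content, int page, int size, long totalElements) {
        return new PageResult<>(List.copyOf(content), page, size, totalElements);
    }

    int totalPages() {
        return size == 0 ? 0 : (int) Math.ceil((double) totalElements / size);
    }

    boolean hasNext() {
        return page + 1 < totalPages();
    }
}
```

In the actual code, a `PageResult.from(Page<T>)` factory presumably copies these fields out of the Spring `Page` before caching.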
Risks / caveats:

- The description mentions adding the `(deleted_at, visibility, brand_id)` and `(deleted_at, visibility, price)` composite indexes, but they could not be found in the code → must verify whether the index addition was omitted or is managed in a separate migration
- `allEntries=true` invalidation can wipe the cached entries for every page
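If the missing indexes are meant to live on the entity rather than in a migration, the declaration could look like this (a sketch assuming Jakarta Persistence / Spring Boot 3 and the column names from the PR description; verify against the actual `Product` entity):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Index;
import jakarta.persistence.Table;

@Entity
@Table(name = "products", indexes = {
        @Index(name = "idx_products_deleted_visibility_brand",
               columnList = "deleted_at, visibility, brand_id"),
        @Index(name = "idx_products_deleted_visibility_price",
               columnList = "deleted_at, visibility, price")
})
public class Product {

    @Id
    private Long id;

    // Remaining fields omitted - sketch only. Note that Hibernate creates these
    // indexes only when schema generation (ddl-auto) is enabled; otherwise they
    // must be added via a migration (DDL) against the live table.
}
```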
Tests / verification:

- `ProductCacheE2ETest` verifies cache hits (cached data still returned after a DB soft delete), cache misses (invalidation after calling the delete API), and relational data consistency; each test runs `RedisCleanUp.truncateAll()` afterward.