Add v1/orders POST endpoint to get orders in batches #4048
Conversation
Adds a POST handler for the `v1/orders` endpoint that requires a list of order UIDs and responds with a vector of their data. Has a hardcoded limit of 5000 orders per request.
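As a hedged illustration only (not the PR's actual code), the limit check implied by that description might look like this; the const name echoes the MAX_ORDERS_LIMIT mentioned later in review, everything else is assumed:

```rust
// Hypothetical sketch of the per-request cap described above.
const MAX_ORDERS_LIMIT: usize = 5_000;

fn validate_batch_size<T>(uids: &[T]) -> Result<(), String> {
    if uids.len() > MAX_ORDERS_LIMIT {
        return Err(format!(
            "request contains {} order uids, limit is {}",
            uids.len(),
            MAX_ORDERS_LIMIT
        ));
    }
    Ok(())
}
```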
Code Review
This pull request introduces a new endpoint v1/get_orders to retrieve orders in batches using a POST request with a limit of 5000 orders per request. The code adds a new module get_orders_by_uid.rs, modifies api.rs to include the new route, and updates database/orders.rs and orderbook.rs to support fetching multiple orders. I found a missing test case for the MAX_ORDERS_LIMIT validation.
Tested on sepolia staging (queries twice for the same order): response
By the way, we can have a discussion on what the endpoint itself should be. I got a couple of suggestions from Claude.
Based on the above suggestions, I have opted for a custom, not-order-id-like name.
async move {
    Result::<_, Infallible>::Ok(match request_result {
        Ok(uids) => {
            let result = orderbook.get_orders(&uids).await;
This will accumulate all the orders in memory, which is probably not a good idea. Can we instead return a stream of data directly from the DB?
The stream is problematic to implement since an order can be either a regular one or a JIT order. It is doable, though I am not sure it belongs in this PR.
Additionally, if keeping X orders in memory is too much, we can lower the limit I've set (5K) to a more acceptable level.
Could you elaborate? sqlx::fetch already returns a stream, no? Then, axum::Body::from_stream() should send a stream of data back to the client.
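A minimal sketch of that idea, assuming axum with a `PgPool` in state and NDJSON framing; the handler name, SQL, and error handling (bad rows are simply dropped) are assumptions, not the PR's code:

```rust
use axum::{body::Body, extract::State, response::Response};
use bytes::Bytes;
use futures::StreamExt;
use sqlx::PgPool;

// Sketch: stream rows straight from sqlx into the response body instead of
// collecting them into a Vec first, so memory use stays roughly one row deep.
async fn stream_orders(State(pool): State<PgPool>) -> Response {
    let body = async_stream::stream! {
        // `fetch` yields rows as a stream; nothing is buffered beyond one row.
        let mut rows = sqlx::query_scalar::<_, String>(
            "SELECT row_to_json(o)::text FROM orders o", // hypothetical query
        )
        .fetch(&pool);
        while let Some(row) = rows.next().await {
            if let Ok(json) = row {
                yield Ok::<_, std::io::Error>(Bytes::from(json + "\n"));
            }
        }
    };
    Response::builder()
        .header("content-type", "application/x-ndjson")
        .body(Body::from_stream(body))
        .unwrap()
}
```

The catch discussed above still applies: with two sources (regular and JIT orders) the handler would have to chain two such row streams, and NDJSON changes the response shape from a single JSON array.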
Code Review
The implementation adds a new endpoint to fetch orders in batches. A critical issue was found in crates/orderbook/src/database/orders.rs where await is used on a stream, which will cause a compilation error. The implementation also inefficiently collects two full vectors into memory; a suggestion is provided to fix the bug and improve performance by chaining the streams.
let orders: Vec<Result<Order>> =
    orders::many_full_orders_with_quotes(&mut ex, uids.as_slice())
        .await
        .filter_map(async |order| order.ok())
        .map(|order| {
            let (order, quote) = order.into_order_and_quote();
            full_order_with_quote_into_model_order(order, quote.as_ref())
        })
        .collect()
        .await;
let jit_orders: Vec<Result<Order>> =
    database::jit_orders::get_many_by_id(&mut ex, uids.as_slice())
        .await
        .filter_map(async |order| order.ok())
        .map(|order| full_order_into_model_order(order))
        .collect()
        .await;

orders.into_iter().chain(jit_orders).collect()
The .await calls on lines 330 and 340 are incorrect as many_full_orders_with_quotes and get_many_by_id return a BoxStream, not a Future. This will cause a compilation error.
Additionally, the current implementation is inefficient as it collects all results from two separate database queries into two Vecs in memory before merging them. For a large number of UIDs, this can lead to high memory usage.
This can be fixed by removing the erroneous .await calls and refactoring to chain the streams together before collecting, which is more memory-efficient.
Suggested change:
-let orders: Vec<Result<Order>> =
-    orders::many_full_orders_with_quotes(&mut ex, uids.as_slice())
-        .await
-        .filter_map(async |order| order.ok())
-        .map(|order| {
-            let (order, quote) = order.into_order_and_quote();
-            full_order_with_quote_into_model_order(order, quote.as_ref())
-        })
-        .collect()
-        .await;
-let jit_orders: Vec<Result<Order>> =
-    database::jit_orders::get_many_by_id(&mut ex, uids.as_slice())
-        .await
-        .filter_map(async |order| order.ok())
-        .map(|order| full_order_into_model_order(order))
-        .collect()
-        .await;
-orders.into_iter().chain(jit_orders).collect()
+let orders = orders::many_full_orders_with_quotes(&mut ex, uids.as_slice())
+    .filter_map(|order| async { order.ok() })
+    .map(|order| {
+        let (order, quote) = order.into_order_and_quote();
+        full_order_with_quote_into_model_order(order, quote.as_ref())
+    });
+let jit_orders = database::jit_orders::get_many_by_id(&mut ex, uids.as_slice())
+    .filter_map(|order| async { order.ok() })
+    .map(|order| full_order_into_model_order(order));
+orders.chain(jit_orders).try_collect().await
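(`try_collect` comes from `futures::TryStreamExt`; it short-circuits on the first conversion error instead of buffering both full result sets before merging.)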
#[tokio::test]
async fn get_orders_by_uid_request_too_many_orders() {
    let mut uids = Vec::new();
    for _ in 0..5001 {
nit: use the MAX_ORDERS_LIMIT const so the test doesn't break if/when we change the limit
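Something like this sketch, assuming the const is visible to the test and that `OrderUid` has a `Default` impl:

```rust
// One uid more than the cap, derived from the const itself, so the test
// keeps exercising the limit if it ever changes.
let uids: Vec<OrderUid> = (0..=MAX_ORDERS_LIMIT)
    .map(|_| OrderUid::default())
    .collect();
```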
fafk left a comment:
LGTM. The endpoint should be documented in openapi.yml too I think.
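For reference, a hedged sketch of what that openapi.yml entry could look like; the path and schema refs are guesses from the PR description, not the merged spec:

```yaml
/api/v1/orders/lookup:
  post:
    summary: Fetch multiple orders by their UIDs in one request.
    requestBody:
      required: true
      content:
        application/json:
          schema:
            type: array
            maxItems: 5000 # mirrors the hardcoded MAX_ORDERS_LIMIT
            items:
              $ref: "#/components/schemas/UID"
    responses:
      "200":
        description: Data for the requested orders.
        content:
          application/json:
            schema:
              type: array
              items:
                $ref: "#/components/schemas/Order"
```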
Description
Aave wants to track specific orders in bulk, knowing their ids.
Changes
Adds POST handler for `v1/orders/lookup` endpoint that requires a list of order uids and responds with a vector of their data. Has a hardcoded limit of 5000 orders per request.
How to test
Test on staging, query multiple orders using this API.