
Conversation


@Lewiscowles1986 commented Dec 30, 2025

So I tried a few things, which get the minimum time for a request in k6 down to microseconds. They set slightly more aggressive timeouts, use Pydantic to marshal the response from the POST endpoint to the DB, and set a 1-second limit on each request's DB calls (the SET LOCAL DB call actually adds some overhead).
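
For illustration only, here is a minimal sketch of the Pydantic marshalling plus a per-transaction SET LOCAL statement_timeout, assuming asyncpg; the ItemIn/ItemOut models, the items table, and the endpoint name are hypothetical placeholders, not the PR's actual code:

```python
import asyncpg
from fastapi import FastAPI, Request
from pydantic import BaseModel

app = FastAPI()

class ItemIn(BaseModel):
    name: str

class ItemOut(BaseModel):
    id: int
    name: str

@app.post("/items", response_model=ItemOut, status_code=201)
async def create_item(item: ItemIn, request: Request) -> ItemOut:
    # Assumes an asyncpg pool was stored on app.state at startup
    # (see the pool sketch further down).
    pool: asyncpg.Pool = request.app.state.pool
    async with pool.acquire() as conn:
        async with conn.transaction():
            # SET LOCAL applies only inside the current transaction, so every
            # request pays one extra round trip: the overhead noted above.
            await conn.execute("SET LOCAL statement_timeout = '1s'")
            row = await conn.fetchrow(
                "INSERT INTO items (name) VALUES ($1) RETURNING id, name",
                item.name,
            )
    # Pydantic validates and marshals the DB row into the response model.
    return ItemOut(id=row["id"], name=row["name"])
```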

I've not re-run the entire test; instead I've contributed a k6 test, which I used to verify on my local machine, within the also-modified docker-compose YAML.

The migrations are still manual, and you'll probably notice I tried increasing server connections. I think adjusting the min_size of DB connections could be another good way to gain performance.
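
A minimal sketch of what tuning min_size could look like on an asyncpg pool; the DSN and the numbers are placeholders, not values from the PR:

```python
import asyncpg

async def init_pool() -> asyncpg.Pool:
    # min_size pre-opens connections at startup so requests never wait on a
    # cold connect; max_size caps how many server connections the app holds
    # and must stay below the Postgres max_connections setting.
    return await asyncpg.create_pool(
        dsn="postgresql://fastapi_app@localhost:5432/app",  # placeholder DSN
        min_size=10,
        max_size=50,
    )
```

Pre-opened connections avoid paying the connect cost on the first requests after startup, which matters at these request rates.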

I'm pretty sure it still won't top Bun, but you could get a few more thousand requests per second.


@emerson-proenca

Do you believe running PyPy would improve Python performance?

PyPy + FastAPI = fastapi/fastapi#3944
PyPy speed = https://speed.pypy.org/

@Lewiscowles1986 (Author)

Maybe, @emerson-proenca; if you make a PR I'd love to run it. I've not had a lot of experience with PyPy, so I consider it esoteric at the moment; but maybe it's very easy and I'm just unclear on the approach.

I do know AI was telling me I was barking up the wrong tree with this, but when I looked at all my latencies, they seemed to support me more than the AI review of my work did.

@Lewiscowles1986 (Author) commented Dec 30, 2025

Here are some logs from the k6 test included with the PR, run on an M2 MacBook Air:


          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

     execution: local
        script: ../script.js
        output: -

     scenarios: (100.00%) 1 scenario, 100 max VUs, 1m0s max duration (incl. graceful stop):
              * default: 100 looping VUs for 30s (gracefulStop: 30s)


     ✓ is status 201

     checks.........................: 100.00% ✓ 137626      ✗ 0     
     data_received..................: 44 MB   1.5 MB/s
     data_sent......................: 26 MB   878 kB/s
     http_req_blocked...............: avg=2.77µs  min=0s     med=1µs     max=18.46ms p(90)=2µs     p(95)=2µs    
     http_req_connecting............: avg=1.22µs  min=0s     med=0s      max=6.77ms  p(90)=0s      p(95)=0s     
     http_req_duration..............: avg=21.58ms min=6.09ms med=18.25ms max=1.45s   p(90)=28.56ms p(95)=37.09ms
       { expected_response:true }...: avg=21.58ms min=6.09ms med=18.25ms max=1.45s   p(90)=28.56ms p(95)=37.09ms
     http_req_failed................: 0.00%   ✓ 0           ✗ 137626
     http_req_receiving.............: avg=16.37µs min=4µs    med=10µs    max=5.15ms  p(90)=30µs    p(95)=40µs   
     http_req_sending...............: avg=5.25µs  min=1µs    med=3µs     max=4.96ms  p(90)=8µs     p(95)=12µs   
     http_req_tls_handshaking.......: avg=0s      min=0s     med=0s      max=0s      p(90)=0s      p(95)=0s     
     http_req_waiting...............: avg=21.56ms min=6.02ms med=18.23ms max=1.45s   p(90)=28.53ms p(95)=37.08ms
     http_reqs......................: 137626  4585.523346/s
     iteration_duration.............: avg=21.79ms min=6.47ms med=18.45ms max=1.45s   p(90)=28.77ms p(95)=37.32ms
     iterations.....................: 137626  4585.523346/s
     vus............................: 100     min=100       max=100 
     vus_max........................: 100     min=100       max=100 

@Lewiscowles1986 (Author)

Apologies, I realised after re-running the baseline that I'd committed something I should not have. I've commented it out, but basically, I doubled the number of queries being run as part of a failed experiment and then accidentally staged it with git add . The rest is done now.

@antonputra (Owner) left a comment

Thank you, @Lewiscowles1986, for the PR. Sorry I don't have clear guidelines for pull requests. Would you be able to just create a fastapi-app-v2 folder and keep the original as is? It's just easier to compare in the future.

@Lewiscowles1986 (Author)

To be honest, merging it isn't the most important thing, as you'll only gain at most a few thousand requests (not per second, overall 😄).

Thank you for the tutorial; if you'd still like the changes, I'll make them.

@antonputra (Owner)

@Lewiscowles1986 Yes please, I'll keep it. When it's time to refresh the benchmark, I'll go over and test everything again.
