
TrashAI-API

A custom backend for the TrashAI repository, built with TensorFlow.js and Node.js. It lets you skip the original project's front-end and send an HTTP request containing a .jpg image directly to the model, receiving the most likely detected object in return.

Usage

Endpoint

The server listens on port 3000 for requests at the URL /detect. It only accepts requests with Content-Type image/jpeg. You can send the image as a byte array; the API handles the decoding and transformations required for the model to analyze it.

  POST http://localhost:3000/detect

While you can send an image of any file size, the model only analyzes resolutions up to 640x640. There is therefore no benefit to sending higher-resolution images, as detection quality will not improve.
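Because of this limit, you may want to downscale large images on the client before sending them. Below is a minimal sketch using Java's standard ImageIO API; it is not part of this repository, and the helper name and file handling are only illustrative.

import javax.imageio.ImageIO;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;

// Hypothetical helper: scales an image down so its longest side is at most 640px
// and re-encodes it as JPEG bytes, ready to be sent to /detect.
static byte[] loadAndDownscale(File jpegFile) throws IOException {
  BufferedImage src = ImageIO.read(jpegFile);
  double scale = Math.min(1.0, 640.0 / Math.max(src.getWidth(), src.getHeight()));
  int w = (int) Math.round(src.getWidth() * scale);
  int h = (int) Math.round(src.getHeight() * scale);

  BufferedImage scaled = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
  Graphics2D g = scaled.createGraphics();
  g.drawImage(src, 0, 0, w, h, null);
  g.dispose();

  ByteArrayOutputStream out = new ByteArrayOutputStream();
  ImageIO.write(scaled, "jpg", out);
  return out.toByteArray();
}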

Response

The API responds exclusively with the ClassID of the most likely result: a String that corresponds to one of the keys in the included name-map.json file.

Possible responses include:

Response        Meaning
"-2"            Bad Request or Model Busy. For Bad Request, you can also check the Status Code of the response.
"-1"            No object has been detected.
"0" to "59"     Most likely detection; can be looked up in name-map.json.

Example: Using Java HttpClient

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

HttpClient client = HttpClient.newHttpClient();
String detectedObject; // Key of the detected object for the name-map.json

try {
  // Load the .jpg to analyze as a byte array
  byte[] imageBytes = Files.readAllBytes(Path.of("image.jpg"));

  // Make the request
  HttpRequest req = HttpRequest.newBuilder()
    .uri(URI.create("http://localhost:3000/detect"))
    .header("Content-Type", "image/jpeg")
    .POST(HttpRequest.BodyPublishers.ofByteArray(imageBytes))
    .build();

  // Catch the response
  HttpResponse<String> res = client.send(req, HttpResponse.BodyHandlers.ofString());
  System.out.println("Connection Status: " + res.statusCode());
  detectedObject = res.body();

} catch (IOException | InterruptedException e) {
  System.out.println(e);
}

Deployment

Docker (Recommended)

This deployment is compatible with both the Node/CPU and the JavaScript/WebGL releases.

To deploy this project using Docker, you first have to build an image.

  docker build -t trash-ai-api . 

You can then run the image as a container, exposing port 3000, with:

  docker run -p 3000:3000 trash-ai-api

After this first terminal-driven run, you can manage the container directly in Docker Desktop (if applicable).

Terminal

⚠️

This deployment is only for the JavaScript / WebGL release.

Keep in mind that this mode requires you to handle dependencies (like the correct version of Node) yourself. The project was built around Node 18-slim. For testing purposes, you can run the model using:

  npm run start-api
