Hosting Machine Learning Models
This guide will help you host your machine learning model.
Only ONNX files are supported; if your model is in another format (for example, a PyTorch .pth file), you must convert it to ONNX first. The conversion tutorial video below walks through the process, and additional resources are available on YouTube and in online articles.
Provide all the details exactly as expected: the class names (in the same order used when training the model) and the image dimensions (also as used in training). If the classes are not listed in training order, the model's results will be misleading, and if the class names are invalid the model will not be usable.
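Before filling in the form, it can help to sanity-check this metadata. The helper below is purely illustrative (it is not part of the Andromeda Labs API); it only catches structural mistakes such as duplicate class names, not an incorrect ordering, which you must verify against your training setup:

```python
def validate_model_metadata(classes, input_size):
    """Sanity-check the class list and image dimensions before uploading.

    `classes` must list labels in the exact order used during training;
    `input_size` is the (height, width) the model was trained on.
    """
    if len(classes) != len(set(classes)):
        raise ValueError("Duplicate class names detected")
    if not all(isinstance(c, str) and c.strip() for c in classes):
        raise ValueError("Every class must be a non-empty string")
    height, width = input_size
    if height <= 0 or width <= 0:
        raise ValueError("Image dimensions must be positive")
    return True


print(validate_model_metadata(["cat", "dog", "bird"], (224, 224)))  # True
```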
Andromeda Labs provides all the compute needed to host and run your models. It is your responsibility to use the provided resources responsibly; misuse of our API will lead to a ban of your account.
Click Create Asset and choose the kind of asset you wish to create.
Below is how you can use the playground.
Model results are shown in the side results pane. After fine-tuning, you can update the model and upload new files by clicking the Edit option.
Once you are satisfied with the results in the playground, you can integrate the model into your workflow, as shown below.
A model/asset ID and the API endpoint URL are required before you can access the hosted model in your code.
Once you have the model (asset) ID and the API URL, you can access the model via the following implementation. It covers what our API expects, so feel free to customize it for your own workflow.
import requests
import os


def make_inference(url, image_path, model_id):
    """
    Send an image to the Andromeda Labs API for inference.

    Args:
        url (str): Complete API URL copied from the website
        image_path (str): Path to the image file (JPEG, PNG, BMP, or WebP)
        model_id (str): The UUID of the model to use for inference

    Returns:
        dict: The API response as a dictionary
    """
    # Get the filename from the path
    filename = os.path.basename(image_path)

    # Determine the content type from the file extension
    if filename.lower().endswith(".png"):
        content_type = "image/png"
    elif filename.lower().endswith(".bmp"):
        content_type = "image/bmp"
    elif filename.lower().endswith(".webp"):
        content_type = "image/webp"
    else:
        content_type = "image/jpeg"  # Default for .jpeg/.jpg files

    try:
        # Open the image file
        with open(image_path, "rb") as image_file:
            # Prepare the multipart request
            files = {"file": (filename, image_file, content_type)}
            data = {"asset": model_id}  # The model UUID to use for inference

            # Send the request to the API
            response = requests.post(url, files=files, data=data)

            # Raise an exception for 4xx/5xx status codes
            response.raise_for_status()

            # Return the parsed JSON response
            return response.json()
    except requests.exceptions.HTTPError as e:
        print(f"API Error: {e}")
        print(f"Response: {e.response.text}")
        return None
    except Exception as e:
        print(f"Error: {e}")
        return None


# Example usage:
if __name__ == "__main__":
    # Copy the complete URL from the website
    api_url = "https://andromedalabs-api.space/reason/670bf59b-e9e9-4205-b1ff-f961cb1e0867/"

    # Path to your image file
    image_path = "./my_image.jpeg"

    # Model ID for inference
    model_id = "86dfd433-700f-41d6-b395-2a89bdf81f06"

    # Make the API call
    result = make_inference(api_url, image_path, model_id)

    if result:
        print("Inference result:")
        print(result)
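Once you have the response dictionary, you will typically want to pull out the top prediction. The exact response shape is not specified here, so the snippet below assumes a hypothetical `{"predictions": [{"label": ..., "confidence": ...}, ...]}` structure; adapt the key names to whatever your model's endpoint actually returns:

```python
def top_prediction(result):
    """Return the highest-confidence entry from an assumed
    {"predictions": [{"label": ..., "confidence": ...}, ...]} response."""
    predictions = result.get("predictions", [])
    if not predictions:
        return None
    return max(predictions, key=lambda p: p["confidence"])


# Hypothetical example response, not actual API output:
example = {
    "predictions": [
        {"label": "cat", "confidence": 0.91},
        {"label": "dog", "confidence": 0.09},
    ]
}
best = top_prediction(example)
print(best["label"])  # cat
```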