Andromeda Labs

Hosting Machine Learning Models

This guide will help you host your machine learning model


Last updated 1 month ago

Only ONNX files are supported. If your model is in another format, such as .pth, you need to convert it first.
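If you are scripting your upload pipeline, a quick extension check can tell you whether a file still needs converting. A minimal sketch; the helper name is ours, not part of the platform:

```python
from pathlib import Path

def needs_conversion(model_path):
    """Return True if the model file is not already in ONNX format."""
    return Path(model_path).suffix.lower() != ".onnx"

print(needs_conversion("myFirstModel.pth"))      # True
print(needs_conversion("ImageClassifier.onnx"))  # False
```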

1. Converting your model to ONNX

If your model is not in ONNX format, you can follow the conversion tutorial video below. Additional resources are available on YouTube and in online articles.

You can also use the following snippet, adapted from Microsoft Learn:
import torch
import torch.onnx

# Note: `Network` (your model class) and `input_size` come from your
# training script; they are not defined in this snippet.

def convert_to_onnx(model, input_size):
    # Set the model to inference mode
    model.eval()

    # Create a dummy input tensor
    dummy_input = torch.randn(1, input_size, requires_grad=True)

    # Export the model
    torch.onnx.export(
        model,                         # model being run
        dummy_input,                   # model input (or a tuple for multiple inputs)
        "ImageClassifier.onnx",        # where to save the model
        export_params=True,            # store the trained parameter weights inside the model file
        opset_version=10,              # the ONNX opset version to export to
        do_constant_folding=True,      # execute constant folding for optimization
        input_names=["modelInput"],    # the model's input names
        output_names=["modelOutput"],  # the model's output names
        dynamic_axes={
            "modelInput": {0: "batch_size"},   # variable-length batch axis
            "modelOutput": {0: "batch_size"},
        },
    )
    print("Model has been converted to ONNX")


if __name__ == "__main__":
    # Load the trained model
    model = Network()
    path = "myFirstModel.pth"
    model.load_state_dict(torch.load(path))

    # Conversion to ONNX
    convert_to_onnx(model, input_size)

2. Hosting your model

Provide all the details exactly as they were used during training: the class labels (in the same order used to train the model) and the image dimensions.

If the classes are not provided in the correct order, the model's results will be misleading. Note also that the model won't be usable if the provided classes are invalid.
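To see why the order matters: the hosted model predicts class indices, and the class list you enter is what maps each index back to a label. A minimal illustration; the class names here are hypothetical:

```python
# Hypothetical class list -- yours must match the training order exactly.
classes = ["cat", "dog", "horse"]

def label_for(predicted_index):
    """Map a model's predicted class index back to its human-readable label."""
    return classes[predicted_index]

print(label_for(1))  # "dog" -- correct only if "dog" was index 1 during training
```

If the list were entered in a different order, every prediction would silently map to the wrong label.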

3. Managing & Accessing the model

Andromeda Labs provides all the necessary compute to host and run your models. It is your responsibility to use the provided resources responsibly; misuse of our API will lead to a ban of your account.
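This guide does not publish specific rate limits, so one simple way to stay on the safe side is a client-side throttle between requests. A sketch under that assumption; the interval value is arbitrary:

```python
import time

class Throttle:
    """Client-side throttle: allow at most one call per min_interval seconds."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self):
        """Block until at least min_interval has passed since the last call."""
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()

# Usage: create one Throttle and call throttle.wait() before each API request.
```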

Click Create Asset and choose the kind of asset you wish to create.

Below is how you can use the playground.

Model results are displayed in the results pane on the side. You can update a model after fine-tuning and upload new files by clicking the Edit option.

Once you are satisfied with the playground results, you can integrate the model into your workflow, as shown below.

Accessing the model via the API

A model (or asset) ID and the API endpoint URL are required before you can access the hosted model in your code.

Once you have the model (or asset) ID and the API URL, you can access the model via the following implementations.

These implementations demonstrate what our API expects; feel free to adapt and customize them for your own workflow.

Python

import requests
import os


def make_inference(url, image_path, model_id):
    """
    Send an image to the Andromedalabs API for inference.

    Args:
        url (str): Complete API URL copied from the website
        image_path (str): Path to the image file (JPEG, PNG, BMP, or WebP)
        model_id (str): The UUID of the model to use for inference

    Returns:
        dict: The API response as a dictionary
    """
    # Get the filename from the path
    filename = os.path.basename(image_path)

    # Determine content type based on file extension
    if filename.lower().endswith(".png"):
        content_type = "image/png"
    elif filename.lower().endswith(".bmp"):
        content_type = "image/bmp"
    elif filename.lower().endswith(".webp"):
        content_type = "image/webp"
    else:
        content_type = "image/jpeg"  # Default for .jpeg/.jpg files

    try:
        # Open the image file
        with open(image_path, "rb") as image_file:
            # Prepare the request
            files = {"file": (filename, image_file, content_type)}

            data = {"asset": model_id}  # The model UUID to use for inference

            # Send the request to the API
            response = requests.post(url, files=files, data=data)

            # Check if the request was successful
            response.raise_for_status()

            # Return the API response
            return response.json()

    except requests.exceptions.HTTPError as e:
        print(f"API Error: {e}")
        print(f"Response: {e.response.text}")
        return None
    except Exception as e:
        print(f"Error: {e}")
        return None


# Example usage:
if __name__ == "__main__":
    # Copy the complete URL from the website
    api_url = "https://andromedalabs-api.space/reason/670bf59b-e9e9-4205-b1ff-f961cb1e0867/"

    # Path to your image file
    image_path = "./my_image.jpeg"

    # Model ID for inference
    model_id = "86dfd433-700f-41d6-b395-2a89bdf81f06"

    # Make the API call
    result = make_inference(api_url, image_path, model_id)

    if result:
        print("Inference result:")

        print(result)
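For production use you may want to wrap calls like make_inference above with retry logic for transient network failures. A minimal sketch; since make_inference returns None on failure, we retry on a None result, and the attempt count and backoff values are arbitrary:

```python
import time

def with_retries(fn, attempts=3, backoff=1.0):
    """Call fn() until it returns a non-None result, with exponential backoff.

    Suits functions like make_inference, which return None on failure.
    """
    for attempt in range(attempts):
        result = fn()
        if result is not None:
            return result
        if attempt < attempts - 1:
            time.sleep(backoff * (2 ** attempt))
    return None

# Usage:
# result = with_retries(lambda: make_inference(api_url, image_path, model_id))
```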
JavaScript (browser)

/**
 * Send an image to the Andromedalabs API for inference.
 * 
 * @param {string} url - Complete API URL copied from the website
 * @param {File} imageFile - Image file object (from file input or drag & drop)
 * @param {string} modelId - The UUID of the model to use for inference
 * @returns {Promise<object>} - The API response as a JSON object
 */
async function makeInference(url, imageFile, modelId) {
  try {
    // Create form data
    const formData = new FormData();
    
    // Add the image file to the form
    formData.append('file', imageFile);
    
    // Add the model ID
    formData.append('asset', modelId);
    
    // Send the request to the API
    const response = await fetch(url, {
      method: 'POST',
      body: formData
    });
    
    if (!response.ok) {
      throw new Error(`API responded with status: ${response.status}`);
    }
    
    // Return the API response
    return await response.json();
    
  } catch (error) {
    console.error(`Error: ${error.message}`);
    return null;
  }
}

// Example usage with file input:
document.getElementById('fileInput').addEventListener('change', async (event) => {
  const imageFile = event.target.files[0];
  if (!imageFile) return;
  
  // Copy the complete URL from the website
  const apiUrl = 'https://andromedalabs-api.space/reason/670bf59b-e9e9-4205-b1ff-f961cb1e0867/';
  
  // Model ID for inference
  const modelId = '86dfd433-700f-41d6-b395-2a89bdf81f06';
  
  // Show loading indicator
  document.getElementById('result').textContent = 'Processing...';
  
  // Make the API call
  const result = await makeInference(apiUrl, imageFile, modelId);
  
  if (result) {
    document.getElementById('result').textContent = JSON.stringify(result, null, 2);
  } else {
    document.getElementById('result').textContent = 'Error processing the image';
  }
});
TypeScript (Node.js)

import axios, { AxiosResponse } from 'axios';
import * as fs from 'fs';
import * as path from 'path';
import FormData from 'form-data';

/**
 * Send an image to the Andromedalabs API for inference.
 * 
 * @param url - Complete API URL copied from the website
 * @param imagePath - Path to the image file (JPEG, PNG, BMP, or WebP)
 * @param modelId - The UUID of the model to use for inference
 * @returns The API response as a JSON object or null if there's an error
 */
async function makeInference(url: string, imagePath: string, modelId: string): Promise<any | null> {
  // Get the filename from the path
  const filename = path.basename(imagePath);
  
  // Determine content type based on file extension
  let contentType = 'image/jpeg'; // Default
  if (filename.toLowerCase().endsWith('.png')) {
    contentType = 'image/png';
  } else if (filename.toLowerCase().endsWith('.bmp')) {
    contentType = 'image/bmp';
  } else if (filename.toLowerCase().endsWith('.webp')) {
    contentType = 'image/webp';
  }
  
  try {
    // Create form data
    const formData = new FormData();
    
    // Add the image file to the form
    formData.append('file', fs.createReadStream(imagePath), {
      filename,
      contentType
    });
    
    // Add the model ID
    formData.append('asset', modelId);
    
    // Send the request to the API
    const response: AxiosResponse = await axios.post(url, formData, {
      headers: formData.getHeaders()
    });
    
    // Return the API response
    return response.data;
    
  } catch (error: any) {
    if (error.response) {
      console.error(`API Error: ${error.message}`);
      console.error(`Response: ${JSON.stringify(error.response.data)}`);
    } else {
      console.error(`Error: ${error.message}`);
    }
    return null;
  }
}

// Example usage:
async function main() {
  // Copy the complete URL from the website
  const apiUrl = 'https://andromedalabs-api.space/reason/670bf59b-e9e9-4205-b1ff-f961cb1e0867/';
  
  // Path to your image file
  const imagePath = './my_image.jpeg';
  
  // Model ID for inference
  const modelId = '86dfd433-700f-41d6-b395-2a89bdf81f06';
  
  // Make the API call
  const result = await makeInference(apiUrl, imagePath, modelId);
  
  if (result) {
    console.log('Inference result:');
    console.log(result);
  }
}

main().catch(console.error);
Java

import java.io.File;
import java.io.IOException;

import okhttp3.MediaType;
import okhttp3.MultipartBody;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

import com.google.gson.Gson;
import com.google.gson.JsonObject;

public class AndromedaApiClient {
    
    private static final OkHttpClient client = new OkHttpClient();
    private static final Gson gson = new Gson();

    /**
     * Send an image to the Andromedalabs API for inference.
     * 
     * @param url Complete API URL copied from the website
     * @param imagePath Path to the image file (JPEG, PNG, BMP, or WebP)
     * @param modelId The UUID of the model to use for inference
     * @return The API response as a JsonObject or null if there's an error
     */
    public static JsonObject makeInference(String url, String imagePath, String modelId) {
        File imageFile = new File(imagePath);
        String filename = imageFile.getName();
        
        // Determine content type based on file extension
        MediaType contentType = MediaType.parse("image/jpeg"); // Default
        if (filename.toLowerCase().endsWith(".png")) {
            contentType = MediaType.parse("image/png");
        } else if (filename.toLowerCase().endsWith(".bmp")) {
            contentType = MediaType.parse("image/bmp");
        } else if (filename.toLowerCase().endsWith(".webp")) {
            contentType = MediaType.parse("image/webp");
        }
        
        try {
            // Create request body
            RequestBody requestBody = new MultipartBody.Builder()
                .setType(MultipartBody.FORM)
                .addFormDataPart("file", filename,
                    RequestBody.create(contentType, imageFile))
                .addFormDataPart("asset", modelId)
                .build();
            
            // Build request
            Request request = new Request.Builder()
                .url(url)
                .post(requestBody)
                .build();
            
            // Send request and get response
            try (Response response = client.newCall(request).execute()) {
                if (!response.isSuccessful()) {
                    System.err.println("API Error: " + response.code());
                    System.err.println("Response: " + response.body().string());
                    return null;
                }
                
                // Parse and return the JSON response
                String responseBody = response.body().string();
                return gson.fromJson(responseBody, JsonObject.class);
            }
            
        } catch (IOException e) {
            System.err.println("Error: " + e.getMessage());
            return null;
        }
    }
    
    public static void main(String[] args) {
        // Copy the complete URL from the website
        String apiUrl = "https://andromedalabs-api.space/reason/670bf59b-e9e9-4205-b1ff-f961cb1e0867/";
        
        // Path to your image file
        String imagePath = "./my_image.jpeg";
        
        // Model ID for inference
        String modelId = "86dfd433-700f-41d6-b395-2a89bdf81f06";
        
        // Make the API call
        JsonObject result = makeInference(apiUrl, imagePath, modelId);
        
        if (result != null) {
            System.out.println("Inference result:");
            System.out.println(result);
        }
    }
}
Go

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"io/ioutil"
	"mime/multipart"
	"net/http"
	"net/textproto"
	"os"
	"path/filepath"
	"strings"
)

// MakeInference sends an image to the Andromedalabs API for inference
func MakeInference(url, imagePath, modelID string) (map[string]interface{}, error) {
	// Get the filename from the path
	filename := filepath.Base(imagePath)

	// Determine content type based on file extension
	contentType := "image/jpeg" // Default
	if strings.HasSuffix(strings.ToLower(filename), ".png") {
		contentType = "image/png"
	} else if strings.HasSuffix(strings.ToLower(filename), ".bmp") {
		contentType = "image/bmp"
	} else if strings.HasSuffix(strings.ToLower(filename), ".webp") {
		contentType = "image/webp"
	}

	// Create a buffer to store the multipart form data
	var requestBody bytes.Buffer
	writer := multipart.NewWriter(&requestBody)

	// Add the model ID to the form
	if err := writer.WriteField("asset", modelID); err != nil {
		return nil, fmt.Errorf("error adding model ID to form: %v", err)
	}

	// Open the image file
	file, err := os.Open(imagePath)
	if err != nil {
		return nil, fmt.Errorf("error opening image file: %v", err)
	}
	defer file.Close()

	// Create a form part for the image with the detected content type
	// (CreateFormFile would hard-code application/octet-stream)
	header := make(textproto.MIMEHeader)
	header.Set("Content-Disposition", fmt.Sprintf(`form-data; name="file"; filename="%s"`, filename))
	header.Set("Content-Type", contentType)
	part, err := writer.CreatePart(header)
	if err != nil {
		return nil, fmt.Errorf("error creating form part: %v", err)
	}

	// Copy the file content to the form
	if _, err = io.Copy(part, file); err != nil {
		return nil, fmt.Errorf("error copying file content: %v", err)
	}

	// Close the writer
	if err = writer.Close(); err != nil {
		return nil, fmt.Errorf("error closing multipart writer: %v", err)
	}
	
	// Create a new HTTP request
	req, err := http.NewRequest("POST", url, &requestBody)
	if err != nil {
		return nil, fmt.Errorf("error creating request: %v", err)
	}
	
	// Set content type header
	req.Header.Set("Content-Type", writer.FormDataContentType())
	
	// Send the request
	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		return nil, fmt.Errorf("error sending request: %v", err)
	}
	defer resp.Body.Close()
	
	// Read the response body
	respBody, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return nil, fmt.Errorf("error reading response body: %v", err)
	}
	
	// Check if the request was successful
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("API error: %s", respBody)
	}
	
	// Parse the JSON response
	var result map[string]interface{}
	if err := json.Unmarshal(respBody, &result); err != nil {
		return nil, fmt.Errorf("error parsing JSON response: %v", err)
	}
	
	return result, nil
}

func main() {
	// Copy the complete URL from the website
	apiURL := "https://andromedalabs-api.space/reason/670bf59b-e9e9-4205-b1ff-f961cb1e0867/"
	
	// Path to your image file
	imagePath := "./my_image.jpeg"
	
	// Model ID for inference
	modelID := "86dfd433-700f-41d6-b395-2a89bdf81f06"
	
	// Make the API call
	result, err := MakeInference(apiURL, imagePath, modelID)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		return
	}
	
	// Print the result
	fmt.Println("Inference result:")
	prettyJSON, _ := json.MarshalIndent(result, "", "  ")
	fmt.Println(string(prettyJSON))
}
Flutter (Dart)

dependencies:
  flutter:
    sdk: flutter
  http: ^1.2.1 # Or latest version
  path: ^1.9.0 # Or latest version
  http_parser: ^4.0.2 # For MediaType
  # You might also need image_picker if you want users to select images
  # image_picker: ^1.0.7
// For jsonDecode
import 'dart:convert'; 
// For File operations
import 'dart:io'; 
import 'package:http/http.dart' as http;
// For basename and extension
import 'package:path/path.dart' as p;
// For MediaType
import 'package:http_parser/http_parser.dart';

// Define a type alias for clarity, representing the JSON response structure
typedef ApiResponse = Map<String, dynamic>;

/// Sends an image to the Andromedalabs API for inference.
///
/// Args:
///   apiUrl (String): Complete API URL copied from the website.
///   imagePath (String): Path to the image file (JPEG, PNG, BMP, or WebP).
///   modelId (String): The ID of the model to use for inference (copy it from the website)
///
/// Returns:
///   Future<ApiResponse?>: The API response as a Map (decoded JSON),
///                         or null if an error occurred.
Future<ApiResponse?> makeInference(String apiUrl, String imagePath, String modelId) async {
  final imageFile = File(imagePath);

  // 1. Check if the file exists before proceeding
  if (!await imageFile.exists()) {
    print("Error: Image file not found at path: $imagePath");
    return null;
  }

  // 2. Get filename and determine content type
  final filename = p.basename(imagePath);
  final fileExtension = p.extension(imagePath).toLowerCase();
  String contentType;

  switch (fileExtension) {
    case ".png":
      contentType = "image/png";
      break;
    case ".bmp":
      contentType = "image/bmp";
      break;
    case ".webp":
      contentType = "image/webp";
      break;
    case ".jpg":
    case ".jpeg":
      contentType = "image/jpeg";
      break;
    default:
      print("Error: Unsupported image format: $fileExtension");
      // Or default to jpeg if preferred: contentType = "image/jpeg";
      return null;
  }

  try {
    // 3. Create a Multipart request
    final request = http.MultipartRequest('POST', Uri.parse(apiUrl));

    // 4. Add the model_id as a form field
    request.fields['asset'] = modelId;

    // 5. Add the image file
    request.files.add(
      await http.MultipartFile.fromPath(
        'file', // The field name expected by the API for the file
        imagePath,
        filename: filename, // Send the original filename
        contentType: MediaType.parse(contentType), // Set the correct content type
      ),
    );

    // 6. Send the request and get the response
    print("Sending inference request to: $apiUrl");
    final streamedResponse = await request.send();

    // 7. Read the response
    final response = await http.Response.fromStream(streamedResponse);

    // 8. Check the status code
    if (response.statusCode >= 200 && response.statusCode < 300) {
      // Success! Decode the JSON body
      try {
        final ApiResponse responseData = jsonDecode(response.body);
        return responseData;
      } on FormatException catch (e) {
         print("Error decoding JSON response: $e");
         print("Response body: ${response.body}");
         return null;
      }
    } else {
      // API Error
      print("API Error: Status Code ${response.statusCode}");
      print("Response: ${response.body}");
      return null;
    }
  } on http.ClientException catch (e) {
    // Network or request sending errors
    print("Network/Client Error: $e");
    return null;
  } on FileSystemException catch (e) {
    // Errors reading the file
     print("File System Error: $e");
     return null;
  } catch (e) {
    // Catch any other unexpected errors
    print("An unexpected error occurred: $e");
    return null;
  }
}

// --- Example Usage (e.g., inside a Flutter widget or main function) ---

// This would typically be triggered by a button press or in initState
Future<void> runInferenceExample() async {
  // Copy the complete URL from the website
  const String apiUrl = "https://andromedalabs-api.space/reason/670bf59b-e9e9-4205-b1ff-f961cb1e0867/";

  // Example using image_picker (add image_picker to pubspec.yaml and
  // import 'package:image_picker/image_picker.dart' at the top of this file).
  // If the image will be selected from the gallery:
  final ImagePicker picker = ImagePicker();
  
  final XFile? image = await picker.pickImage(source: ImageSource.gallery);
  
  if (image == null) {
     print("No image selected.");
     return;
  }
  
  // path to the image
  final String imagePath = image.path;

  // Model ID for inference
  const String modelId = "86dfd433-700f-41d6-b395-2a89bdf81f06";

  print("Starting inference...");
  final result = await makeInference(apiUrl, imagePath, modelId);

  if (result != null) {
    print("Inference result:");
    print(result);
    // You can now use the 'result' map in your Flutter UI
  } else {
    print("Inference failed.");
  }
}

// To run the example (e.g., in your main.dart or a button's onPressed).
// Requires import 'package:flutter/material.dart' and a MyApp widget.
void main() {
  runApp(MyApp());
  
  // Call the example function
  runInferenceExample();
}

(Screenshot captions from the original page: Go to the Hosting Panel and click New Model · Classes in order · Select the valid dimensions (as used in training) · Upon saving the model, it is visible in the Hosting tab · Try out the playground or manage the model · Click the folder icon, choose an image, and get results · Enable the sources you will need · API URL · Where the model (or Asset) ID is located.)