Train Custom Elements and Generate Images
Custom Train Elements (LoRAs) with uploaded images
Overview:
Element Training allows users to create custom LoRAs (Low-Rank Adaptations) for more flexible image generation. Users can train Elements on various SDXL models at 1024×1024 with options to use default or customise advanced settings. This enables more consistent image creation for specific styles, characters, products, and visual looks, giving users greater control over generating unique and tailored imagery. You can follow this detailed guide on how to use Element (LoRA) Training via the UI.
Follow this guide on how to train your own Elements via API. You can find an example of the code here.
How to use:
1. Dataset Assembly
Refer to the Dataset Assembly guide for key considerations and instructions on creating an optimal dataset. Once you have assembled your dataset, download your images into a new folder called ‘LoRA-Training’. For this example, we will create a dataset of dog photos against pastel backgrounds.
2. Set up Environment
Make sure you have an API subscription plan and an API key. If not, follow our Quick Start guide to get started.
Within your ‘LoRA-Training’ folder, create three new Python files: ‘create-dataset.py’, ‘upload-images.py’, and ‘train-element.py’. Open a terminal window in the ‘LoRA-Training’ folder created in Step 1, and ensure that you have the requests package installed. You can do this by running pip install requests from your terminal.
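The scripts in this guide all use the same base URL and headers, so one way to keep them consistent is a small shared configuration at the top of each file. The names BASE_URL and API_KEY below are our own convention, not part of the API:

```python
# Shared configuration assumed by the snippets in this guide.
# BASE_URL and API_KEY are our own naming convention, not API requirements.
BASE_URL = "https://cloud.leonardo.ai/api/rest/v1"
API_KEY = "YOUR_API_KEY"  # replace with the key from your API settings

headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "authorization": f"Bearer {API_KEY}",
}
```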
3. Create your Dataset
3.1 Create Dataset ID
Create an empty dataset by following this API Documentation, providing a name and description. This example uses the name ‘dog-stock-photos’ with the description ‘Stock photos of a small dog against pastel backgrounds’. Example code can be found in ‘create-dataset.py’:
import requests

url = "https://cloud.leonardo.ai/api/rest/v1/datasets"

payload = {
    "name": "dog-stock-photos",
    "description": "Stock photos of a small dog against pastel backgrounds"
}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "authorization": "Bearer YOUR_API_KEY"
}

response = requests.post(url, json=payload, headers=headers)
print(response.text)
Run the script from your terminal with python create-dataset.py. If successful, you will get a response back similar to:
{
  "insert_datasets_one": {
    "id": "Your DATASET_ID"
  }
}
Save this Dataset ID as it will be used in the next step.
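Rather than copying the ID by hand, you can parse it out of the response body. A minimal sketch, using the response shape shown above (the helper name extract_dataset_id is our own, not part of the API):

```python
import json

def extract_dataset_id(response_text):
    """Pull the dataset ID out of the create-dataset response body."""
    return json.loads(response_text)["insert_datasets_one"]["id"]

# Using the response shape shown above:
sample = '{"insert_datasets_one": {"id": "your-dataset-id"}}'
print(extract_dataset_id(sample))  # your-dataset-id
```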
To check if your Dataset was created successfully, head to the UI under Training & Datasets, and the new empty dataset should be created here.
3.2 Upload images to Dataset
Once the dataset is created, you can start uploading images to it. Make a call to the ‘Upload dataset image’ endpoint, which returns a temporary presigned URL that expires within 2 minutes. More detailed instructions on how to upload an image using a presigned URL are available here.
import json
import requests

def init_dataset_image_upload(dataset_id):
    """Initialise dataset image upload and get presigned URL"""
    init_url = f"{BASE_URL}/datasets/{dataset_id}/upload"
    payload = {"extension": "jpg"}
    response = requests.post(init_url, json=payload, headers=headers)
    if response.status_code != 200:
        raise Exception(f"Init dataset image upload failed: {response.text}")
    upload_data = response.json()['uploadDatasetImage']
    return {
        'url': upload_data['url'],
        'fields': json.loads(upload_data['fields']),  # fields arrive as a JSON string
        'image_id': upload_data['id']
    }
Then make a POST call to the returned URL to upload the image, repeating this for every image in the folder.
def upload_image_to_s3(presigned_url, fields, image_path):
    """Upload image to S3 using presigned URL"""
    with open(image_path, 'rb') as image_file:
        files = {'file': ('image.jpg', image_file, 'image/jpeg')}
        upload_data = fields.copy()
        response = requests.post(presigned_url, data=upload_data, files=files)
        if response.status_code not in [200, 204]:
            raise Exception(f"S3 upload failed for {image_path}: {response.text}")
    return True
If successful, all of the uploaded images will be reflected in the UI under Training & Datasets.
4. Training the Custom Element
Once all the images have been successfully uploaded, a custom Element can be trained from that dataset.
This can be done programmatically using the ‘Train a Custom Element’ API with the following parameters:
- instance_prompt: this word is automatically included when the Element is active and acts as its trigger. The trigger word should ideally be abstract, such as markxperson, if you are training on something that is not in the base model, to avoid misinterpretation by the AI. For our purposes, we will use ‘a dogstockphoto’
- lora_focus: the training category. Make sure this is set appropriately when training an Element, as each category uses a different training method for the best results
- train_text_encoder: the text encoder in Stable Diffusion translates text prompts into a language the AI understands. Training it helps the model interpret prompts and generate images that match specific text descriptions more accurately. We will turn this on for better results
- sd_version: the version of Stable Diffusion to use if not using a custom model. We will use the default, SDXL_1_0
- num_train_epochs: determines how many complete passes (epochs) the training algorithm makes through the entire dataset during model training. The default is 100
- learning_rate: determines how large or small the steps are when the model adjusts its internal parameters during training. We will keep the default of 0.000001
We add these parameters into our train-element.py and run the script:
import requests

url = "https://cloud.leonardo.ai/api/rest/v1/elements"

payload = {
    "name": "dog-stock-photos",
    "instance_prompt": "a dogstockphoto",
    "lora_focus": "General",
    "train_text_encoder": True,
    "resolution": 1024,
    "sd_version": "SDXL_1_0",
    "num_train_epochs": 100,
    "learning_rate": 0.000001,
    "description": "Stock photos of a small dog against pastel backgrounds",
    "datasetId": "<YOUR_DATASETID>"
}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "authorization": "Bearer <YOUR_API_KEY>"
}

response = requests.post(url, json=payload, headers=headers)
print(response.text)
Please note that the training process may take from several minutes to a few hours depending on the number of images and the training settings. If successful, you will receive a response like the following:
{
  "sdTrainingJob": {
    "userLoraId": 1234,
    "apiCreditCost": 750
  }
}
Save your userLoraId, as we will need this for future generations of images. To find a list of all your previous userLoraIds, use this endpoint.
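The userLoraId can likewise be read straight out of the response body. A minimal sketch, using the response shape shown above (parse_training_response is our own helper name):

```python
import json

def parse_training_response(response_text):
    """Extract the userLoraId and API credit cost from the training response."""
    job = json.loads(response_text)["sdTrainingJob"]
    return job["userLoraId"], job["apiCreditCost"]

# Using the response shape shown above:
sample = '{"sdTrainingJob": {"userLoraId": 1234, "apiCreditCost": 750}}'
lora_id, cost = parse_training_response(sample)
print(lora_id, cost)  # 1234 750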
5. Generating your images
To generate images using your trained Element, we will use the Generate Image API as you normally would, but also include the ‘userElements’ information. Create a new Python script called generate-image.py and copy in the script below, replacing <YOUR_API_KEY>, an appropriate <YOUR_MODELID>, and the <YOUR_USERLORAID> saved at the end of the previous step. Note that your prompt needs to include the instance_prompt trigger word specified in Step 4.
import requests

url = "https://cloud.leonardo.ai/api/rest/v1/generations"

payload = {
    "alchemy": True,
    "height": 768,
    "modelId": "<YOUR_MODELID>",
    "num_images": 4,
    "presetStyle": "DYNAMIC",
    "prompt": "A dogstockphoto looking happily at the camera",
    "width": 1024,
    "userElements": [
        {
            "userLoraId": <YOUR_USERLORAID>,
            "weight": 1
        }
    ]
}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "authorization": "Bearer <YOUR_API_KEY>"
}

response = requests.post(url, json=payload, headers=headers)
print(response.text)
After running this Python script, the generation response will appear in the console, and the resulting images should reflect a style similar to your training data. Increase the weight parameter if the Element does not influence the output strongly enough.
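If you want to fetch the results programmatically, the POST response includes a generation ID that you can pass to the single-generation endpoint. This sketch assumes the response carries an sdGenerationJob object with a generationId field (confirm the exact shape in the API reference); parse_generation_response is our own helper name:

```python
import json

def parse_generation_response(response_text):
    """Extract the generation ID from the create-generation response.

    Assumes the response carries an 'sdGenerationJob' object with a
    'generationId' field; confirm the exact shape in the API reference.
    """
    return json.loads(response_text)["sdGenerationJob"]["generationId"]

# Hypothetical response body for illustration:
sample = '{"sdGenerationJob": {"generationId": "aaaa-bbbb", "apiCreditCost": 24}}'
print(parse_generation_response(sample))  # aaaa-bbbb
```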