Compare commits

..

10 Commits

Author SHA1 Message Date
tami-p40    ca1c34b91b    negative emmbedings               2024-05-26 08:49:27 +03:00
tami-p40    de2badd5a0    xcfg                              2024-05-20 10:29:34 +03:00
tami-p40    3fa661a54c    qa                                2024-05-18 20:41:30 +03:00
tami-p40    c8e825b247    AIrefactor                        2024-05-18 14:06:29 +03:00
tami-p40    9e73316ab8    local run                         2024-05-18 13:20:52 +03:00
tami-p40    7991f74a39    readme                            2024-05-18 13:18:58 +03:00
tami-p40    1aacd87547    shorts work                       2024-05-17 22:24:12 +03:00
tami-p40    a206a1103c    limit caption length to 1024      2024-05-17 14:31:15 +03:00
tami-p40    31907abe33    img2img                           2024-05-16 01:05:35 +03:00
ariel1985   ec813f571e    adding .env template and update readme    2024-04-09 22:40:23 +03:00
6 changed files with 477 additions and 186 deletions

.env_template (new file)

@@ -0,0 +1,4 @@
TOKEN=<telegram-bot-token>
API_ID=<telegram-id-api-id>
API_HASH=<telegram-id-api-hash>
SD_URL=<stable-diffusion-api-url>

.gitignore (vendored)

@@ -1,3 +1,8 @@
*.png
.env
*.session
vscode/
venv/
*.session-journal
logs/stable_diff_telegram_bot.log
*.session

README.md

@@ -1,16 +1,16 @@
# AI Powered Art in a Telegram Bot!

this is a txt2img/img2img bot to converse with the SDweb bot [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API) running on the tami telegram channel

## How to

### txt2img

supported invocation:
`/draw <text>` - send prompt text to the bot and it will draw an image
you can add a `negative_prompt` using `ng: <text>`
you can add `denoised intermediate steps` using `steps: <text>`
basically anything the `/controlnet/txt2img` API payload supports, like:
```json
{
    "prompt": "",
@@ -22,6 +22,7 @@ like,
    "cfg_scale": 7
}
```
examples:
`/draw a city street`
and without people
@@ -32,26 +33,45 @@ with more steps
to change the model use:
`/getmodels` - to get a list of models and then click to set it.

- note1: anything after `ng` will be treated as the negative prompt, a.k.a. things you do not want to see in your diffusion!
- note2: on [negative_prompt](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Negative-prompt) (aka ng): this is a bit of a black art. I took the recommended defaults for the `Deliberate` model from this fun [alt-model spreadsheet](https://docs.google.com/spreadsheets/d/1Q0bYKRfVOTUHQbUsIISCztpdZXzfo9kOoAy17Qhz3hI/edit#gid=797387129). ~~and you (currently) can only ADD to it, not replace.~~
- note3: on `steps` - a value of 1 will generate only the first "step" of the bot's hallucination. the default is 40. higher values take longer and give a "better" image. the range is hardcoded to 1-70.

see ![video](https://user-images.githubusercontent.com/57876960/212490617-f0444799-50e5-485e-bc5d-9c24a9146d38.mp4)
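
if you want to hit the same endpoint outside telegram, here is a minimal sketch (assumes a WebUI started with `--api`; `SD_URL` is a placeholder for your server address, and the payload keys mirror the JSON above):
```python
# minimal txt2img sketch; assumes the WebUI API is reachable at SD_URL
import requests

SD_URL = "http://127.0.0.1:7860"  # assumption: adjust to your server

payload = {
    "prompt": "a city street",
    "negative_prompt": "people",  # what the bot builds from ng:
    "steps": 40,                  # what the bot builds from steps:
    "cfg_scale": 7,
    "width": 512,
    "height": 512,
}
r = requests.post(f"{SD_URL}/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
images = r.json()["images"]  # base64-encoded PNGs
```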
### img2img

`/img <prompt> ds:<0.0-1.0>` - reply to an image with a prompt text and it will draw a new image
you can set the `denoising_strength` using `ds:<float>` - set it low (like 0.2) if you just want to slightly change things. it defaults to 0.4
basically anything the `/controlnet/img2img` API payload supports
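
under the hood `/img` becomes an img2img call; a rough sketch of the equivalent request (assumes a local `input.png`; `ds:` maps to `denoising_strength`):
```python
# rough img2img sketch; assumes the WebUI API at SD_URL and a local input.png
import base64
import requests

SD_URL = "http://127.0.0.1:7860"  # assumption: adjust to your server

with open("input.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "murder on the dance floor",
    "init_images": [init_image],
    "denoising_strength": 0.2,  # the ds: value; low = only slight changes
}
r = requests.post(f"{SD_URL}/sdapi/v1/img2img", json=payload)
r.raise_for_status()
```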
### general

the `X/Y/Z script` ([link](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#xyz-plot)) is one powerful thing.
for prompts we use the Search/Replace option (a.k.a `prompt s/r`), [explained here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-sr)
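
for reference, a sketch of what an x/y/z plot payload can look like with prompt s/r on the X axis (the axis enum values here, 7 for prompt S/R, follow the bot's own mapping and may differ between webui versions, so treat them as assumptions):
```python
# sketch of an x/y/z plot payload using prompt S/R on the X axis
payload = {
    "prompt": "a photo of united states",
    "steps": 15,  # keep steps low for big grids
    "script_name": "x/y/z plot",
    "script_args": [
        7, "united states,china", [],  # X axis: prompt S/R (enum 7, assumed) + replacement list
        0, "", [],                     # Y axis: unused
        0, "", [],                     # Z axis: unused
        True,                          # draw legend
        False, False, False, False, False, False, 0, False,
    ],
}
```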
## Setup

Install requirements using venv:
```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
Install requirements using conda:
```bash
conda create -n sdw python=3.8
conda activate sdw
pip install -r requirements.txt
```
(note: conda is not strictly necessary, but it is recommended)
## Original README

My Bot uses [Automatic1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) as the backend.
Follow the directions on their repo for setup instructions.

@@ -62,15 +82,23 @@ arguments such as `--xformers` to use xformers memory efficient attention.
You can use the web ui interface that Automatic1111 provides to select the model and VAE to use.
Their repo has documentation on how to do so. I also recommend doing a test generation.

Copy the `.env_template` file to `.env` in the same folder as `main.py`.
In the `.env` file fill out the following environment variables:
`TOKEN = xxxx`, where xxxx is your telegram bot token.
`API_ID = xxxx`, where xxxx is your telegram api id.
`API_HASH = xxxx`, where xxxx is your telegram api hash.
`SD_URL = xxxx`, where xxxx is your sd api url.

To get the API_ID and API_HASH, create a new application on Telegram's developer website:
1. Open a browser, visit https://my.telegram.org and log in with your Telegram account.
2. Click on "API development tools".
3. Fill out the form to create a new application. You can enter any valid details you want.
4. After you've created the application, you'll be given the API_ID and API_HASH.
5. Once you have these, add them to your `.env` file.
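
a quick way to sanity-check that your `.env` is picked up (mirrors the `load_dotenv()` call in `main.py`):
```python
# sanity check for .env loading; run from the folder containing .env
import os
from dotenv import load_dotenv

load_dotenv()
for name in ("TOKEN", "API_ID", "API_HASH", "SD_URL"):
    print(name, "set" if os.environ.get(name) else "MISSING")
```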
Now you can run the bot:
`python main.py`

localAPIRun.py (new file)

@@ -0,0 +1,170 @@
from datetime import datetime
import urllib.request
import base64
import json
import time
import os

url = "pop-os.local"
webui_server_url = f'http://{url}:7860'

out_dir = 'api_out'
out_dir_t2i = os.path.join(out_dir, 'txt2img')
out_dir_i2i = os.path.join(out_dir, 'img2img')
os.makedirs(out_dir_t2i, exist_ok=True)
os.makedirs(out_dir_i2i, exist_ok=True)


def timestamp():
    return datetime.fromtimestamp(time.time()).strftime("%Y%m%d-%H%M%S")


def encode_file_to_base64(path):
    with open(path, 'rb') as file:
        return base64.b64encode(file.read()).decode('utf-8')


def decode_and_save_base64(base64_str, save_path):
    with open(save_path, "wb") as file:
        file.write(base64.b64decode(base64_str))


def call_api(api_endpoint, **payload):
    data = json.dumps(payload).encode('utf-8')
    request = urllib.request.Request(
        f'{webui_server_url}/{api_endpoint}',
        headers={'Content-Type': 'application/json'},
        data=data,
    )
    response = urllib.request.urlopen(request)
    return json.loads(response.read().decode('utf-8'))


def call_txt2img_api(**payload):
    response = call_api('sdapi/v1/txt2img', **payload)
    for index, image in enumerate(response.get('images')):
        save_path = os.path.join(out_dir_t2i, f'txt2img-{timestamp()}-{index}.png')
        decode_and_save_base64(image, save_path)


def call_img2img_api(**payload):
    response = call_api('sdapi/v1/img2img', **payload)
    for index, image in enumerate(response.get('images')):
        save_path = os.path.join(out_dir_i2i, f'img2img-{timestamp()}-{index}.png')
        decode_and_save_base64(image, save_path)


if __name__ == '__main__':
    payload = {
        "prompt": "masterpiece, (best quality:1.1), 1girl <lora:lora_model:1>",  # extra networks also in prompts
        "negative_prompt": "",
        "seed": 1,
        "steps": 20,
        "width": 512,
        "height": 512,
        "cfg_scale": 7,
        "sampler_name": "DPM++ SDE Karras",
        "n_iter": 1,
        "batch_size": 1,
        # example args for x/y/z plot
        # steps 4,"20,30"
        # denoising==22
        # S/R 7,"X,united states,china",
        "script_args": [
            4,
            "20,30,40",
            [],
            0,
            "",
            [],
            0,
            "",
            [],
            True,
            False,
            False,
            False,
            False,
            False,
            False,
            0,
            False
        ],
        "script_name": "x/y/z plot",
        # example args for Refiner and ControlNet
        # "alwayson_scripts": {
        #     "ControlNet": {
        #         "args": [
        #             {
        #                 "batch_images": "",
        #                 "control_mode": "Balanced",
        #                 "enabled": True,
        #                 "guidance_end": 1,
        #                 "guidance_start": 0,
        #                 "image": {
        #                     "image": encode_file_to_base64(r"B:\path\to\control\img.png"),
        #                     "mask": None  # base64, None when not needed
        #                 },
        #                 "input_mode": "simple",
        #                 "is_ui": True,
        #                 "loopback": False,
        #                 "low_vram": False,
        #                 "model": "control_v11p_sd15_canny [d14c016b]",
        #                 "module": "canny",
        #                 "output_dir": "",
        #                 "pixel_perfect": False,
        #                 "processor_res": 512,
        #                 "resize_mode": "Crop and Resize",
        #                 "threshold_a": 100,
        #                 "threshold_b": 200,
        #                 "weight": 1
        #             }
        #         ]
        #     },
        #     "Refiner": {
        #         "args": [
        #             True,
        #             "sd_xl_refiner_1.0",
        #             0.5
        #         ]
        #     }
        # },
        # "enable_hr": True,
        # "hr_upscaler": "R-ESRGAN 4x+ Anime6B",
        # "hr_scale": 2,
        # "denoising_strength": 0.5,
        # "styles": ['style 1', 'style 2'],
        # "override_settings": {
        #     'sd_model_checkpoint': "sd_xl_base_1.0",  # this can be used to switch the sd model
        # },
    }
    call_txt2img_api(**payload)

    init_images = [
        encode_file_to_base64(r"../stable-diffusion-webui/output/img2img-images/2024-05-15/00012-357584826.png"),
        # encode_file_to_base64(r"B:\path\to\img_2.png"),
        # "https://image.can/also/be/a/http/url.png",
    ]
    batch_size = 2
    payload = {
        "prompt": "1girl, blue hair",
        "seed": 1,
        "steps": 20,
        "width": 512,
        "height": 512,
        "denoising_strength": 0.5,
        "n_iter": 1,
        "init_images": init_images,
        "batch_size": batch_size if len(init_images) == 1 else len(init_images),
        # "mask": encode_file_to_base64(r"B:\path\to\mask.png")
    }
    # if len(init_images) > 1 then batch_size should be == len(init_images)
    # else if len(init_images) == 1 then batch_size can be any int >= 1
    call_img2img_api(**payload)

    # there exists a useful extension that converts webui calls into API payloads,
    # particularly useful when you wish to set up arguments of extensions and scripts:
    # https://github.com/huchenlei/sd-webui-api-payload-display

main.py

@@ -1,239 +1,323 @@
import os
import re
import io
import uuid
import base64
import requests
from datetime import datetime
from PIL import Image, PngImagePlugin
from pyrogram import Client, filters
from pyrogram.types import InlineKeyboardButton, InlineKeyboardMarkup
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

API_ID = os.environ.get("API_ID")
API_HASH = os.environ.get("API_HASH")
TOKEN = os.environ.get("TOKEN_givemtxt2img")
SD_URL = os.environ.get("SD_URL")
print(SD_URL)

app = Client("stable", api_id=API_ID, api_hash=API_HASH, bot_token=TOKEN)

IMAGE_PATH = 'images'
# Ensure IMAGE_PATH directory exists
os.makedirs(IMAGE_PATH, exist_ok=True)

# Model-specific embeddings for negative prompts
# see the civit.ai model page for the embeddings recommended for each model
model_negative_prompts = {
    "Anything-Diffusion": "",
    "Deliberate": "",
    "Dreamshaper": "",
    "DreamShaperXL_Lightning": "",
    "icbinp": "",
    "realisticVisionV60B1_v51VAE": "realisticvision-negative-embedding",
    "v1-5-pruned-emaonly": ""
}


def encode_file_to_base64(path):
    with open(path, 'rb') as file:
        return base64.b64encode(file.read()).decode('utf-8')


def decode_and_save_base64(base64_str, save_path):
    with open(save_path, "wb") as file:
        file.write(base64.b64decode(base64_str))


# Set default payload values
default_payload = {
    "prompt": "",
    "seed": -1,  # Random seed
    "negative_prompt": "extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs",
    "enable_hr": False,
    "Sampler": "DPM++ SDE Karras",
    "denoising_strength": 0.35,
    "batch_size": 1,
    "n_iter": 1,
    "steps": 35,
    "cfg_scale": 7,
    "width": 512,
    "height": 512,
    "restore_faces": False,
    "override_settings": {},
    "override_settings_restore_afterwards": True,
}


def update_negative_prompt(model_name):
    if model_name in model_negative_prompts:
        suffix = model_negative_prompts[model_name]
        default_payload["negative_prompt"] += f", {suffix}"


def parse_input(input_string):
    payload = default_payload.copy()
    prompt = []
    matches = re.finditer(r"(\w+):", input_string)
    last_index = 0
    script_args = [0, "", [], 0, "", [], 0, "", [], True, False, False, False, False, False, False, 0, False]
    script_name = None
    slot_mapping = {0: (0, 1), 1: (3, 4), 2: (6, 7)}
    slot_index = 0
    for match in matches:
        key = match.group(1).lower()
        value_start_index = match.end()
        if last_index != match.start():
            prompt.append(input_string[last_index: match.start()].strip())
        last_index = value_start_index
        value_end_match = re.search(r"(?=\s+\w+:|$)", input_string[value_start_index:])
        if value_end_match:
            value_end_index = value_end_match.start() + value_start_index
        else:
            value_end_index = len(input_string)
        value = input_string[value_start_index: value_end_index].strip()
        if key == "ds":
            key = "denoising_strength"
        if key == "ng":
            key = "negative_prompt"
        if key == "cfg":
            key = "cfg_scale"
        if key in default_payload:
            payload[key] = value
        elif key in ["xsr", "xsteps", "xds", "xcfg", "nl", "ks", "rs"]:
            script_name = "x/y/z plot"
            if slot_index < 3:
                script_slot = slot_mapping[slot_index]
                if key == "xsr":
                    script_args[script_slot[0]] = 7  # Enum value for xsr
                    script_args[script_slot[1]] = value
                elif key == "xsteps":
                    script_args[script_slot[0]] = 4  # Enum value for xsteps
                    script_args[script_slot[1]] = value
                elif key == "xds":
                    script_args[script_slot[0]] = 22  # Enum value for xds
                    script_args[script_slot[1]] = value
                elif key == "xcfg":
                    script_args[script_slot[0]] = 6  # Enum value for CFG Scale
                    script_args[script_slot[1]] = value
                slot_index += 1
            elif key == "nl":
                script_args[9] = False  # Draw legend
            elif key == "ks":
                script_args[10] = True  # Keep sub images
            elif key == "rs":
                script_args[11] = True  # Set random seed to sub images
        else:
            prompt.append(f"{key}:{value}")
        last_index = value_end_index
    payload["prompt"] = " ".join(prompt).strip()
    if not payload["prompt"]:
        payload["prompt"] = input_string.strip()
    if script_name:
        payload["script_name"] = script_name
        payload["script_args"] = script_args
    return payload


def create_caption(payload, user_name, user_id, info):
    caption = f"**[{user_name}](tg://user?id={user_id})**\n\n"
    prompt = payload["prompt"]
    print(payload["prompt"])
    print(info)
    # info looks like: Steps: 3, Sampler: Euler, CFG scale: 7.0, Seed: 4094161400, Size: 512x512, Model hash: 15012c538f, Model: realisticVisionV60B1_v51VAE, Denoising strength: 0.35, Version: v1.8.0-1-g20cdc7c
    # Define a regular expression pattern to match the seed value
    seed_pattern = r"Seed: (\d+)"
    match = re.search(seed_pattern, info)
    if match:
        seed_value = match.group(1)
        print(f"Seed value: {seed_value}")
        caption += f"**{seed_value}**\n"
    else:
        print("Seed value not found in the info string.")
    caption += f"**{prompt}**\n"
    if len(caption) > 1024:
        caption = caption[:1021] + "..."
    return caption


def call_api(api_endpoint, payload):
    try:
        response = requests.post(f'{SD_URL}/{api_endpoint}', json=payload)
        response.raise_for_status()
        return response.json()
    except requests.RequestException as e:
        print(f"API call failed: {e}")
        return None


def process_images(images, user_id, user_name):
    def generate_unique_name():
        unique_id = str(uuid.uuid4())[:7]
        return f"{user_name}-{unique_id}"

    word = generate_unique_name()
    for i in images:
        image = Image.open(io.BytesIO(base64.b64decode(i.split(",", 1)[0])))
        png_payload = {"image": "data:image/png;base64," + i}
        response2 = requests.post(f"{SD_URL}/sdapi/v1/png-info", json=png_payload)
        response2.raise_for_status()
        pnginfo = PngImagePlugin.PngInfo()
        pnginfo.add_text("parameters", response2.json().get("info"))
        image.save(f"{IMAGE_PATH}/{word}.png", pnginfo=pnginfo)
    return word, response2.json().get("info")


@app.on_message(filters.command(["draw"]))
def draw(client, message):
    msgs = message.text.split(" ", 1)
    if len(msgs) == 1:
        message.reply_text("Format :\n/draw < text to image >\nng: < negative (optional) >\nsteps: < steps value (1-70, optional) >")
        return
    payload = parse_input(msgs[1])
    print(payload)
    # Check if xds is used in the payload
    if "xds" in msgs[1].lower():
        message.reply_text("`xds` key cannot be used in the `/draw` command. Use `/img` instead.")
        return
    K = message.reply_text("Please Wait 10-15 Seconds")
    r = call_api('sdapi/v1/txt2img', payload)
    if r:
        for i in r["images"]:
            word, info = process_images([i], message.from_user.id, message.from_user.first_name)
            caption = create_caption(payload, message.from_user.first_name, message.from_user.id, info)
            message.reply_photo(photo=f"{IMAGE_PATH}/{word}.png", caption=caption)
        K.delete()
    else:
        message.reply_text("Failed to generate image. Please try again later.")
        K.delete()


@app.on_message(filters.command(["img"]))
def img2img(client, message):
    if not message.reply_to_message or not message.reply_to_message.photo:
        message.reply_text("Reply to an image with\n`/img < prompt > ds:0-1.0`\n\nds stands for the `Denoising_strength` parameter. Set that low (like 0.2) if you just want to slightly change things. Defaults to 0.35\n\nExample: `/img murder on the dance floor ds:0.2`")
        return
    msgs = message.text.split(" ", 1)
    if len(msgs) == 1:
        message.reply_text("dont FAIL in life")
        return
    payload = parse_input(msgs[1])
    print(f"input:\n{payload}")
    photo = message.reply_to_message.photo
    photo_file = app.download_media(photo)
    init_image = encode_file_to_base64(photo_file)
    os.remove(photo_file)  # Clean up downloaded image file
    payload["init_images"] = [init_image]
    K = message.reply_text("Please Wait 10-15 Seconds")
    r = call_api('sdapi/v1/img2img', payload)
    if r:
        for i in r["images"]:
            word, info = process_images([i], message.from_user.id, message.from_user.first_name)
            caption = create_caption(payload, message.from_user.first_name, message.from_user.id, info)
            message.reply_photo(photo=f"{IMAGE_PATH}/{word}.png", caption=caption)
        K.delete()
    else:
        message.reply_text("Failed to process image. Please try again later.")
        K.delete()


@app.on_message(filters.command(["getmodels"]))
async def get_models(client, message):
    try:
        response = requests.get(f"{SD_URL}/sdapi/v1/sd-models")
        response.raise_for_status()
        models_json = response.json()
        print(models_json)
        buttons = [
            [InlineKeyboardButton(model["title"], callback_data=model["model_name"])]
            for model in models_json
        ]
        await message.reply_text("Select a model [checkpoint] to use", reply_markup=InlineKeyboardMarkup(buttons))
    except requests.RequestException as e:
        await message.reply_text(f"Failed to get models: {e}")


@app.on_callback_query()
async def process_callback(client, callback_query):
    sd_model_checkpoint = callback_query.data
    options = {"sd_model_checkpoint": sd_model_checkpoint}
    try:
        response = requests.post(f"{SD_URL}/sdapi/v1/options", json=options)
        response.raise_for_status()
        # Update the negative prompt based on the selected model
        update_negative_prompt(sd_model_checkpoint)
        await callback_query.message.reply_text(f"Checkpoint set to {sd_model_checkpoint}")
    except requests.RequestException as e:
        await callback_query.message.reply_text(f"Failed to set checkpoint: {e}")
        print(f"Error setting checkpoint: {e}")


@app.on_message(filters.command(["info_sd_bot"]))
async def info(client, message):
    await message.reply_text("""
now support for xyz scripts, see [sd wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#xyz-plot) !
currently supported
`xsr` - search/replace text/emoji in the prompt, more info [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-sr)
`xds` - denoise strength, only valid for img2img
`xsteps` - steps
**note** limit the overall `steps:` to a lower value (10-20) for big xyz plots

aside from that you can use the usual `ng`, `ds`, `cfg`, `steps` for single image generation.
""", disable_web_page_preview=True)


app.run()
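
a quick illustration of what `parse_input` above is expected to return (an untested sketch; values are kept as strings and coerced by the API):
```python
# illustrative only; expected values based on reading parse_input above
p = parse_input("a city street ng: people steps: 20")
# p["prompt"]          -> "a city street"
# p["negative_prompt"] -> "people"   (ng: is an alias for negative_prompt)
# p["steps"]           -> "20"       (kept as a string; the API coerces it)

p = parse_input("a cat xsteps: 10,20,30")
# p["script_name"] -> "x/y/z plot"
# p["script_args"] -> first axis set to the steps enum (4) with values "10,20,30"
```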

requirements.txt

@@ -1,4 +1,4 @@
pyrogram
requests
tgcrypto==1.2.2
Pillow