mirror of
https://github.com/5shekel/stable-diffusion-telegram-bot.git
synced 2024-05-22 19:33:14 +03:00
Compare commits

17 commits:

8fd4b07559, 9f8bff3540, ec900759a1, 49c9f6337a, f62d6a6dbc, 5baffd360b, 814a779b47, ca1c34b91b, de2badd5a0, 3fa661a54c, c8e825b247, 9e73316ab8, 7991f74a39, 1aacd87547, a206a1103c, 31907abe33, ec813f571e
.env_template (new file, @@ -0,0 +1,4 @@)

TOKEN=<telegram-bot-token>
API_ID=<telegram-id-api-id>
API_HASH=<telegram-id-api-hash>
SD_URL=<stable-diffusion-api-url>
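The four variables above are exactly what `main.py` validates at startup before creating the Pyrogram client. A stdlib-only sketch of that check (the helper name `missing_vars` is illustrative, not part of the repo; `main.py` does the equivalent after `load_dotenv()`):

```python
import os

# the four variables defined in .env_template
REQUIRED = ("TOKEN", "API_ID", "API_HASH", "SD_URL")

def missing_vars(env=None):
    """Return the names from .env_template that are empty or unset."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]
```

`main.py` raises `EnvironmentError` when this list is non-empty.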
.gitignore (vendored, @@ -1,3 +1,9 @@)

*.png
.env
.session
*.session
vscode/
venv/
*.session-journal
logs/stable_diff_telegram_bot.log
*.session
images/
Removed file (@@ -1,41 +0,0 @@), a Jupyter notebook:

{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "this came from upstream, but it is not yet fixed"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "!pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15+e163309.d20230103-cp38-cp38-linux_x86_64.whl\n",
        "\n",
        "!git clone https://github.com/camenduru/stable-diffusion-webui\n",
        "!git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /content/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui\n",
        "!git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /content/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser\n",
        "!git clone https://github.com/camenduru/stable-diffusion-webui-huggingface /content/stable-diffusion-webui/extensions/stable-diffusion-webui-huggingface\n",
        "!git clone https://github.com/Vetchems/sd-civitai-browser /content/stable-diffusion-webui/extensions/sd-civitai-browser\n",
        "%cd /content/stable-diffusion-webui\n",
        "\n",
        "!wget https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /content/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt\n",
        "!wget https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /content/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt\n",
        "\n",
        "!python launch.py --share --xformers --api\n"
      ]
    }
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    },
    "orig_nbformat": 4
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
README.md (62 lines changed)

@@ -1,16 +1,16 @@

# AI Powered Art in a Telegram Bot!

-this is a txt2img bot to converse with SDweb bot [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API) running on tami telegram channel
+this is a txt2img/img2img bot to converse with the SDweb bot [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API) running on the tami telegram channel

-# How to
-
-supported invocation:
+## How to
+### txt2img

`/draw <text>` - send prompt text to the bot and it will draw an image
you can add a `negative_prompt` using `ng: <text>`
you can set the number of denoising steps using `steps: <text>`

basically anything the `/controlnet/txt2img` API payload supports, for example:

```json
{
  "prompt": "",
@@ -22,6 +22,7 @@
  "cfg_scale": 7
}
```
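A payload like the one above can be posted straight to the webui API. A minimal stdlib sketch (assumptions: a local webui started with `--api` and no auth; `build_payload` and `txt2img` are illustrative helper names, with defaults mirroring the bot's documented steps=40):

```python
import base64
import json
import urllib.request

SD_URL = "http://127.0.0.1:7860"  # assumption: webui launched locally with --api

def build_payload(prompt, negative_prompt="", steps=40, cfg_scale=7):
    # defaults mirror the bot's documented behaviour (steps defaults to 40)
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "width": 512,
        "height": 512,
    }

def txt2img(prompt, **overrides):
    payload = {**build_payload(prompt), **overrides}
    req = urllib.request.Request(
        f"{SD_URL}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # the webui returns images base64-encoded
    return [base64.b64decode(img) for img in body["images"]]
```

Usage would be e.g. `txt2img("a city street", negative_prompt="people", steps=50)`.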
examples:
`/draw a city street`
and without people

@@ -32,26 +33,45 @@ with more steps

to change the model use:
`/getmodels` - to get a list of models, then click one to set it.

- note1: anything after `ng` will be considered part of the negative prompt, a.k.a. things you do not want to see in your diffusion!
- note2: on [negative_prompt](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Negative-prompt) (aka `ng`):
this is a bit of a black art. i took the recommended defaults for the `Deliberate` model from this fun [alt-model spreadsheet](https://docs.google.com/spreadsheets/d/1Q0bYKRfVOTUHQbUsIISCztpdZXzfo9kOoAy17Qhz3hI/edit#gid=797387129).
~~and you (currently) can only ADD to it, not replace it.~~
- note3: on `steps` - a value of 1 will generate only the first "step" of the bot's hallucination. the default is 40. higher values take longer and give a "better" image. the range is hardcoded to 1-70.

### img2img
`/img <prompt> ds:<0.0-1.0>` - reply to an image with a prompt text and it will draw a new image

you can set `denoising_strength` using `ds:<float>`.
Set it low (like 0.2) if you just want to slightly change things. defaults to 0.4

basically anything the `/controlnet/img2img` API payload supports

### general
the `X/Y/Z plot` script ([link](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#xyz-plot)) is one powerful feature

for prompts we use the Search/Replace option (a.k.a. `prompt s/r`), [explained here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-sr)

## Setup

-Install requirements
+Install requirements using venv

```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

+Install requirements using conda

```bash
conda create -n sdw python=3.8
conda activate sdw
pip install -r requirements.txt
```
(note: conda is not strictly necessary, but it is recommended)

-## Original readme
+## Original README

My Bot uses [Automatic1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) as the backend.
Follow the directions on their repo for setup instructions.

@@ -62,15 +82,23 @@ arguments such as `--xformers` to use xformers memory efficient attention.

You can use the web UI that Automatic1111 provides to select the model and VAE to use.
Their repo has documentation on how to do so. I also recommend doing a test generation.

-Create a file called `.env` in the same folder as `main.py`. Inside the `.env` file,
-create a line `TOKEN = xxxx`, where xxxx is your telegram bot token.
-create a line `API_ID = xxxx`, where xxxx is your telegram api id.
-create a line `API_HASH = xxxx`, where xxxx is your telegram api hash.
-create a line `SD_URL = xxxx`, where xxxx is your sd api url.
+Copy the file `.env_template` to `.env` in the same folder as `main.py`.
+In the `.env` file fill out the following environment variables:
+`TOKEN = xxxx`, where xxxx is your telegram bot token.
+`API_ID = xxxx`, where xxxx is your telegram api id.
+`API_HASH = xxxx`, where xxxx is your telegram api hash.
+`SD_URL = xxxx`, where xxxx is your sd api url.

To get the API_ID and API_HASH, you need to create a new application on Telegram's developer website. Here are the steps:

1. Open a browser, visit https://my.telegram.org and log in with your Telegram account.
2. Click on "API development tools".
3. Fill out the form to create a new application. You can enter any valid details you want.
4. After you've created the application, you'll be given the API_ID and API_HASH.
5. Once you have these, add them to your `.env` file.

Now you can run the bot:

`python main.py`
localAPIRun.py (new file, @@ -0,0 +1,170 @@)

from datetime import datetime
import urllib.request
import base64
import json
import time
import os

url = "pop-os.local"
webui_server_url = f'http://{url}:7860'

out_dir = 'api_out'
out_dir_t2i = os.path.join(out_dir, 'txt2img')
out_dir_i2i = os.path.join(out_dir, 'img2img')
os.makedirs(out_dir_t2i, exist_ok=True)
os.makedirs(out_dir_i2i, exist_ok=True)


def timestamp():
    return datetime.fromtimestamp(time.time()).strftime("%Y%m%d-%H%M%S")


def encode_file_to_base64(path):
    with open(path, 'rb') as file:
        return base64.b64encode(file.read()).decode('utf-8')


def decode_and_save_base64(base64_str, save_path):
    with open(save_path, "wb") as file:
        file.write(base64.b64decode(base64_str))


def call_api(api_endpoint, **payload):
    data = json.dumps(payload).encode('utf-8')
    request = urllib.request.Request(
        f'{webui_server_url}/{api_endpoint}',
        headers={'Content-Type': 'application/json'},
        data=data,
    )
    response = urllib.request.urlopen(request)
    return json.loads(response.read().decode('utf-8'))


def call_txt2img_api(**payload):
    response = call_api('sdapi/v1/txt2img', **payload)
    for index, image in enumerate(response.get('images')):
        save_path = os.path.join(out_dir_t2i, f'txt2img-{timestamp()}-{index}.png')
        decode_and_save_base64(image, save_path)


def call_img2img_api(**payload):
    response = call_api('sdapi/v1/img2img', **payload)
    for index, image in enumerate(response.get('images')):
        save_path = os.path.join(out_dir_i2i, f'img2img-{timestamp()}-{index}.png')
        decode_and_save_base64(image, save_path)


if __name__ == '__main__':
    payload = {
        "prompt": "masterpiece, (best quality:1.1), 1girl <lora:lora_model:1>",  # extra networks also in prompts
        "negative_prompt": "",
        "seed": 1,
        "steps": 20,
        "width": 512,
        "height": 512,
        "cfg_scale": 7,
        "sampler_name": "DPM++ SDE Karras",
        "n_iter": 1,
        "batch_size": 1,

        # example args for x/y/z plot
        # steps 4,"20,30"
        # denoising==22
        # S/R 7,"X,united states,china",
        "script_args": [
            4,
            "20,30,40",
            [],
            0,
            "",
            [],
            0,
            "",
            [],
            True,
            False,
            False,
            False,
            False,
            False,
            False,
            0,
            False
        ],
        "script_name": "x/y/z plot",

        # example args for Refiner and ControlNet
        # "alwayson_scripts": {
        #     "ControlNet": {
        #         "args": [
        #             {
        #                 "batch_images": "",
        #                 "control_mode": "Balanced",
        #                 "enabled": True,
        #                 "guidance_end": 1,
        #                 "guidance_start": 0,
        #                 "image": {
        #                     "image": encode_file_to_base64(r"B:\path\to\control\img.png"),
        #                     "mask": None  # base64, None when not needed
        #                 },
        #                 "input_mode": "simple",
        #                 "is_ui": True,
        #                 "loopback": False,
        #                 "low_vram": False,
        #                 "model": "control_v11p_sd15_canny [d14c016b]",
        #                 "module": "canny",
        #                 "output_dir": "",
        #                 "pixel_perfect": False,
        #                 "processor_res": 512,
        #                 "resize_mode": "Crop and Resize",
        #                 "threshold_a": 100,
        #                 "threshold_b": 200,
        #                 "weight": 1
        #             }
        #         ]
        #     },
        #     "Refiner": {
        #         "args": [
        #             True,
        #             "sd_xl_refiner_1.0",
        #             0.5
        #         ]
        #     }
        # },
        # "enable_hr": True,
        # "hr_upscaler": "R-ESRGAN 4x+ Anime6B",
        # "hr_scale": 2,
        # "denoising_strength": 0.5,
        # "styles": ['style 1', 'style 2'],
        # "override_settings": {
        #     'sd_model_checkpoint': "sd_xl_base_1.0",  # this can be used to switch the sd model
        # },
    }
    call_txt2img_api(**payload)

    init_images = [
        encode_file_to_base64(r"../stable-diffusion-webui/output/img2img-images/2024-05-15/00012-357584826.png"),
        # encode_file_to_base64(r"B:\path\to\img_2.png"),
        # "https://image.can/also/be/a/http/url.png",
    ]

    batch_size = 2
    payload = {
        "prompt": "1girl, blue hair",
        "seed": 1,
        "steps": 20,
        "width": 512,
        "height": 512,
        "denoising_strength": 0.5,
        "n_iter": 1,
        "init_images": init_images,
        "batch_size": batch_size if len(init_images) == 1 else len(init_images),
        # "mask": encode_file_to_base64(r"B:\path\to\mask.png")
    }
    # if len(init_images) > 1 then batch_size should be == len(init_images)
    # else if len(init_images) == 1 then batch_size can be any int >= 1
    call_img2img_api(**payload)

# there exists a useful extension that converts webui calls into an API payload,
# particularly useful when you wish to set up arguments of extensions and scripts
# https://github.com/huchenlei/sd-webui-api-payload-display
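The closing comment in localAPIRun.py about `batch_size` encodes a small rule that can be written (and tested) on its own. `effective_batch_size` is an illustrative helper, not part of the script:

```python
def effective_batch_size(init_images, requested=1):
    # one init image: any requested batch size >= 1 is fine;
    # several init images: the batch size must equal their count
    return requested if len(init_images) == 1 else len(init_images)
```

This is exactly the conditional used inline in the img2img payload above.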
505
main.py
505
main.py
@@ -1,239 +1,424 @@
|
||||
import json
|
||||
import requests
|
||||
import io
|
||||
import re
|
||||
import os
|
||||
import re
|
||||
import io
|
||||
import uuid
|
||||
import base64
|
||||
import json
|
||||
import requests
|
||||
from datetime import datetime
|
||||
from PIL import Image, PngImagePlugin
|
||||
from pyrogram import Client, filters
|
||||
from pyrogram.types import *
|
||||
from pyrogram.types import InlineKeyboardButton, InlineKeyboardMarkup
|
||||
from dotenv import load_dotenv
|
||||
|
||||
# Done! Congratulations on your new bot. You will find it at
|
||||
# t.me/gootmornbot
|
||||
# You can now add a description, about section and profile picture for your bot, see /help for a list of commands. By the way, when you've finished creating your cool bot, ping our Bot Support if you want a better username for it. Just make sure the bot is fully operational before you do this.
|
||||
|
||||
# Use this token to access the HTTP API:
|
||||
# Keep your token secure and store it safely, it can be used by anyone to control your bot.
|
||||
|
||||
# For a description of the Bot API, see this page: https://core.telegram.org/bots/api
|
||||
|
||||
# Load environment variables
|
||||
load_dotenv()
|
||||
API_ID = os.environ.get("API_ID", None)
|
||||
API_HASH = os.environ.get("API_HASH", None)
|
||||
TOKEN = os.environ.get("TOKEN", None)
|
||||
SD_URL = os.environ.get("SD_URL", None)
|
||||
print(SD_URL)
|
||||
API_ID = os.environ.get("API_ID")
|
||||
API_HASH = os.environ.get("API_HASH")
|
||||
TOKEN = os.environ.get("TOKEN_givemtxt2img")
|
||||
SD_URL = os.environ.get("SD_URL")
|
||||
|
||||
# Ensure all required environment variables are loaded
|
||||
if not all([API_ID, API_HASH, TOKEN, SD_URL]):
|
||||
raise EnvironmentError("Missing one or more required environment variables: API_ID, API_HASH, TOKEN, SD_URL")
|
||||
|
||||
app = Client("stable", api_id=API_ID, api_hash=API_HASH, bot_token=TOKEN)
|
||||
IMAGE_PATH = 'images'
|
||||
|
||||
# default params
|
||||
steps_value_default = 40
|
||||
# Ensure IMAGE_PATH directory exists
|
||||
os.makedirs(IMAGE_PATH, exist_ok=True)
|
||||
|
||||
|
||||
def parse_input(input_string):
|
||||
default_payload = {
|
||||
def get_current_model_name():
|
||||
try:
|
||||
response = requests.get(f"{SD_URL}/sdapi/v1/options")
|
||||
response.raise_for_status()
|
||||
options = response.json()
|
||||
current_model_name = options.get("sd_model_checkpoint", "Unknown")
|
||||
return current_model_name
|
||||
except requests.RequestException as e:
|
||||
print(f"API call failed: {e}")
|
||||
return None
|
||||
|
||||
# Fetch the current model name at the start
|
||||
current_model_name = get_current_model_name()
|
||||
if current_model_name:
|
||||
print(f"Current model name: {current_model_name}")
|
||||
else:
|
||||
print("Failed to fetch the current model name.")
|
||||
|
||||
def encode_file_to_base64(path):
|
||||
with open(path, 'rb') as file:
|
||||
return base64.b64encode(file.read()).decode('utf-8')
|
||||
|
||||
def decode_and_save_base64(base64_str, save_path):
|
||||
with open(save_path, "wb") as file:
|
||||
file.write(base64.b64decode(base64_str))
|
||||
|
||||
# Set default payload values
|
||||
default_payload = {
|
||||
"prompt": "",
|
||||
"negative_prompt": "",
|
||||
"controlnet_input_image": [],
|
||||
"controlnet_mask": [],
|
||||
"controlnet_module": "",
|
||||
"controlnet_model": "",
|
||||
"controlnet_weight": 1,
|
||||
"controlnet_resize_mode": "Scale to Fit (Inner Fit)",
|
||||
"controlnet_lowvram": False,
|
||||
"controlnet_processor_res": 64,
|
||||
"controlnet_threshold_a": 64,
|
||||
"controlnet_threshold_b": 64,
|
||||
"controlnet_guidance": 1,
|
||||
"controlnet_guessmode": True,
|
||||
"seed": -1, # Random seed
|
||||
"negative_prompt": "extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs",
|
||||
"enable_hr": False,
|
||||
"denoising_strength": 0.5,
|
||||
"hr_scale": 1.5,
|
||||
"hr_upscale": "Latent",
|
||||
"seed": -1,
|
||||
"subseed": -1,
|
||||
"subseed_strength": -1,
|
||||
"sampler_index": "",
|
||||
"Sampler": "DPM++ SDE Karras",
|
||||
"denoising_strength": 0.35,
|
||||
"batch_size": 1,
|
||||
"n_iter": 1,
|
||||
"steps": 20,
|
||||
"steps": 35,
|
||||
"cfg_scale": 7,
|
||||
"width": 512,
|
||||
"height": 512,
|
||||
"restore_faces": True,
|
||||
"restore_faces": False,
|
||||
"override_settings": {},
|
||||
"override_settings_restore_afterwards": True,
|
||||
}
|
||||
# Initialize an empty payload with the 'prompt' key
|
||||
payload = {"prompt": ""}
|
||||
}
|
||||
|
||||
# Model-specific embeddings for negative prompts
|
||||
model_negative_prompts = {
|
||||
"coloringPage_v10": "fake",
|
||||
"Anything-Diffusion": "",
|
||||
"Deliberate": "",
|
||||
"Dreamshaper": "",
|
||||
"DreamShaperXL_Lightning": "",
|
||||
"realisticVisionV60B1_v51VAE": "realisticvision-negative-embedding",
|
||||
"v1-5-pruned-emaonly": "",
|
||||
"Juggernaut-XL_v9_RunDiffusionPhoto_v2": "bad eyes, cgi, airbrushed, plastic, watermark"
|
||||
}
|
||||
|
||||
def update_negative_prompt(model_name):
|
||||
"""Update the negative prompt for a given model."""
|
||||
if model_name in model_negative_prompts:
|
||||
suffix = model_negative_prompts[model_name]
|
||||
default_payload["negative_prompt"] += f", {suffix}"
|
||||
print(f"Updated negative prompt to: {default_payload['negative_prompt']}")
|
||||
|
||||
def update_resolution(model_name):
|
||||
"""Update resolution based on the selected model."""
|
||||
if model_name == "Juggernaut-XL_v9_RunDiffusionPhoto_v2":
|
||||
default_payload["width"] = 832
|
||||
default_payload["height"] = 1216
|
||||
else:
|
||||
default_payload["width"] = 512
|
||||
default_payload["height"] = 512
|
||||
print(f"Updated resolution to {default_payload['width']}x{default_payload['height']}")
|
||||
|
||||
def update_steps(model_name):
|
||||
"""Update CFG scale based on the selected model."""
|
||||
if model_name == "Juggernaut-XL_v9_RunDiffusionPhoto_v2":
|
||||
default_payload["steps"] = 15
|
||||
else:
|
||||
default_payload["steps"] = 35
|
||||
print(f"Updated steps to {default_payload['cfg_scale']}")
|
||||
|
||||
def update_cfg_scale(model_name):
|
||||
"""Update CFG scale based on the selected model."""
|
||||
if model_name == "Juggernaut-XL_v9_RunDiffusionPhoto_v2":
|
||||
default_payload["cfg_scale"] = 2.5
|
||||
else:
|
||||
default_payload["cfg_scale"] = 7
|
||||
print(f"Updated CFG scale to {default_payload['cfg_scale']}")
|
||||
|
||||
# Update configurations based on the current model name
|
||||
if current_model_name:
|
||||
update_negative_prompt(current_model_name)
|
||||
update_resolution(current_model_name)
|
||||
update_cfg_scale(current_model_name)
|
||||
update_steps(current_model_name)
|
||||
else:
|
||||
print("Failed to update configurations as the current model name is not available.")
|
||||
|
||||
def parse_input(input_string):
|
||||
"""Parse the input string and create a payload."""
|
||||
payload = default_payload.copy()
|
||||
prompt = []
|
||||
include_info = "info:" in input_string
|
||||
input_string = input_string.replace("info:", "").strip()
|
||||
|
||||
# Find all occurrences of keys (words ending with a colon)
|
||||
matches = re.finditer(r"(\w+):", input_string)
|
||||
last_index = 0
|
||||
|
||||
# Iterate over the found keys
|
||||
script_args = [0, "", [], 0, "", [], 0, "", [], True, False, False, False, False, False, False, 0, False]
|
||||
script_name = None
|
||||
|
||||
slot_mapping = {0: (0, 1), 1: (3, 4), 2: (6, 7)}
|
||||
slot_index = 0
|
||||
|
||||
for match in matches:
|
||||
key = match.group(1).lower() # Convert the key to lowercase
|
||||
key = match.group(1).lower()
|
||||
value_start_index = match.end()
|
||||
|
||||
# If there's text between the last key and the current key, add it to the prompt
|
||||
if last_index != match.start():
|
||||
prompt.append(input_string[last_index : match.start()].strip())
|
||||
prompt.append(input_string[last_index: match.start()].strip())
|
||||
last_index = value_start_index
|
||||
value_end_match = re.search(r"(?=\s+\w+:|$)", input_string[value_start_index:])
|
||||
if value_end_match:
|
||||
value_end_index = value_end_match.start() + value_start_index
|
||||
else:
|
||||
value_end_index = len(input_string)
|
||||
value = input_string[value_start_index: value_end_index].strip()
|
||||
if key == "ds":
|
||||
key = "denoising_strength"
|
||||
if key == "ng":
|
||||
key = "negative_prompt"
|
||||
if key == "cfg":
|
||||
key = "cfg_scale"
|
||||
|
||||
# Check if the key is in the default payload
|
||||
if key in default_payload:
|
||||
# Extract the value for the current key
|
||||
value_end_index = re.search(
|
||||
r"(?=\s+\w+:|$)", input_string[value_start_index:]
|
||||
).start()
|
||||
value = input_string[
|
||||
value_start_index : value_start_index + value_end_index
|
||||
].strip()
|
||||
|
||||
# Check if the default value for the key is an integer
|
||||
if isinstance(default_payload[key], int):
|
||||
# If the value is a valid integer, store it as an integer in the payload
|
||||
if value.isdigit():
|
||||
payload[key] = int(value)
|
||||
else:
|
||||
# If the default value for the key is not an integer, store the value as is in the payload
|
||||
payload[key] = value
|
||||
|
||||
last_index += value_end_index
|
||||
elif key in ["xsr", "xsteps", "xds", "xcfg", "nl", "ks", "rs"]:
|
||||
script_name = "x/y/z plot"
|
||||
if slot_index < 3:
|
||||
script_slot = slot_mapping[slot_index]
|
||||
if key == "xsr":
|
||||
script_args[script_slot[0]] = 7 # Enum value for xsr
|
||||
script_args[script_slot[1]] = value
|
||||
elif key == "xsteps":
|
||||
script_args[script_slot[0]] = 4 # Enum value for xsteps
|
||||
script_args[script_slot[1]] = value
|
||||
elif key == "xds":
|
||||
script_args[script_slot[0]] = 22 # Enum value for xds
|
||||
script_args[script_slot[1]] = value
|
||||
elif key == "xcfg":
|
||||
script_args[script_slot[0]] = 6 # Enum value for CFG Scale
|
||||
script_args[script_slot[1]] = value
|
||||
slot_index += 1
|
||||
elif key == "nl":
|
||||
script_args[9] = False # Draw legend
|
||||
elif key == "ks":
|
||||
script_args[10] = True # Keep sub images
|
||||
elif key == "rs":
|
||||
script_args[11] = True # Set random seed to sub images
|
||||
else:
|
||||
# If the key is not in the default payload, add it to the prompt
|
||||
prompt.append(f"{key}:")
|
||||
prompt.append(f"{key}:{value}")
|
||||
|
||||
# Join the prompt words and store it in the payload
|
||||
payload["prompt"] = " ".join(prompt)
|
||||
last_index = value_end_index
|
||||
|
||||
# If the prompt is empty, set the input string as the prompt
|
||||
payload["prompt"] = " ".join(prompt).strip()
|
||||
if not payload["prompt"]:
|
||||
payload["prompt"] = input_string.strip()
|
||||
|
||||
# Return the final payload
|
||||
return payload
|
||||
if script_name:
|
||||
payload["script_name"] = script_name
|
||||
payload["script_args"] = script_args
|
||||
print(f"Generated payload: {payload}")
|
||||
return payload, include_info
|
||||
|
||||
def create_caption(payload, user_name, user_id, info, include_info):
|
||||
"""Create a caption for the generated image."""
|
||||
caption = f"**[{user_name}](tg://user?id={user_id})**\n\n"
|
||||
prompt = payload["prompt"]
|
||||
|
||||
@app.on_message(filters.command(["draw"]))
|
||||
def draw(client, message):
|
||||
msgs = message.text.split(" ", 1)
|
||||
if len(msgs) == 1:
|
||||
message.reply_text(
|
||||
"Format :\n/draw < text to image >\nng: < negative (optional) >\nsteps: < steps value (1-70, optional) >"
|
||||
)
|
||||
return
|
||||
seed_pattern = r"Seed: (\d+)"
|
||||
match = re.search(seed_pattern, info)
|
||||
if match:
|
||||
seed_value = match.group(1)
|
||||
caption += f"**{seed_value}**\n"
|
||||
else:
|
||||
print("Seed value not found in the info string.")
|
||||
|
||||
payload = parse_input(msgs[1])
|
||||
print(payload)
|
||||
caption += f"**{prompt}**\n"
|
||||
|
||||
# The rest of the draw function remains unchanged
|
||||
if include_info:
|
||||
caption += f"\nFull Payload:\n`{payload}`\n"
|
||||
|
||||
K = message.reply_text("Please Wait 10-15 Second")
|
||||
r = requests.post(url=f"{SD_URL}/sdapi/v1/txt2img", json=payload).json()
|
||||
if len(caption) > 1024:
|
||||
caption = caption[:1021] + "..."
|
||||
|
||||
def genr():
|
||||
return caption
|
||||
|
||||
def call_api(api_endpoint, payload):
|
||||
"""Call the API with the provided payload."""
|
||||
try:
|
||||
response = requests.post(f'{SD_URL}/{api_endpoint}', json=payload)
|
||||
response.raise_for_status()
|
||||
return response.json()
|
||||
except requests.RequestException as e:
|
||||
print(f"API call failed: {e}")
|
||||
return {"error": str(e)}
|
||||
|
||||
def process_images(images, user_id, user_name):
|
||||
"""Process and save generated images."""
|
||||
def generate_unique_name():
|
||||
unique_id = str(uuid.uuid4())[:7]
|
||||
return f"{message.from_user.first_name}-{unique_id}"
|
||||
date = datetime.now().strftime("%Y-%m-%d-%H-%M")
|
||||
return f"{date}-{user_name}-{unique_id}"
|
||||
|
||||
word = genr()
|
||||
word = generate_unique_name()
|
||||
|
||||
for i in r["images"]:
|
||||
for i in images:
|
||||
image = Image.open(io.BytesIO(base64.b64decode(i.split(",", 1)[0])))
|
||||
|
||||
png_payload = {"image": "data:image/png;base64," + i}
|
||||
response2 = requests.post(url=f"{SD_URL}/sdapi/v1/png-info", json=png_payload)
|
||||
response2 = requests.post(f"{SD_URL}/sdapi/v1/png-info", json=png_payload)
|
||||
response2.raise_for_status()
|
||||
|
||||
# Write response2 json next to the image
|
||||
with open(f"{IMAGE_PATH}/{word}.json", "w") as json_file:
|
||||
json.dump(response2.json(), json_file)
|
||||
|
||||
pnginfo = PngImagePlugin.PngInfo()
|
||||
pnginfo.add_text("parameters", response2.json().get("info"))
|
||||
image.save(f"{word}.png", pnginfo=pnginfo)
|
||||
image.save(f"{IMAGE_PATH}/{word}.png", pnginfo=pnginfo)
|
||||
|
||||
# Add a flag to check if the user provided a seed value
|
||||
user_provided_seed = "seed" in payload
|
||||
# Save as JPG
|
||||
jpg_path = f"{IMAGE_PATH}/{word}.jpg"
|
||||
image.convert("RGB").save(jpg_path, "JPEG")
|
||||
|
||||
info_dict = response2.json()
|
||||
seed_value = info_dict['info'].split(", Seed: ")[1].split(",")[0]
|
||||
# print(seed_value)
|
||||
return word, response2.json().get("info")
|
||||
|
||||
caption = f"**[{message.from_user.first_name}-Kun](tg://user?id={message.from_user.id})**\n\n"
|
||||
for key, value in payload.items():
|
||||
caption += f"{key.capitalize()} - **{value}**\n"
|
||||
caption += f"Seed - **{seed_value}**\n"
|
||||
@app.on_message(filters.command(["draw"]))
|
||||
def draw(client, message):
|
||||
"""Handle /draw command to generate images from text prompts."""
|
||||
msgs = message.text.split(" ", 1)
|
||||
if len(msgs) == 1:
|
||||
message.reply_text("Format :\n/draw < text to image >\nng: < negative (optional) >\nsteps: < steps value (1-70, optional) >")
|
||||
return
|
||||
|
||||
message.reply_photo(
|
||||
photo=f"{word}.png",
|
||||
caption=caption,
|
||||
)
|
||||
payload, include_info = parse_input(msgs[1])
|
||||
|
||||
if "xds" in msgs[1].lower():
|
||||
message.reply_text("`xds` key cannot be used in the `/draw` command. Use `/img` instead.")
|
||||
return
|
||||
|
||||
# os.remove(f"{word}.png")
|
||||
K = message.reply_text("Please Wait 10-15 Seconds")
|
||||
r = call_api('sdapi/v1/txt2img', payload)
|
||||
|
||||
if r and "images" in r:
|
||||
for i in r["images"]:
|
||||
word, info = process_images([i], message.from_user.id, message.from_user.first_name)
|
||||
caption = create_caption(payload, message.from_user.first_name, message.from_user.id, info, include_info)
|
||||
message.reply_photo(photo=f"{IMAGE_PATH}/{word}.jpg", caption=caption)
|
||||
K.delete()
|
||||
else:
|
||||
error_message = r.get("error", "Failed to generate image. Please try again later.")
|
||||
message.reply_text(error_message)
|
||||
K.delete()
|
||||
|
||||
@app.on_message(filters.command(["img"]))
|
||||
def img2img(client, message):
|
||||
"""Handle /img command to generate images from existing images."""
|
||||
if not message.reply_to_message or not message.reply_to_message.photo:
|
||||
message.reply_text("Reply to an image with\n`/img < prompt > ds:0-1.0`\n\nds stands for `Denoising_strength` parameter. Set that low (like 0.2) if you just want to slightly change things. defaults to 0.35\n\nExample: `/img murder on the dance floor ds:0.2`")
|
||||
return
|
||||
|
||||
msgs = message.text.split(" ", 1)
|
||||
if len(msgs) == 1:
|
||||
message.reply_text("Don't FAIL in life")
|
||||
return
|
||||
|
||||
payload, include_info = parse_input(msgs[1])
|
||||
photo = message.reply_to_message.photo
|
||||
photo_file = app.download_media(photo)
|
||||
init_image = encode_file_to_base64(photo_file)
|
||||
os.remove(photo_file) # Clean up downloaded image file
|
||||
|
||||
payload["init_images"] = [init_image]
|
||||
|
||||
K = message.reply_text("Please Wait 10-15 Seconds")
|
||||
r = call_api('sdapi/v1/img2img', payload)
|
||||
|
||||
if r and "images" in r:
|
||||
for i in r["images"]:
|
||||
word, info = process_images([i], message.from_user.id, message.from_user.first_name)
|
||||
caption = create_caption(payload, message.from_user.first_name, message.from_user.id, info, include_info)
|
||||
message.reply_photo(photo=f"{IMAGE_PATH}/{word}.jpg", caption=caption)
|
||||
K.delete()
|
||||
else:
|
||||
error_message = r.get("error", "Failed to process image. Please try again later.")
|
||||
message.reply_text(error_message)
|
||||
K.delete()
|
||||
|
||||
@app.on_message(filters.command(["getmodels"]))
async def get_models(client, message):
    """Handle /getmodels command to list available models."""
    try:
        response = requests.get(f"{SD_URL}/sdapi/v1/sd-models")
        response.raise_for_status()
        models_json = response.json()

        # One button row per model; the callback data carries the model name
        buttons = [
            [InlineKeyboardButton(model["title"], callback_data=model["model_name"])]
            for model in models_json
        ]

        await message.reply_text(
            "Select a model [checkpoint] to use",
            reply_markup=InlineKeyboardMarkup(buttons),
        )
    except requests.RequestException as e:
        await message.reply_text(f"Failed to get models: {e}")

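The `/sdapi/v1/sd-models` endpoint returns a JSON array of model records; the handler only uses the `title` and `model_name` fields. A standalone sketch of how the button rows are derived from such a response — the sample entries below are illustrative, not real model names:

```python
# Hypothetical sample of the JSON shape returned by /sdapi/v1/sd-models;
# real entries carry additional fields (hash, filename, config) unused here.
models_json = [
    {"title": "v1-5-pruned-emaonly.safetensors [6ce0161689]", "model_name": "v1-5-pruned-emaonly"},
    {"title": "dreamshaper_8.safetensors [879db523c3]", "model_name": "dreamshaper_8"},
]

# Same shape as the handler's list comprehension, minus the pyrogram types:
# one row per model, each entry a (button label, callback data) pair.
button_rows = [[(m["title"], m["model_name"])] for m in models_json]
```
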
@app.on_callback_query()
async def process_callback(client, callback_query):
    """Process model selection from callback queries."""
    sd_model_checkpoint = callback_query.data

    # sd_model_checkpoint needs to be set to the title from /sdapi/v1/sd-models,
    # posted via /sdapi/v1/options
    options = {"sd_model_checkpoint": sd_model_checkpoint}

    try:
        response = requests.post(f"{SD_URL}/sdapi/v1/options", json=options)
        response.raise_for_status()

        update_negative_prompt(sd_model_checkpoint)
        update_resolution(sd_model_checkpoint)
        update_cfg_scale(sd_model_checkpoint)

        await callback_query.message.reply_text(f"Checkpoint set to {sd_model_checkpoint}")
    except requests.RequestException as e:
        await callback_query.message.reply_text(f"Failed to set checkpoint: {e}")
        print(f"Error setting checkpoint: {e}")


@app.on_message(filters.command(["start"], prefixes=["/", "!"]))
async def start(client, message):
    # Photo = "https://i.imgur.com/79hHVX6.png"
    buttons = [
        [
            InlineKeyboardButton(
                "Add to your group", url="https://t.me/gootmornbot?startgroup=true"
            )
        ]
    ]
    await message.reply_text(
        # photo=Photo,
        text="Hello!\nask me to imagine anything\n\n/draw text to image",
        reply_markup=InlineKeyboardMarkup(buttons),
    )

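The `update_negative_prompt`, `update_resolution`, and `update_cfg_scale` helpers called after a checkpoint switch are not part of this diff. A hypothetical sketch of the pattern they suggest — per-checkpoint defaults applied to the bot's current settings; the table contents and names here are assumptions, not the bot's actual values:

```python
# Hypothetical per-checkpoint defaults (illustrative values only)
MODEL_DEFAULTS = {
    "sdxl-model": {"width": 1024, "height": 1024, "cfg_scale": 7.0},
}
FALLBACK = {"width": 512, "height": 512, "cfg_scale": 7.0}

# Mutable bot-wide settings, updated when the checkpoint changes
current_settings = dict(FALLBACK)


def update_resolution(model_name: str) -> None:
    defaults = MODEL_DEFAULTS.get(model_name, FALLBACK)
    current_settings["width"] = defaults["width"]
    current_settings["height"] = defaults["height"]


def update_cfg_scale(model_name: str) -> None:
    current_settings["cfg_scale"] = MODEL_DEFAULTS.get(model_name, FALLBACK)["cfg_scale"]
```
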
@app.on_message(filters.command(["info_sd_bot"]))
async def info(client, message):
    """Provide information about the bot's commands and options."""
    await message.reply_text("""
**Stable Diffusion Bot Commands and Options:**

1. **/draw <prompt> [options]**
   - Generates an image based on the provided text prompt.
   - **Options:**
     - `ng:<negative_prompt>` - Add a negative prompt to avoid specific features.
     - `steps:<value>` - Number of steps for generation (1-70).
     - `ds:<value>` - Denoising strength (0-1.0).
     - `cfg:<value>` - CFG scale (1-30).
     - `width:<value>` - Width of the generated image.
     - `height:<value>` - Height of the generated image.
     - `info:` - Include full payload information in the caption.

   **Example:** `/draw beautiful sunset ng:ugly steps:30 ds:0.5 info:`

2. **/img <prompt> [options]**
   - Generates an image based on an existing image and the provided text prompt.
   - **Options:**
     - `ds:<value>` - Denoising strength (0-1.0).
     - `steps:<value>` - Number of steps for generation (1-70).
     - `cfg:<value>` - CFG scale (1-30).
     - `width:<value>` - Width of the generated image.
     - `height:<value>` - Height of the generated image.
     - `info:` - Include full payload information in the caption.

   **Example:** Reply to an image with `/img modern art ds:0.2 info:`

3. **/getmodels**
   - Retrieves and lists all available models for the user to select.
   - The user can then choose a model to set as the current model for image generation.

4. **/info_sd_bot**
   - Provides detailed information about the bot's commands and options.

**Additional Options for Advanced Users:**
- **x/y/z plot options** for advanced generation:
  - `xsr:<value>` - Search and replace text/emoji in the prompt.
  - `xsteps:<value>` - Steps value for the x/y/z plot.
  - `xds:<value>` - Denoising strength for the x/y/z plot.
  - `xcfg:<value>` - CFG scale for the x/y/z plot.
  - `nl:` - No legend in the x/y/z plot.
  - `ks:` - Keep sub-images in the x/y/z plot.
  - `rs:` - Set a random seed for sub-images in the x/y/z plot.

**Notes:**
- Use lower step values (10-20) for large x/y/z plots to avoid long processing times.
- Use the `info:` option to include full payload details in the caption of generated images for better troubleshooting and analysis.

**Example for Advanced Users:** `/draw beautiful landscape xsteps:10 xds:0.5 xcfg:7 nl: ks: rs: info:`

For the bot code visit: [Stable Diffusion Bot](https://git.telavivmakers.space/ro/stable-diffusion-telegram-bot)
For more details, visit the [Stable Diffusion Wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#xyz-plot).

Enjoy creating with Stable Diffusion Bot!
""", disable_web_page_preview=True)

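The option tokens listed in the help text (`ng:`, `steps:`, `ds:`, `cfg:`, `width:`, `height:`, `info:`) are consumed by the bot's `parse_input` helper, which is outside this diff. A minimal sketch of how such token parsing could work — the key mapping and conversions are assumptions, and the real helper also validates ranges and handles the x/y/z plot options:

```python
def parse_input(text: str):
    """Split '<prompt words> key:value ...' into an API payload dict.

    Sketch only: bare `info:` toggles full-payload captions, known
    `key:value` tokens become payload fields, everything else is prompt.
    """
    # token prefix -> (payload key, value converter)
    known = {
        "ng": ("negative_prompt", str),
        "steps": ("steps", int),
        "ds": ("denoising_strength", float),
        "cfg": ("cfg_scale", float),
        "width": ("width", int),
        "height": ("height", int),
    }
    payload, include_info, prompt_words = {}, False, []
    for token in text.split():
        key, sep, value = token.partition(":")
        if key == "info" and sep:
            include_info = True
        elif key in known and sep and value:
            name, conv = known[key]
            payload[name] = conv(value)
        else:
            prompt_words.append(token)  # anything else is part of the prompt
    payload["prompt"] = " ".join(prompt_words)
    return payload, include_info
```

Note that this single-token scheme only captures a one-word `ng:` value; the bot's actual parser may handle multi-word negative prompts differently.
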
app.run()

@@ -1,4 +1,4 @@
-pyrogram==1.4.16
+pyrogram
 requests
 tgcrypto==1.2.2
 Pillow