tami-p40 2024-05-18 13:18:58 +03:00
parent 1aacd87547
commit 7991f74a39
2 changed files with 15 additions and 7 deletions


@@ -1,10 +1,9 @@
 # AI Powered Art in a Telegram Bot!
-this is a txt2img bot to converse with SDweb bot [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API) running on tami telegram channel
+this is a txt2img/img2img bot to converse with SDweb bot [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API) running on tami telegram channel
 ## How to
 ### txt2img
-supported invocation:
 `/draw <text>` - send prompt text to the bot and it will draw an image
 you can add `negative_prompt` using `ng: <text>`
 you can add `denoised intermediate steps` using `steps: <text>`
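For context on the `/draw` syntax documented in the hunk above, the sketch below shows one way the `ng:` and `steps:` suffixes could map onto the webui txt2img payload fields (`prompt`, `negative_prompt`, `steps`). The helper name and parsing rules are assumptions for illustration, not the bot's actual code.

```python
# Illustrative sketch only: parse "/draw <text> ng: <text> steps: <n>"
# into AUTOMATIC1111 txt2img payload fields. Not the bot's real parser.
import re

def parse_draw_command(text: str) -> dict:
    payload = {"steps": 40}  # default step count per the README

    steps_match = re.search(r"steps:\s*(\d+)", text)
    if steps_match:
        # README: the accepted range is hardcoded to 1-70
        payload["steps"] = max(1, min(70, int(steps_match.group(1))))
        text = text[:steps_match.start()] + text[steps_match.end():]

    if "ng:" in text:
        # everything after "ng:" is treated as the negative prompt
        text, negative = text.split("ng:", 1)
        payload["negative_prompt"] = negative.strip()

    payload["prompt"] = text.replace("/draw", "", 1).strip()
    return payload
```

For example, `parse_draw_command("/draw a cat ng: blurry steps: 20")` would yield `{"steps": 20, "negative_prompt": "blurry", "prompt": "a cat"}`.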
@@ -37,10 +36,17 @@ to change the model use:
 - note1: Anything after ng will be considered as negative prompt, a.k.a. things you do not want to see in your diffusion!
 - note2: on [negative_prompt](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Negative-prompt) (aka ng):
 this is a bit of a black art. i took the recommended defaults for the `Deliberate` model from this fun [alt-model spreadsheet](https://docs.google.com/spreadsheets/d/1Q0bYKRfVOTUHQbUsIISCztpdZXzfo9kOoAy17Qhz3hI/edit#gid=797387129).
-~~and you (currntly) can only ADD to it, not replace.~~
 - note3: on `steps` - a step of 1 will generate only the first "step" of bot hallucinations. the default is 40. higher will take longer and give a "better" image. range is hardcoded 1-70.
 see ![video](https://user-images.githubusercontent.com/57876960/212490617-f0444799-50e5-485e-bc5d-9c24a9146d38.mp4)
+### img2img
+`/img <prompt> ds:<0.0-1.0>` - reply to an image with prompt text and it will draw a new image based on it
+you can add `denoising_strength` using `ds:<float>`
+set it low (like 0.2) if you just want to slightly change things. defaults to 0.4
+basically anything the `/controlnet/img2img` API payload supports
 ## Setup
 Install requirements using venv
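The hunk above documents the new `/img` command; a minimal sketch of the kind of request it implies is shown below, using the stock webui img2img endpoint and the `denoising_strength` knob the README exposes as `ds:`. The URL, function name, and exact payload are assumptions for illustration; the bot itself may go through the controlnet variant the README mentions.

```python
# Illustrative sketch only: send an img2img request to a local webui instance.
# Endpoint and field names are the stock AUTOMATIC1111 ones.
import base64
import requests

API_URL = "http://127.0.0.1:7860"  # assumed local webui address

def img2img(image_bytes: bytes, prompt: str, ds: float = 0.4) -> bytes:
    payload = {
        "init_images": [base64.b64encode(image_bytes).decode()],
        "prompt": prompt,
        "denoising_strength": ds,  # low (~0.2) = small changes; README default 0.4
    }
    r = requests.post(f"{API_URL}/sdapi/v1/img2img", json=payload, timeout=300)
    r.raise_for_status()
    # the API returns a list of base64-encoded result images
    return base64.b64decode(r.json()["images"][0])
```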


@@ -133,8 +133,10 @@ def draw(client, message):
 seed_value = info.split(", Seed: ")[1].split(",")[0]
 caption = f"**[{message.from_user.first_name}](tg://user?id={message.from_user.id})**\n\n"
-for key, value in payload.items():
-    caption += f"{key.capitalize()} - **{value}**\n"
+# for key, value in payload.items():
+#     caption += f"{key.capitalize()} - **{value}**\n"
+prompt = payload["prompt"]
+caption += f"**{prompt}**\n"
 caption += f"Seed - **{seed_value}**\n"
 # Ensure caption is within the allowed length
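After this change the caption no longer dumps the whole payload, only the prompt and the seed, which also helps with the "Ensure caption is within the allowed length" step that follows. A minimal sketch of the resulting caption logic, assuming Telegram's usual 1024-character media-caption limit and a hypothetical helper name:

```python
# Illustrative sketch only: build the simplified caption and trim it to
# Telegram's media-caption limit. Variable names mirror the diff above.
MAX_CAPTION = 1024  # Telegram caption limit (assumption about the enforced bound)

def build_caption(user_link: str, prompt: str, seed_value: str) -> str:
    caption = f"**{user_link}**\n\n"
    caption += f"**{prompt}**\n"
    caption += f"Seed - **{seed_value}**\n"
    return caption[:MAX_CAPTION]
```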