Compare commits

...

2 Commits

3 changed files with 43 additions and 19 deletions


@@ -75,15 +75,15 @@ To install Devika, follow these steps:
 1. Clone the Devika repository:
 ```bash
-git clone https://github.com/stitionai/devika.git
+git clone https://git.telavivmakers.space/ro/prompt2code.git
 ```
 2. Navigate to the project directory:
 ```bash
-cd devika
+cd prompt2code
 ```
 3. Create a virtual environment and install the required dependencies (you can use any virtual environment manager):
 ```bash
-uv venv
+python -m venv venv
 # On macOS and Linux.
 source .venv/bin/activate

challange.md Normal file

@@ -0,0 +1,22 @@
# Challenge: Generate img2vid using GenAI only!
To generate a video from images using GenAI, we must first set up the Devika IDE on the TAMI server and then fix it so it can generate the code for the img2vid task.
Tech specs:
Find the Tesla-P40 spec on the TAMI server using the following command:
```bash
nvidia-smi
```
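For the tech-spec step, `nvidia-smi` also has a machine-readable query mode (`--query-gpu` and `--format=csv` are standard NVIDIA driver tooling). A small sketch below parses a sample output line; the values are illustrative placeholders, not measured on the TAMI server:

```shell
# Sample line in the shape produced by:
#   nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
# (run on the TAMI server; the values here are placeholders)
sample='Tesla P40, 24576 MiB'
name=$(echo "$sample" | cut -d',' -f1)
vram=$(echo "$sample" | cut -d',' -f2 | tr -d ' ')
echo "GPU: $name, VRAM: $vram"
```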
Steps:
1. Install and set up Devika on the TAMI computer with llama
2. How does Devika work under the hood?
   - Is it using the same LLM thread, or a different one for each iteration?
   - How does it generate the plan?
   - Fix: the bug that saves file names wrapped in backticks!!!
   - Add logs to all files
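The backtick bug in the list above could be addressed with a small sanitizer applied before writing files. This is a hypothetical sketch, not Devika's actual code; the function name is invented for illustration:

```python
def strip_backticks(filename: str) -> str:
    """Remove markdown backticks an LLM may wrap around a file name,
    e.g. '`main.py`' -> 'main.py'. Hypothetical helper, not from Devika."""
    return filename.strip().strip('`').strip()


if __name__ == "__main__":
    print(strip_backticks("`main.py`"))
```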


@ -1,27 +1,29 @@
@@ -1,27 +1,29 @@
 version: "3.9"
 services:
-  ollama-service:
-    image: ollama/ollama:latest
-    expose:
-      - 11434
-    ports:
-      - 11434:11434
-    healthcheck:
-      test: ["CMD-SHELL", "curl -f http://localhost:11434/ || exit 1"]
-      interval: 5s
-      timeout: 30s
-      retries: 5
-      start_period: 30s
-    networks:
-      - devika-subnetwork
+  # ollama is running locally
+  # ollama-service:
+  #   image: ollama/ollama:latest
+  #   expose:
+  #     - 11434
+  #   ports:
+  #     - 11434:11434
+  #   healthcheck:
+  #     test: ["CMD-SHELL", "curl -f http://localhost:11434/ || exit 1"]
+  #     interval: 5s
+  #     timeout: 30s
+  #     retries: 5
+  #     start_period: 30s
+  #   networks:
+  #     - devika-subnetwork
   devika-backend-engine:
     build:
       context: .
       dockerfile: devika.dockerfile
-    depends_on:
-      - ollama-service
+    # ollama is running locally
+    # depends_on:
+    #   - ollama-service
     expose:
       - 1337
     ports:
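Since this change moves ollama out of Compose and onto the host, the backend container still needs a route to the host-local ollama. A hypothetical fragment is sketched below; the `OLLAMA_HOST` variable name and the wiring are assumptions, not taken from this diff, and `host-gateway` requires Docker 20.10+:

```yaml
# Hypothetical: point the backend at ollama running on the Docker host.
devika-backend-engine:
  environment:
    - OLLAMA_HOST=http://host.docker.internal:11434  # assumed config variable
  extra_hosts:
    - "host.docker.internal:host-gateway"  # resolves to the host on Linux
```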