Compare commits


2 Commits

3 changed files with 43 additions and 19 deletions


````diff
@@ -75,15 +75,15 @@ To install Devika, follow these steps:
 1. Clone the Devika repository:
    ```bash
-   git clone https://github.com/stitionai/devika.git
+   git clone https://git.telavivmakers.space/ro/prompt2code.git
    ```
 2. Navigate to the project directory:
    ```bash
-   cd devika
+   cd prompt2code
    ```
 3. Create a virtual environment and install the required dependencies (you can use any virtual environment manager):
    ```bash
-   uv venv
+   python -m venv venv
    # On macOS and Linux.
    source .venv/bin/activate
````
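Taken together, the updated steps in this hunk amount to the flow below (a sketch of the post-change install, not taken verbatim from the repo). Note one likely inconsistency: `python -m venv venv` creates `venv/`, while the unchanged activate line in the diff still references `.venv/`, so one of the two paths probably needs adjusting.

```shell
git clone https://git.telavivmakers.space/ro/prompt2code.git
cd prompt2code
python -m venv venv
# On macOS and Linux. The diff's unchanged line activates
# .venv/bin/activate, which does not match the venv/ directory
# created above; the path here follows the venv created above.
source venv/bin/activate
```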

challange.md (new file, 22 lines)

@@ -0,0 +1,22 @@
# Challenge: Generate img2vid using GenAI only!
To generate a video from images using GenAI, we must first set up the Devika IDE on the TAMI server and then fix it so it can generate the code for the img2vid task.
Tech specs:
Find the Tesla-P40 spec on the TAMI server using the following command:
```bash
nvidia-smi
```
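For a machine-readable version of the same GPU information, `nvidia-smi` also offers a query mode (a sketch; the exact fields available depend on the installed driver version):

```shell
# Print the GPU name and total memory as CSV, e.g. for the Tesla P40.
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv
```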
Steps:
1. Install and set up Devika on the TAMI computer with llama
2. How does Devika work under the hood?
- Is it using the same LLM thread or is it different for each iteration?
- How does it generate the plan?
- Fix: the bug that saves the file names wrapped with `` !!!
- Add logs to all files
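The backtick bug above suggests the model's output wraps file names in backticks before they are written to disk. A minimal fix might look like this (a sketch under that assumption; `clean_filename` is a hypothetical helper, not a function from the Devika codebase):

```python
def clean_filename(name: str) -> str:
    # Strip whitespace and any surrounding backticks that an LLM
    # may leave around a file name, e.g. "`main.py`" -> "main.py".
    return name.strip().strip("`").strip()

print(clean_filename("`main.py`"))  # -> main.py
```

Applying this to every file name just before it is saved would keep the rest of the pipeline unchanged.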


```diff
@@ -1,27 +1,29 @@
 version: "3.9"
 services:
-  ollama-service:
-    image: ollama/ollama:latest
-    expose:
-      - 11434
-    ports:
-      - 11434:11434
-    healthcheck:
-      test: ["CMD-SHELL", "curl -f http://localhost:11434/ || exit 1"]
-      interval: 5s
-      timeout: 30s
-      retries: 5
-      start_period: 30s
-    networks:
-      - devika-subnetwork
+  # ollama is running locally
+  # ollama-service:
+  #   image: ollama/ollama:latest
+  #   expose:
+  #     - 11434
+  #   ports:
+  #     - 11434:11434
+  #   healthcheck:
+  #     test: ["CMD-SHELL", "curl -f http://localhost:11434/ || exit 1"]
+  #     interval: 5s
+  #     timeout: 30s
+  #     retries: 5
+  #     start_period: 30s
+  #   networks:
+  #     - devika-subnetwork
   devika-backend-engine:
     build:
       context: .
       dockerfile: devika.dockerfile
-    depends_on:
-      - ollama-service
+    # ollama is running locally
+    # depends_on:
+    #   - ollama-service
     expose:
       - 1337
     ports:
```
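With the `ollama-service` container removed, the backend container still needs a route to the ollama instance on the host, since `localhost` inside the container no longer reaches it. One common pattern is Docker's `host-gateway` mapping (a sketch; the `OLLAMA_HOST` variable name is an assumption about how the backend is configured, not taken from the repo):

```yaml
  devika-backend-engine:
    extra_hosts:
      - "host.docker.internal:host-gateway"  # make the host reachable on Linux
    environment:
      - OLLAMA_HOST=http://host.docker.internal:11434  # host's ollama port
```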