Compare commits

No commits in common. "fd6c942ee7ebded28e6df518680efd126cc665c6" and "f0b94ab9bd8bc78d59ef8079ee862042d074b4e6" have entirely different histories.

3 changed files with 19 additions and 43 deletions

@@ -75,15 +75,15 @@ To install Devika, follow these steps:
 1. Clone the Devika repository:
 ```bash
-git clone https://git.telavivmakers.space/ro/prompt2code.git
+git clone https://github.com/stitionai/devika.git
 ```
 2. Navigate to the project directory:
 ```bash
-cd prompt2code
+cd devika
 ```
 3. Create a virtual environment and install the required dependencies (you can use any virtual environment manager):
 ```bash
-python -m venv venv
+uv venv
 # On macOS and Linux.
 source .venv/bin/activate

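As an aside on the switch from `python -m venv` to `uv` in the hunk above, a minimal end-to-end setup sketch could look like the following. The `requirements.txt` file name is an assumption (it is not shown in this diff); use whatever dependency file the repository actually provides.

```bash
# Create a virtual environment with uv and activate it (macOS/Linux).
uv venv
source .venv/bin/activate

# Install the project's dependencies into the environment.
# "requirements.txt" is assumed here, not taken from this diff.
uv pip install -r requirements.txt
```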
@@ -1,22 +0,0 @@
-# Challenge: Generate img2vid using GenAI only!
-To generate a video from images using GenAI, we must first set up the Devika IDE on the TAMI server and then fix it so that it can generate the code for the img2vid task.
-Tech specs:
-Find the Tesla P40 specs on the TAMI server using the following command:
-```bash
-nvidia-smi
-```
-Steps:
-1. Install and set up Devika on the TAMI computer with llama
-2. How does Devika work under the hood?
-   - Does it use the same LLM thread, or a different one for each iteration?
-   - How does it generate the plan?
-   - Fix: the bug that saves file names wrapped in `` !!!
-   - Add logs to all files

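The "Fix" item in the deleted notes above refers to generated file names being saved with literal backticks around them. As a hedged illustration only (not Devika's actual code; the variable names and example value are invented), stripping the wrapping characters before using the name as a path might look like:

```bash
# A file name as the model emitted it, wrapped in backticks (example value).
raw_name='`src/main.py`'

# Remove every backtick before treating the name as a path.
clean_name=$(printf '%s' "$raw_name" | tr -d '`')
echo "$clean_name"   # -> src/main.py
```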
@@ -1,29 +1,27 @@
 version: "3.9"
 services:
-  # ollama is running locally
-  # ollama-service:
-  #   image: ollama/ollama:latest
-  #   expose:
-  #     - 11434
-  #   ports:
-  #     - 11434:11434
-  #   healthcheck:
-  #     test: ["CMD-SHELL", "curl -f http://localhost:11434/ || exit 1"]
-  #     interval: 5s
-  #     timeout: 30s
-  #     retries: 5
-  #     start_period: 30s
-  #   networks:
-  #     - devika-subnetwork
+  ollama-service:
+    image: ollama/ollama:latest
+    expose:
+      - 11434
+    ports:
+      - 11434:11434
+    healthcheck:
+      test: ["CMD-SHELL", "curl -f http://localhost:11434/ || exit 1"]
+      interval: 5s
+      timeout: 30s
+      retries: 5
+      start_period: 30s
+    networks:
+      - devika-subnetwork
   devika-backend-engine:
     build:
       context: .
       dockerfile: devika.dockerfile
-    # ollama is running locally
-    # depends_on:
-    #   - ollama-service
+    depends_on:
+      - ollama-service
     expose:
       - 1337
     ports: