Compare commits


No commits in common. "fd6c942ee7ebded28e6df518680efd126cc665c6" and "f0b94ab9bd8bc78d59ef8079ee862042d074b4e6" have entirely different histories.

3 changed files with 19 additions and 43 deletions


@@ -75,15 +75,15 @@ To install Devika, follow these steps:
 1. Clone the Devika repository:
 ```bash
-git clone https://git.telavivmakers.space/ro/prompt2code.git
+git clone https://github.com/stitionai/devika.git
 ```
 2. Navigate to the project directory:
 ```bash
-cd prompt2code
+cd devika
 ```
 3. Create a virtual environment and install the required dependencies (you can use any virtual environment manager):
 ```bash
-python -m venv venv
+uv venv
 # On macOS and Linux.
 source .venv/bin/activate
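One point worth noting about the switch from `python -m venv venv` to `uv venv` above: the old command created the environment under `venv/`, yet the unchanged activation line right below it references `.venv/bin/activate`, so the old instructions never activated the environment they created. `uv venv` writes to `.venv/` by default, which makes the activation path match. A minimal sketch of the equivalent stdlib-only fix (assuming no `uv` is installed):

```shell
# Create the environment under .venv/ so the existing activation line works.
python3 -m venv .venv
# On macOS and Linux.
source .venv/bin/activate
# The interpreter now resolves inside the environment.
python -c 'import sys; print(sys.prefix)'
```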


@@ -1,22 +0,0 @@
-# Challenge: Generate img2vid using GenAI only!
-To generate a video from images using GenAI, we must first set up the Devika IDE on the TAMI server and then fix it so it can generate the code for the img2vid task.
-Tech specs:
-Find the Tesla P40 spec on the TAMI server using the following command:
-```bash
-nvidia-smi
-```
-Steps:
-1. Install and set up Devika on the TAMI computer with llama
-2. How does Devika work under the hood?
-   - Is it using the same LLM thread, or is it different for each iteration?
-   - How does it generate the plan?
-   - Fix: the bug that saves the file names wrapped with `` !!!
-   - Add logs to all files
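The backtick bug flagged in the checklist above (file names saved wrapped in `` ` `` characters) can be sketched as follows. This is illustrative only: the diff does not show Devika's file-saving code, and `clean_filename` is a hypothetical helper, not a function from the project.

```shell
# Illustrative only: models often emit file names wrapped in backticks,
# e.g. `src/main.py`. Stripping the backticks before saving is one way
# to fix the bug described in the checklist.
clean_filename() {
  printf '%s\n' "$1" | tr -d '`'
}

clean_filename '`src/main.py`'   # → src/main.py
```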


@@ -1,29 +1,27 @@
 version: "3.9"
 services:
-  # ollama is running locally
-  # ollama-service:
-  #   image: ollama/ollama:latest
-  #   expose:
-  #     - 11434
-  #   ports:
-  #     - 11434:11434
-  #   healthcheck:
-  #     test: ["CMD-SHELL", "curl -f http://localhost:11434/ || exit 1"]
-  #     interval: 5s
-  #     timeout: 30s
-  #     retries: 5
-  #     start_period: 30s
-  #   networks:
-  #     - devika-subnetwork
+  ollama-service:
+    image: ollama/ollama:latest
+    expose:
+      - 11434
+    ports:
+      - 11434:11434
+    healthcheck:
+      test: ["CMD-SHELL", "curl -f http://localhost:11434/ || exit 1"]
+      interval: 5s
+      timeout: 30s
+      retries: 5
+      start_period: 30s
+    networks:
+      - devika-subnetwork
   devika-backend-engine:
     build:
       context: .
       dockerfile: devika.dockerfile
-    # ollama is running locally
-    # depends_on:
-    #   - ollama-service
+    depends_on:
+      - ollama-service
     expose:
      - 1337
     ports:
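The re-enabled healthcheck gives the Ollama container a grace period plus several retries before Compose marks it unhealthy. As a rough back-of-the-envelope check (a simplified model of Docker's healthcheck loop; the actual timing depends on probe scheduling), the worst case works out to:

```shell
# Values from the ollama-service healthcheck above (seconds).
start_period=30   # grace period before failures count
interval=5        # time between probes
timeout=30        # max time a single probe may take
retries=5         # consecutive failures before "unhealthy"

# Simplified worst case: grace period, then each retried probe takes its
# full timeout plus the interval before the next attempt.
worst_case=$(( start_period + retries * (timeout + interval) ))
echo "$worst_case"   # → 205
```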