chore: Update project directory name to prompt2code

Ariel R.E.L 2024-07-03 19:58:09 +03:00
parent ad8eb978d9
commit fd6c942ee7
3 changed files with 22 additions and 18 deletions

View File

@@ -75,7 +75,7 @@ To install Devika, follow these steps:
 1. Clone the Devika repository:
 ```bash
-git clone https://github.com/stitionai/devika.git
+git clone https://git.telavivmakers.space/ro/prompt2code.git
 ```
 2. Navigate to the project directory:
 ```bash
@@ -83,7 +83,7 @@ To install Devika, follow these steps:
 ```
 3. Create a virtual environment and install the required dependencies (you can use any virtual environment manager):
 ```bash
-uv venv
+python -m venv venv
 # On macOS and Linux.
 source .venv/bin/activate
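Taken together, the two hunks above change both the clone URL and the virtual-environment command. Below is a minimal sketch of the resulting install flow; the target directory name, the `.venv` path, and the `requirements.txt` step are assumptions not shown in this diff. Note that `python -m venv venv` creates `venv/` while the unchanged activation line still references `.venv/`, so the two names likely need to be aligned.

```bash
# Sketch of the install flow after this commit (directory name and
# requirements file are assumptions, not taken from the diff).
git clone https://git.telavivmakers.space/ro/prompt2code.git
cd prompt2code
python -m venv .venv              # kept as .venv so "source .venv/bin/activate" still works
source .venv/bin/activate         # on macOS and Linux
pip install -r requirements.txt   # assumed dependency file; not shown in this diff
```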

View File

@@ -1,5 +1,7 @@
 # Challenge: Generate img2vid using GenAI only!
+To generate a video from images using GenAI, we must first set up the Devika IDE on the TAMI server and then fix it to generate the code for the img2vid task.
 Tech specs:
 Find the Tesla-P40 spec on the TAMI server using the following command:
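The command the added text refers to lies outside this hunk. Purely as a hypothetical illustration, a GPU such as the Tesla P40 is commonly inspected on a Linux host with `nvidia-smi`; this is an assumption, not necessarily the command the document means.

```bash
# Hypothetical example -- the document's actual command is not shown in this hunk.
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv
```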

View File

@@ -1,27 +1,29 @@
 version: "3.9"
 services:
-  ollama-service:
-    image: ollama/ollama:latest
-    expose:
-      - 11434
-    ports:
-      - 11434:11434
-    healthcheck:
-      test: ["CMD-SHELL", "curl -f http://localhost:11434/ || exit 1"]
-      interval: 5s
-      timeout: 30s
-      retries: 5
-      start_period: 30s
-    networks:
-      - devika-subnetwork
+  # ollama is running locally
+  # ollama-service:
+  #   image: ollama/ollama:latest
+  #   expose:
+  #     - 11434
+  #   ports:
+  #     - 11434:11434
+  #   healthcheck:
+  #     test: ["CMD-SHELL", "curl -f http://localhost:11434/ || exit 1"]
+  #     interval: 5s
+  #     timeout: 30s
+  #     retries: 5
+  #     start_period: 30s
+  #   networks:
+  #     - devika-subnetwork
   devika-backend-engine:
     build:
       context: .
       dockerfile: devika.dockerfile
-    depends_on:
-      - ollama-service
+    # ollama is running locally
+    # depends_on:
+    #   - ollama-service
     expose:
       - 1337
     ports:
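Since this hunk comments out the `ollama-service` on the grounds that Ollama now runs locally, a rough sketch of the resulting workflow could look like the following; how the backend container actually reaches the host's port 11434 depends on configuration not shown in this diff.

```bash
# Sketch, assuming Ollama is installed and serving directly on the host (outside Compose).
ollama serve &                    # start the local Ollama daemon if it is not already running
curl -f http://localhost:11434/   # health check; prints "Ollama is running" when the API is up
docker compose up --build         # bring up the remaining services without ollama-service
```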