Compare commits

...

5 Commits

Author SHA1 Message Date
yair
fef3f0baad Add UDP control protocol and IDS camera scripts
- Added UDP_CONTROL_PROTOCOL.md documenting the UDP control interface
- Added launch-ids.py for IDS camera control
- Added test_exposure_control.py for testing exposure settings
- Added udp_backup.reg for UDP configuration backup
- Added visualize_line_realtime.py for real-time visualization
- Updated .gitignore and ROLLINGSUM_GUIDE.md
- Removed ini/200fps-2456x4pix-cw.ini configuration file
2025-11-15 14:00:36 +02:00
yair
743bfb8323 Decouple display and recording in recv_raw_rolling.py
- Add --record-fps parameter for independent recording frame rate control
- Separate display and recording buffers (display_buffer_obj, record_buffer_obj)
- Enable recording without display and vice versa
- Independent throttling for display and recording operations
- Improve code organization and cleanup handling
2025-11-15 00:48:09 +02:00
yair
d3ee5d998e Fix image orientation in recv_raw_rolling.py - correct 180deg rotation and flip 2025-11-15 00:35:49 +02:00
yair
bdb89b2632 Add UDP traffic analysis tools for GStreamer video debugging 2025-11-14 19:52:11 +02:00
yair
3a799c0a65 Adapt pipeline to transmit single line (2456x1) instead of 2456x4
- Modified recv_raw_rolling.py to handle 2456x1 BGR line format
- Fixed display dimensions (2456 tall x 800 wide)
- Updated 200fps-2456x4pix-cw.ini to start at Y=500
- Added detailed single line transmission docs to network_guide.md
- Updated README.md with quick start example using videocrop
2025-11-14 19:32:14 +02:00
13 changed files with 1570 additions and 110 deletions

.gitignore

@@ -5,23 +5,6 @@
Thumbs.db
#ignore build folder
[Bb]uild*/
#Ignore files build by Visual Studio
*.obj
*.exe
*.pdb
*.user
*.aps
*.pch
*.vspscc
*_i.c
*_p.c
*.ncb
*.suo
*.tlb
*.tlh
*.bak
*.cache
*.ilk
*.log
.vscode
[Bb]in
@@ -39,5 +22,6 @@ ipch/
*.mkv
*.raw
*.dot
*.avi
gst_plugs/
results/

README.md

@@ -49,30 +49,30 @@ gst-launch-1.0 idsueyesrc config-file=ini/whole-presacler64_autoexp-binningx2.in
```
## Network Streaming
see more at network_guide.md
### Sending Line Scan Data Over UDP
### Quick Start - Single Line Transmission (2456x1)
#### Real Data Pipeline
Send camera data as raw UDP stream (note: 5ms exposure is too fast):
#### Send Single Line via UDP
Extract and transmit one line from camera (daytime, 200fps):
```powershell
gst-launch-1.0 idsueyesrc config-file=ini/200fps-2456x4pix-cw.ini exposure=5 framerate=300 `
gst-launch-1.0 idsueyesrc config-file=ini/200fps-2456x4pix-cw.ini exposure=5 framerate=200 `
! videocrop bottom=3 `
! queue `
! udpsink host=127.0.0.1 port=5000
```
#### Python/OpenCV Receiver
Receive and process raw column data:
#### Receive and Display
```pwsh
uv run scripts/recv_raw_column.py
uv run .\scripts\recv_raw_rolling.py --display-fps 60
```
Or with rolling analysis:
```pwsh
uv run .\scripts\recv_raw_rolling.py
```
**What's happening:**
- Camera captures 2456x4 pixels at row 500 of the sensor
- `videocrop bottom=3` extracts only the top line (2456x1)
- 7368 bytes transmitted per frame (2456 × 1 × 3 BGR channels)
- Receiver displays as a rolling vertical scan
See [`scripts/recv_raw_column.py`](scripts/recv_raw_column.py) for the Python implementation with debug options.
See [network_guide.md](network_guide.md) for detailed configuration options, nighttime settings, and recording.
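If you only want to sanity-check the payload yourself, a minimal receive-and-reshape sketch (assuming the sender above is running; the full receiver with display and recording is `scripts/recv_raw_rolling.py`) looks like this:
```python
# Minimal sketch: grab one UDP datagram and verify it is a 2456x1 BGR line.
import socket

import numpy as np

FRAME_SIZE = 2456 * 1 * 3  # 7368 bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5000))
data, _ = sock.recvfrom(65535)

assert len(data) == FRAME_SIZE, f"unexpected payload size: {len(data)}"
line = np.frombuffer(data, dtype=np.uint8).reshape((1, 2456, 3))  # 1 row, 2456 px, BGR
column = line.transpose(1, 0, 2)                                  # (2456, 1, 3) vertical strip
print(column.shape)
```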
### Demo/Test Data Streaming

ROLLINGSUM_GUIDE.md

@@ -100,7 +100,7 @@ struct _GstRollingSum
};
```
### Algorithm (Simplified from cli.py)
### Algorithm (Simplified from wissotsky's cli.py)
**Per Frame Processing:**

ini/100fps-10exp-2456x4pix-500top-cw-extragain.ini

@@ -15,9 +15,9 @@ Sensor digital gain=0
[Image size]
Start X=0
Start Y=0
Start X absolute=0
Start Y absolute=0
Start Y=500
Start X absolute=1
Start Y absolute=1
Width=2456
Height=4
Binning=0
@@ -56,8 +56,8 @@ Manual gain=0
[Timing]
Pixelclock=237
Extended pixelclock range=0
Framerate=200.151466
Exposure=4.903189
Framerate=99.968929
Exposure=9.910081
Long exposure=0
Dual exposure ratio=0
@@ -98,7 +98,7 @@ IS_CM_RGB8_PLANAR=2
[Parameters]
Colormode=1
Gamma=1.000000
Gamma=1.200000
Hardware Gamma=0
Blacklevel Mode=0
Blacklevel Offset=4
@@ -113,7 +113,7 @@ AllowRawWithLut=0
[Gain]
Master=0
Master=52
Red=19
Green=0
Blue=33
@@ -152,9 +152,9 @@ Brightness control once=0
Brightness reference=128
Brightness speed=50
Brightness max gain=100
Brightness max exposure=4.903189
Brightness max exposure=2.511838
Brightness Aoi Left=0
Brightness Aoi Top=0
Brightness Aoi Top=500
Brightness Aoi Width=2456
Brightness Aoi Height=4
Brightness Hysteresis=2
@@ -173,7 +173,7 @@ Auto WB gainMin=0
Auto WB gainMax=100
Auto WB speed=50
Auto WB Aoi Left=0
Auto WB Aoi Top=0
Auto WB Aoi Top=500
Auto WB Aoi Width=2456
Auto WB Aoi Height=4
Auto WB Once=0

network_guide.md

@@ -1,26 +1,116 @@
# how to send a line
# How to Send a Single Line (2456x1)
real data
## Real Data - Single Line Transmission
The camera captures 2456x4 pixels, but we extract and transmit only **one line (2456x1)** using `videocrop`.
### Daytime Configuration (200fps)
```powershell
gst-launch-1.0 idsueyesrc config-file=ini/200fps-2456x4pix-cw.ini exposure=5 framerate=300 `
gst-launch-1.0 idsueyesrc config-file=ini/200fps-2456x4pix-cw.ini exposure=5 framerate=200 `
! videocrop bottom=3 `
! queue `
! udpsink host=127.0.0.1 port=5000
```
note: 5ms is bit too fast for us
### Nighttime Configuration (100fps, extra gain)
```powershell
gst-launch-1.0 idsueyesrc config-file=ini/100fps-10exp-2456x4pix-500top-cw-extragain.ini exposure=10 framerate=100 `
! videocrop bottom=3 `
! queue `
! udpsink host=127.0.0.1 port=5000
```
**Key Parameters:**
- `videocrop bottom=3` - Extracts only the top line (removes bottom 3 rows from 2456x4 image)
- Input: 2456x4 BGR from camera
- Output: 2456x1 BGR line transmitted via UDP
- Frame size: 7368 bytes (2456 × 1 × 3 channels)
**Alternative:** To extract the bottom line instead, use `videocrop top=3`
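For reference, the payload size and approximate link rate follow directly from these numbers (simple arithmetic; 200 fps is the daytime rate above):
```python
WIDTH, HEIGHT, CHANNELS = 2456, 1, 3
FPS = 200

frame_bytes = WIDTH * HEIGHT * CHANNELS  # 7368 bytes per UDP datagram
throughput = frame_bytes * FPS           # 1,473,600 bytes/s, roughly 1.5 MB/s
print(frame_bytes, throughput)
```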
### Python/OpenCV Receiver
```pwsh
uv run scripts/recv_raw_column.py
```
or rolling like
```pwsh
# Basic rolling display
uv run .\scripts\recv_raw_rolling.py
```
See [`scripts/recv_raw_column.py`](scripts/recv_raw_column.py) for the Python implementation with debug options.
```pwsh
# With display throttling and recording
uv run .\scripts\recv_raw_rolling.py --display-fps 60 --save-mjpeg .\results\output_60fps.avi
```
# demo data
```pwsh
# Max performance (no display, stats only)
uv run .\scripts\recv_raw_rolling.py --no-display
```
See [`scripts/recv_raw_rolling.py`](scripts/recv_raw_rolling.py) for the Python implementation with debug options.
### UDP Traffic Analysis & Debugging
To inspect and analyze the raw UDP packets being transmitted:
```pwsh
# Detailed payload analyzer - shows format, dimensions, pixel statistics
uv run .\scripts\udp_payload_analyzer.py
```
**Example Output:**
```
================================================================================
PACKET #1 @ 17:45:23.456
================================================================================
Source: 127.0.0.1:52341
Total Size: 7368 bytes
PROTOCOL ANALYSIS:
--------------------------------------------------------------------------------
protocol : RAW
header_size : 0
payload_size : 7368
VIDEO PAYLOAD ANALYSIS:
--------------------------------------------------------------------------------
📹 Real camera data - Single line 2456x1 BGR
Format: BGR
Dimensions: 2456x1
Channels: 3
PIXEL STATISTICS:
--------------------------------------------------------------------------------
Channel 0 (B/R) : min= 0, max=110, mean= 28.63, std= 16.16
Channel 1 (G) : min= 17, max=233, mean= 62.39, std= 36.93
Channel 2 (R/B) : min= 25, max=255, mean= 99.76, std= 49.81
HEX PREVIEW (first 32 bytes):
--------------------------------------------------------------------------------
19 2e 4a 12 30 41 0a 2f 3f 01 32 3e 00 32 40 00 31 45 18 2d 4c 1e 2d...
SESSION SUMMARY:
Total Packets: 235
Total Bytes: 1,731,480 (7368 bytes/packet)
```
The analyzer automatically detects the format, shows per-channel pixel statistics, and provides a hex preview, which makes it useful for verifying data transmission and diagnosing issues.
```pwsh
# Simple packet receiver (no analysis, just basic info)
uv run .\scripts\udp_sniffer_raw.py
```
## Configuration Notes
Both INI files are configured with:
- Start Y = 500 (captures from row 500 of the sensor)
- Height = 4 pixels
- Width = 2456 pixels
- This optimizes for the center region of the sensor
**Note:** `exposure=5` (5ms) may be too fast for some applications. Adjust based on your requirements.
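As a rule of thumb (an assumption about the camera, not something stated in the IDS documentation), the exposure time cannot exceed the frame period, so the frame rate caps how far the exposure can be raised:
```python
# Hypothetical helper: longest exposure (ms) that fits inside one frame period.
def max_exposure_ms(fps: float) -> float:
    return 1000.0 / fps

print(max_exposure_ms(200))  # 5.0  -> the daytime exposure=5 is already at the limit
print(max_exposure_ms(100))  # 10.0 -> halving the rate is what allows the nighttime exposure=10
```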
---
# Demo Data (Testing)
## Sender (crop to first column, send raw over UDP)
```pwsh
gst-launch-1.0 -v `

scripts/UDP_CONTROL_PROTOCOL.md

@@ -0,0 +1,288 @@
# UDP Control Protocol Specification
## Overview
This document describes the UDP-based control protocol for dynamically controlling the IDS uEye camera exposure during runtime.
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ launch-ids.py Process │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────────┐ ┌──────────────────────────┐ │
│ │ Main Thread │ │ Control Server Thread │ │
│ │ │ │ │ │
│ │ GStreamer │◄────────┤ UDP Socket (Port 5001) │ │
│ │ Pipeline │ Thread- │ Command Parser │ │
│ │ - idsueyesrc │ Safe │ Property Setter │ │
│ │ - videocrop │ Updates │ Response Handler │ │
│ │ - queue │ │ │ │
│ │ - udpsink:5000 │ └──────────────────────────┘ │
│ └──────────────────┘ ▲ │
│ │ │
└───────────────────────────────────────────┼───────────────────┘
│ UDP Commands
┌────────┴────────┐
│ Control Client │
│ (Any UDP tool) │
└─────────────────┘
```
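Distilled to its essentials, the diagram describes a daemon thread that owns the control socket and pushes property updates onto the source element of the running pipeline. The sketch below shows only that pattern (no command parsing or error handling); the full implementation is `scripts/launch-ids.py`:
```python
import socket
import threading

def control_loop(src, port=5001):
    """Minimal control thread: apply SET_EXPOSURE to the running source element."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, addr = sock.recvfrom(1024)
        parts = data.decode("ascii", errors="ignore").split()
        if parts and parts[0].upper() == "SET_EXPOSURE":
            src.set_property("exposure", float(parts[1]))  # GObject property sets are thread-safe
            sock.sendto(b"OK\n", addr)

# threading.Thread(target=control_loop, args=(src,), daemon=True).start()
```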
## Connection Details
- **Control Port**: 5001 (UDP)
- **Bind Address**: 0.0.0.0 (accepts from any interface)
- **Video Port**: 5000 (UDP) - existing video stream, unchanged
- **Protocol**: UDP (connectionless, stateless)
- **Encoding**: ASCII text
- **Delimiter**: Newline (`\n`)
## Command Format
### General Structure
```
COMMAND [PARAMETERS]\n
```
Commands are case-insensitive, but UPPERCASE is recommended for clarity.
## Supported Commands
### 1. SET_EXPOSURE
**Description**: Sets the camera exposure time.
**Syntax**:
```
SET_EXPOSURE <value>
```
**Parameters**:
- `<value>`: Exposure time in seconds (float)
- Range: 0.001 to 1.0 seconds (1ms to 1000ms)
- Examples: `0.016` (16ms), `0.001` (1ms), `0.100` (100ms)
**Response**:
```
OK <actual_value>
```
or
```
ERROR <error_message>
```
**Examples**:
```
Client: SET_EXPOSURE 0.016\n
Server: OK 0.016\n
Client: SET_EXPOSURE 2.0\n
Server: ERROR Value out of range (0.001-1.0)\n
```
### 2. GET_EXPOSURE
**Description**: Retrieves the current exposure time.
**Syntax**:
```
GET_EXPOSURE
```
**Parameters**: None
**Response**:
```
OK <current_value>
```
**Example**:
```
Client: GET_EXPOSURE\n
Server: OK 0.016\n
```
### 3. SET_FRAMERATE
**Description**: Sets the camera frame rate.
**Syntax**:
```
SET_FRAMERATE <value>
```
**Parameters**:
- `<value>`: Frame rate in Hz (float)
- Range: 1.0 to 500.0 fps
- Examples: `22`, `30.5`, `100`
**Response**:
```
OK <actual_value>
```
or
```
ERROR <error_message>
```
**Example**:
```
Client: SET_FRAMERATE 30\n
Server: OK 30.0\n
```
### 4. GET_FRAMERATE
**Description**: Retrieves the current frame rate.
**Syntax**:
```
GET_FRAMERATE
```
**Parameters**: None
**Response**:
```
OK <current_value>
```
**Example**:
```
Client: GET_FRAMERATE\n
Server: OK 22.0\n
```
### 5. STATUS
**Description**: Get overall pipeline status and current settings.
**Syntax**:
```
STATUS
```
**Parameters**: None
**Response**:
```
OK exposure=<value> framerate=<value> state=<PLAYING|PAUSED|NULL>
```
**Example**:
```
Client: STATUS\n
Server: OK exposure=0.016 framerate=22.0 state=PLAYING\n
```
## Error Handling
### Error Response Format
```
ERROR <error_code>: <error_message>
```
### Common Error Codes
| Code | Description | Example |
|------|-------------|---------|
| `INVALID_COMMAND` | Unknown command | `ERROR INVALID_COMMAND: Unknown command 'FOO'` |
| `INVALID_SYNTAX` | Malformed command | `ERROR INVALID_SYNTAX: Missing parameter` |
| `OUT_OF_RANGE` | Value out of valid range | `ERROR OUT_OF_RANGE: Exposure must be 0.001-1.0` |
| `PIPELINE_ERROR` | Pipeline not running | `ERROR PIPELINE_ERROR: Pipeline not in PLAYING state` |
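A client can split these responses mechanically. The helper below is illustrative only (it follows the `OK <value>` and `ERROR <code>: <message>` formats above and is not part of the shipped scripts):
```python
def parse_response(response: str):
    """Return ('OK', value) or (error_code, message) per the formats above."""
    status, _, rest = response.strip().partition(" ")
    if status == "OK":
        return "OK", rest
    code, _, message = rest.partition(": ")
    return code, message

print(parse_response("OK 0.016"))                                        # ('OK', '0.016')
print(parse_response("ERROR OUT_OF_RANGE: Exposure must be 0.001-1.0"))  # ('OUT_OF_RANGE', ...)
```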
## Implementation Notes
### Thread Safety
- The control server runs in a separate daemon thread
- GStreamer properties are inherently thread-safe (GObject properties)
- The `src.set_property()` method can be safely called from the control thread
### Non-Blocking Operation
- Control server uses non-blocking socket with timeout
- Does not interfere with GStreamer pipeline operation
- Minimal latency for command processing
### Response Timing
- Responses are sent immediately after processing
- Property changes take effect on the next frame capture
- No guaranteed synchronization with video stream
## Usage Examples
### Python Client Example
```python
import socket
def send_command(command):
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(command.encode() + b'\n', ('127.0.0.1', 5001))
sock.settimeout(1.0)
response, _ = sock.recvfrom(1024)
sock.close()
return response.decode().strip()
# Set exposure to 10ms
print(send_command("SET_EXPOSURE 0.010"))
# Get current exposure
print(send_command("GET_EXPOSURE"))
# Set framerate to 30fps
print(send_command("SET_FRAMERATE 30"))
```
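Since property changes are applied asynchronously (see Response Timing above), a cautious client can set a value and then read it back. This builds on `send_command` from the example above; `set_and_confirm` is an illustrative name, not an existing helper:
```python
def set_and_confirm(prop, value):
    """Set EXPOSURE or FRAMERATE, then check the camera reports the same value back."""
    # send_command is the helper defined in the Python client example above
    set_reply = send_command(f"SET_{prop} {value}")
    if not set_reply.startswith("OK"):
        return False
    accepted = set_reply.split()[1]          # value echoed by the server
    get_reply = send_command(f"GET_{prop}")
    return get_reply.startswith("OK") and get_reply.split()[1] == accepted

print(set_and_confirm("EXPOSURE", 0.010))    # True once the new exposure is in effect
```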
### Command Line (netcat/nc)
```bash
# Set exposure
echo "SET_EXPOSURE 0.020" | nc -u 127.0.0.1 5001
# Get exposure
echo "GET_EXPOSURE" | nc -u 127.0.0.1 5001
# Get status
echo "STATUS" | nc -u 127.0.0.1 5001
```
### PowerShell Client
```powershell
$udpClient = New-Object System.Net.Sockets.UdpClient
$endpoint = New-Object System.Net.IPEndPoint([System.Net.IPAddress]::Parse("127.0.0.1"), 5001)
# Send command
$bytes = [System.Text.Encoding]::ASCII.GetBytes("SET_EXPOSURE 0.015`n")
$udpClient.Send($bytes, $bytes.Length, $endpoint)
# Receive response
$udpClient.Client.ReceiveTimeout = 1000
$receiveBytes = $udpClient.Receive([ref]$endpoint)
$response = [System.Text.Encoding]::ASCII.GetString($receiveBytes)
Write-Host $response
$udpClient.Close()
```
## Testing
A test client script is provided: `scripts/test_exposure_control.py`
```bash
# Run the camera pipeline
uv run scripts/launch-ids.py
# In another terminal, test exposure control
uv run scripts/test_exposure_control.py
```
## Future Enhancements
Possible extensions to the protocol:
- Add `SET_GAIN` / `GET_GAIN` commands
- Add `SAVE_CONFIG` to save current settings to INI file
- Add `RESET` to restore default settings
- Support batch commands (multiple commands in one packet)
- Add authentication/security for production use
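For example, batch support could be as simple as splitting a packet on newlines and handing each line to the existing single-command parser. This is a speculative sketch of that extension, not current behavior:
```python
def process_packet(packet, handle_one):
    """Hypothetical batch handling: one response per newline-delimited command."""
    responses = []
    for line in packet.decode("ascii", errors="ignore").splitlines():
        line = line.strip()
        if line:
            responses.append(handle_one(line))  # e.g. the server's per-command handler
    return responses

# process_packet(b"SET_EXPOSURE 0.010\nGET_FRAMERATE\n", handle_one)
```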

scripts/launch-ids.py

@@ -0,0 +1,341 @@
#!/usr/bin/env python3
# /// script
# requires-python = "==3.13"
# dependencies = []
# ///
#
# IDS uEye Camera Control Script with UDP Exposure Control
#
# This script streams video from an IDS uEye camera via UDP and provides
# a UDP control interface for dynamically adjusting exposure and framerate.
#
# Setup:
# Run with: . .\scripts\setup_gstreamer_env.ps1 && uv run .\scripts\launch-ids.py
#
# Features:
# - Video streaming on UDP port 5000 (127.0.0.1)
# - Control interface on UDP port 5001 (0.0.0.0)
# - Dynamic exposure control (0.001-1.0 seconds)
# - Dynamic framerate control (1-500 fps)
#
# Control Commands:
# SET_EXPOSURE <value> - Set exposure in seconds (e.g., 0.016)
# GET_EXPOSURE - Get current exposure value
# SET_FRAMERATE <value> - Set framerate in Hz (e.g., 30)
# GET_FRAMERATE - Get current framerate
# STATUS - Get pipeline status and current settings
#
# Example Usage:
# echo "SET_EXPOSURE 0.010" | nc -u 127.0.0.1 5001
# echo "GET_EXPOSURE" | nc -u 127.0.0.1 5001
#
# Testing:
# Run test client: uv run .\scripts\test_exposure_control.py
#
# Documentation:
# See scripts/UDP_CONTROL_PROTOCOL.md for full protocol details
#
# Add GStreamer Python packages
import os
import sys
import socket
import threading
# Check for required environment variable
gst_root = os.environ.get("GSTREAMER_1_0_ROOT_MSVC_X86_64")
if not gst_root:
print("ERROR: GSTREAMER_1_0_ROOT_MSVC_X86_64 environment variable is not set")
print("Expected: C:\\bin\\gstreamer\\1.0\\msvc_x86_64\\")
print("Please run: . .\\scripts\\setup_gstreamer_env.ps1")
sys.exit(1)
else:
# Remove trailing backslash if present
gst_root = gst_root.rstrip("\\")
gst_site_packages = os.path.join(gst_root, "lib", "site-packages")
sys.path.insert(0, gst_site_packages)
# Add GI typelibs
os.environ["GI_TYPELIB_PATH"] = os.path.join(gst_root, "lib", "girepository-1.0")
# Add GStreamer DLL bin directory
os.environ["PATH"] = os.path.join(gst_root, "bin") + ";" + os.environ["PATH"]
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
class ControlServer:
"""UDP server for controlling camera parameters during runtime"""
def __init__(self, src, pipeline=None, port=5001):
self.src = src
self.pipeline = pipeline
self.port = port
self.running = False
self.sock = None
self.thread = None
def start(self):
"""Start the control server in a separate thread"""
self.running = True
self.thread = threading.Thread(target=self.run, daemon=True)
self.thread.start()
def stop(self):
"""Stop the control server"""
self.running = False
if self.sock:
try:
self.sock.close()
except:
pass
if self.thread:
self.thread.join(timeout=2.0)
def run(self):
"""Main server loop"""
try:
# Create UDP socket
self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.sock.settimeout(0.5) # Non-blocking with timeout
try:
self.sock.bind(("0.0.0.0", self.port))
except OSError as e:
print(f"ERROR: Could not bind control server to port {self.port}: {e}")
print("Control server disabled. Video streaming will continue.")
return
print(f"Control server listening on UDP port {self.port}")
print(" Commands: SET_EXPOSURE <val>, GET_EXPOSURE, SET_FRAMERATE <val>, GET_FRAMERATE, STATUS")
while self.running:
try:
# Receive command
data, addr = self.sock.recvfrom(1024)
command = data.decode('utf-8', errors='ignore').strip()
if command:
# Process command
response = self.process_command(command, addr)
# Send response
self.send_response(response, addr)
except socket.timeout:
# Normal timeout, continue loop
continue
except Exception as e:
if self.running:
print(f"Control server error: {e}")
finally:
if self.sock:
try:
self.sock.close()
except:
pass
def process_command(self, command, addr):
"""Process incoming command and return response"""
try:
parts = command.strip().upper().split()
if not parts:
return "ERROR INVALID_SYNTAX: Empty command"
cmd = parts[0]
if cmd == "SET_EXPOSURE":
return self.handle_set_exposure(parts)
elif cmd == "GET_EXPOSURE":
return self.handle_get_exposure()
elif cmd == "SET_FRAMERATE":
return self.handle_set_framerate(parts)
elif cmd == "GET_FRAMERATE":
return self.handle_get_framerate()
elif cmd == "STATUS":
return self.handle_status()
else:
return f"ERROR INVALID_COMMAND: Unknown command '{cmd}'"
except Exception as e:
return f"ERROR PROCESSING: {str(e)}"
def handle_set_exposure(self, parts):
"""Handle SET_EXPOSURE command"""
if len(parts) != 2:
return "ERROR INVALID_SYNTAX: Usage: SET_EXPOSURE <value>"
try:
value = float(parts[1])
if value < 0.001 or value > 1.0:
return "ERROR OUT_OF_RANGE: Exposure must be 0.001-1.0 seconds"
self.src.set_property("exposure", value)
# Verify the value was set
actual = self.src.get_property("exposure")
return f"OK {actual}"
except ValueError:
return "ERROR INVALID_SYNTAX: Exposure must be a number"
except Exception as e:
return f"ERROR: {str(e)}"
def handle_get_exposure(self):
"""Handle GET_EXPOSURE command"""
try:
value = self.src.get_property("exposure")
return f"OK {value}"
except Exception as e:
return f"ERROR: {str(e)}"
def handle_set_framerate(self, parts):
"""Handle SET_FRAMERATE command"""
if len(parts) != 2:
return "ERROR INVALID_SYNTAX: Usage: SET_FRAMERATE <value>"
try:
value = float(parts[1])
if value < 1.0 or value > 500.0:
return "ERROR OUT_OF_RANGE: Framerate must be 1.0-500.0 Hz"
self.src.set_property("framerate", value)
actual = self.src.get_property("framerate")
return f"OK {actual}"
except ValueError:
return "ERROR INVALID_SYNTAX: Framerate must be a number"
except Exception as e:
return f"ERROR: {str(e)}"
def handle_get_framerate(self):
"""Handle GET_FRAMERATE command"""
try:
value = self.src.get_property("framerate")
return f"OK {value}"
except Exception as e:
return f"ERROR: {str(e)}"
def handle_status(self):
"""Handle STATUS command"""
try:
exposure = self.src.get_property("exposure")
framerate = self.src.get_property("framerate")
# Get pipeline state
state = "UNKNOWN"
if self.pipeline:
_, current_state, _ = self.pipeline.get_state(0)
state = current_state.value_nick.upper()
return f"OK exposure={exposure} framerate={framerate} state={state}"
except Exception as e:
return f"ERROR: {str(e)}"
def send_response(self, response, addr):
"""Send response back to client"""
try:
self.sock.sendto((response + '\n').encode(), addr)
except Exception as e:
print(f"Failed to send response: {e}")
Gst.init(None)
pipeline = Gst.Pipeline()
src = Gst.ElementFactory.make("idsueyesrc", "src")
src.set_property("config-file", "ini/100fps-10exp-2456x4pix-500top-cw-extragain.ini")
# Exposure in seconds (e.g., 0.016)
src.set_property("exposure", 0.016)
# Frame rate
src.set_property("framerate", 22)
# Video crop to remove bottom 3 pixels
videocrop = Gst.ElementFactory.make("videocrop", "crop")
videocrop.set_property("bottom", 3)
# Queue for buffering
queue = Gst.ElementFactory.make("queue", "queue")
# UDP sink to send the raw data
udpsink = Gst.ElementFactory.make("udpsink", "sink")
udpsink.set_property("host", "127.0.0.1")
udpsink.set_property("port", 5000)
# Add elements to pipeline
pipeline.add(src)
pipeline.add(videocrop)
pipeline.add(queue)
pipeline.add(udpsink)
# Link elements: src -> videocrop -> queue -> udpsink
if not src.link(videocrop):
print("ERROR: Failed to link src to videocrop")
exit(1)
if not videocrop.link(queue):
print("ERROR: Failed to link videocrop to queue")
exit(1)
if not queue.link(udpsink):
print("ERROR: Failed to link queue to udpsink")
exit(1)
print("Pipeline created successfully")
print(f"Video stream: UDP port 5000 (host: 127.0.0.1)")
print("Pipeline: idsueyesrc -> videocrop (bottom=3) -> queue -> udpsink")
print()
# Create and start control server
control_server = ControlServer(src, pipeline, port=5001)
control_server.start()
# Start the pipeline
ret = pipeline.set_state(Gst.State.PLAYING)
if ret == Gst.StateChangeReturn.FAILURE:
print("ERROR: Unable to set the pipeline to the playing state")
exit(1)
print()
print("Pipeline is PLAYING...")
print("Press Ctrl+C to stop")
# Wait until error or EOS
bus = pipeline.get_bus()
try:
while True:
# Use timeout to allow Ctrl+C to be caught quickly
msg = bus.timed_pop_filtered(
100 * Gst.MSECOND, # 100ms timeout
Gst.MessageType.ERROR | Gst.MessageType.EOS | Gst.MessageType.STATE_CHANGED
)
if msg:
t = msg.type
if t == Gst.MessageType.ERROR:
err, debug = msg.parse_error()
print(f"ERROR: {err.message}")
print(f"Debug info: {debug}")
break
elif t == Gst.MessageType.EOS:
print("End-Of-Stream reached")
break
elif t == Gst.MessageType.STATE_CHANGED:
if msg.src == pipeline:
old_state, new_state, pending_state = msg.parse_state_changed()
print(f"Pipeline state changed from {old_state.value_nick} to {new_state.value_nick}")
except KeyboardInterrupt:
print("\nInterrupted by user")
# Cleanup
print("Stopping control server...")
control_server.stop()
print("Stopping pipeline...")
pipeline.set_state(Gst.State.NULL)
print("Pipeline stopped")

scripts/recv_raw_rolling.py

@@ -29,36 +29,39 @@ parser.add_argument('--no-display', action='store_true',
parser.add_argument('--display-fps', type=int, default=0,
help='Limit display refresh rate (0=every frame, 60=60fps, etc). Reduces cv2.imshow() overhead while receiving all frames')
parser.add_argument('--save-mjpeg', type=str, default=None,
help='Save rolling display to MJPEG video file (e.g., output.avi). Uses display-fps if set, otherwise 30 fps')
help='Save rolling display to MJPEG video file (e.g., output.avi). Works independently of display')
parser.add_argument('--record-fps', type=int, default=30,
help='Recording frame rate for --save-mjpeg (default: 30 fps). Independent of display-fps')
args = parser.parse_args()
# Import OpenCV only if display is enabled
# Import OpenCV only if display or recording is enabled
ENABLE_DISPLAY = not args.no_display
if ENABLE_DISPLAY:
ENABLE_RECORDING = args.save_mjpeg is not None
if ENABLE_DISPLAY or ENABLE_RECORDING:
import cv2
# Debug flag - set to True to see frame reception details
DEBUG = False
# Line drop detection parameters
EXPECTED_FPS = 200 # Expected frame rate (from 200fps ini file)
EXPECTED_INTERVAL_MS = 1000.0 / EXPECTED_FPS # 5ms for 200fps
DROP_THRESHOLD_MS = EXPECTED_INTERVAL_MS * 2.5 # Alert if gap > 2.5x expected (12.5ms)
# Frame statistics parameters
STATS_WINDOW_SIZE = 100 # Track stats over last N frames
STATUS_INTERVAL = 100 # Print status every N frames
DROP_THRESHOLD_MULTIPLIER = 2.5 # Alert if gap > 2.5x rolling average
MIN_SAMPLES_FOR_DROP_DETECTION = 10 # Need at least N samples to detect drops
# OPTIMIZED: Using NumPy indexing instead of cv2.rotate() for better performance
# Extracting first row and reversing it is equivalent to ROTATE_90_COUNTERCLOCKWISE + first column
# Stream parameters (match your GStreamer sender)
COLUMN_WIDTH = 4 # Width from 200fps-2456x4pix-cw.ini
COLUMN_HEIGHT = 2456 # Height from 200fps-2456x4pix-cw.ini
# Modified to receive single line: 2456x1 instead of 4x2456
COLUMN_WIDTH = 2456 # One line width
COLUMN_HEIGHT = 1 # One line height
CHANNELS = 3
FRAME_SIZE = COLUMN_WIDTH * COLUMN_HEIGHT * CHANNELS # bytes (29472)
FRAME_SIZE = COLUMN_WIDTH * COLUMN_HEIGHT * CHANNELS # bytes (7368)
# Display parameters
DISPLAY_WIDTH = 800 # Width of rolling display in pixels
DISPLAY_HEIGHT = COLUMN_HEIGHT
DISPLAY_HEIGHT = COLUMN_WIDTH # 2456 pixels tall (the line width becomes display height)
UDP_IP = "0.0.0.0"
UDP_PORT = 5000
@@ -68,40 +71,52 @@ sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16777216) # 16MB buffer
sock.bind((UDP_IP, UDP_PORT))
print(f"Receiving raw {COLUMN_WIDTH}x{COLUMN_HEIGHT} RGB columns on UDP port {UDP_PORT}")
print(f"Receiving raw {COLUMN_WIDTH}x{COLUMN_HEIGHT} BGR line on UDP port {UDP_PORT}")
if ENABLE_DISPLAY:
if args.display_fps > 0:
print(f"Display: ENABLED - Rolling display ({DISPLAY_WIDTH}x{DISPLAY_HEIGHT}) @ {args.display_fps} Hz (throttled)")
else:
print(f"Display: ENABLED - Rolling display ({DISPLAY_WIDTH}x{DISPLAY_HEIGHT}) @ full rate")
else:
print(f"Display: DISABLED - Stats only mode (max performance)")
print(f"Display: DISABLED")
if ENABLE_RECORDING:
print(f"Recording: ENABLED - {args.save_mjpeg} @ {args.record_fps} fps")
else:
print(f"Recording: DISABLED")
if DEBUG:
print(f"Expected frame size: {FRAME_SIZE} bytes")
# Initialize display if enabled
display_buffer_obj = None
display_current_column = 0
last_display_time = 0
display_interval = 0
if ENABLE_DISPLAY:
cv2.namedWindow("Rolling Column Stream", cv2.WINDOW_NORMAL)
rolling_buffer = np.zeros((DISPLAY_HEIGHT, DISPLAY_WIDTH, CHANNELS), dtype=np.uint8)
current_column = 0
display_buffer_obj = np.zeros((DISPLAY_HEIGHT, DISPLAY_WIDTH, CHANNELS), dtype=np.uint8)
# Display throttling support
if args.display_fps > 0:
display_interval = 1.0 / args.display_fps # seconds between display updates
last_display_time = 0
else:
display_interval = 0 # Update every frame
last_display_time = 0
# MJPEG video writer setup
# Initialize recording if enabled (independent of display)
record_buffer_obj = None
record_current_column = 0
last_record_time = 0
record_interval = 0
video_writer = None
if args.save_mjpeg:
# Use display-fps if set, otherwise default to 30 fps for video
video_fps = args.display_fps if args.display_fps > 0 else 30
if ENABLE_RECORDING:
record_buffer_obj = np.zeros((DISPLAY_HEIGHT, DISPLAY_WIDTH, CHANNELS), dtype=np.uint8)
record_interval = 1.0 / args.record_fps if args.record_fps > 0 else 0
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
video_writer = cv2.VideoWriter(args.save_mjpeg, fourcc, video_fps,
video_writer = cv2.VideoWriter(args.save_mjpeg, fourcc, args.record_fps,
(DISPLAY_WIDTH, DISPLAY_HEIGHT))
print(f"Recording to: {args.save_mjpeg} @ {video_fps} fps")
print(f"Recording initialized: {args.save_mjpeg} @ {args.record_fps} fps")
frame_count = 0
@@ -125,13 +140,16 @@ while True:
if first_frame_time is None:
first_frame_time = current_time
# Line drop detection
# Frame interval tracking and drop detection
if last_frame_time is not None:
interval_ms = (current_time - last_frame_time) * 1000
frame_intervals.append(interval_ms)
# Detect line drop
if interval_ms > DROP_THRESHOLD_MS:
# Detect drops based on rolling average (only after we have enough samples)
if len(frame_intervals) >= MIN_SAMPLES_FOR_DROP_DETECTION:
avg_interval = np.mean(frame_intervals)
drop_threshold = avg_interval * DROP_THRESHOLD_MULTIPLIER
if interval_ms > drop_threshold:
total_drops += 1
drops_since_last_status += 1
@@ -154,24 +172,22 @@ while True:
print(status)
# Parse the incoming data - process for display and/or recording
if ENABLE_DISPLAY or ENABLE_RECORDING:
# Receiving 2456x1 line directly - reshape as a vertical column
# Input is 2456 pixels wide x 1 pixel tall, we want it as 2456 tall x 1 wide
frame = np.frombuffer(data, dtype=np.uint8).reshape((COLUMN_HEIGHT, COLUMN_WIDTH, CHANNELS))
# Transpose to vertical and flip to correct 180-degree rotation: (1, 2456, 3) -> (2456, 1, 3) flipped
column = frame.transpose(1, 0, 2)[::-1]
# Update display buffer and show if enabled
if ENABLE_DISPLAY:
# Parse the incoming data - ALWAYS process every frame
frame = np.frombuffer(data, dtype=np.uint8).reshape((COLUMN_WIDTH, COLUMN_HEIGHT, CHANNELS))
# OPTIMIZED: Extract first row and transpose to column (equivalent to rotating and taking first column)
# This avoids expensive cv2.rotate() - uses NumPy indexing instead
# For ROTATE_90_COUNTERCLOCKWISE: first column of rotated = first row reversed
column = frame[0, ::-1, :].reshape(COLUMN_HEIGHT, 1, CHANNELS)
# Insert the single column into the rolling buffer at the current position
# This happens for EVERY received frame
rolling_buffer[:, current_column:current_column+1, :] = column
# Move to the next column position, wrapping around when reaching the end
current_column = (current_column + 1) % DISPLAY_WIDTH
# Insert the single column into the display rolling buffer
display_buffer_obj[:, display_current_column:display_current_column+1, :] = column
display_current_column = (display_current_column - 1) % DISPLAY_WIDTH
# Display throttling: only refresh display at specified rate
# This reduces cv2.imshow() / cv2.waitKey() overhead while keeping all data
should_display = True
if args.display_fps > 0:
if current_time - last_display_time >= display_interval:
@@ -181,21 +197,41 @@ while True:
should_display = False
if should_display:
# Display the rolling buffer (clean, no overlays)
cv2.imshow("Rolling Column Stream", rolling_buffer)
# Write frame to video if recording
if video_writer is not None:
video_writer.write(rolling_buffer)
# Flip horizontally for display to correct orientation (using efficient slicing)
display_frame = display_buffer_obj[:, ::-1]
cv2.imshow("Rolling Column Stream", display_frame)
if cv2.waitKey(1) == 27: # ESC to quit
break
else:
# No display mode - just validate the data can be reshaped
frame = np.frombuffer(data, dtype=np.uint8).reshape((COLUMN_WIDTH, COLUMN_HEIGHT, CHANNELS))
if ENABLE_DISPLAY:
# Update recording buffer and write if enabled (independent of display)
if ENABLE_RECORDING:
# Insert the single column into the recording rolling buffer
record_buffer_obj[:, record_current_column:record_current_column+1, :] = column
record_current_column = (record_current_column - 1) % DISPLAY_WIDTH
# Recording throttling: only write frames at specified rate
should_record = True
if record_interval > 0:
if current_time - last_record_time >= record_interval:
last_record_time = current_time
should_record = True
else:
should_record = False
if should_record and video_writer is not None:
# Flip horizontally for recording to correct orientation
record_frame = record_buffer_obj[:, ::-1]
video_writer.write(record_frame)
# For stats-only mode, just validate the data can be reshaped
if not ENABLE_DISPLAY and not ENABLE_RECORDING:
frame = np.frombuffer(data, dtype=np.uint8).reshape((COLUMN_HEIGHT, COLUMN_WIDTH, CHANNELS))
# Cleanup
if video_writer is not None:
video_writer.release()
print(f"Video saved: {args.save_mjpeg}")
if ENABLE_DISPLAY:
cv2.destroyAllWindows()

scripts/test_exposure_control.py

@@ -0,0 +1,159 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.8"
# dependencies = []
# ///
"""
Test client for UDP exposure control
Usage: uv run scripts/test_exposure_control.py
This script tests the UDP control interface for the IDS uEye camera.
Make sure launch-ids.py is running before executing this test.
"""
import socket
import time
import sys
def send_command(command, host="127.0.0.1", port=5001, timeout=1.0):
"""Send a command and return the response"""
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(timeout)
try:
# Send command
sock.sendto(command.encode() + b'\n', (host, port))
# Receive response
response, _ = sock.recvfrom(1024)
return response.decode().strip()
except socket.timeout:
return "ERROR: Timeout waiting for response (is launch-ids.py running?)"
except Exception as e:
return f"ERROR: {e}"
finally:
sock.close()
def print_test(test_num, description, command, response):
"""Print formatted test result"""
print(f"\nTest {test_num}: {description}")
print(f" Command: {command}")
print(f" Response: {response}")
# Check if response indicates success
if response.startswith("OK"):
print(" ✓ PASS")
elif response.startswith("ERROR"):
if "OUT_OF_RANGE" in response or "INVALID" in response:
print(" ✓ PASS (Expected error)")
else:
print(" ✗ FAIL (Unexpected error)")
else:
print(" ? UNKNOWN")
def main():
print("=" * 70)
print("UDP Exposure Control Test Client")
print("=" * 70)
print("Testing UDP control interface on 127.0.0.1:5001")
print()
# Check if server is reachable
print("Checking if control server is reachable...")
response = send_command("STATUS", timeout=2.0)
if "Timeout" in response:
print("✗ FAILED: Control server not responding")
print(" Make sure launch-ids.py is running first!")
sys.exit(1)
print("✓ Control server is reachable\n")
time.sleep(0.2)
# Test 1: Get current exposure
response = send_command("GET_EXPOSURE")
print_test(1, "Get current exposure", "GET_EXPOSURE", response)
time.sleep(0.2)
# Test 2: Set exposure to 10ms
response = send_command("SET_EXPOSURE 0.110")
print_test(2, "Set exposure to 10ms", "SET_EXPOSURE 0.010", response)
time.sleep(5.2)
# Test 3: Verify exposure was set
response = send_command("GET_EXPOSURE")
print_test(3, "Verify exposure changed", "GET_EXPOSURE", response)
time.sleep(0.2)
# Test 4: Set exposure to 20ms
response = send_command("SET_EXPOSURE 0.020")
print_test(4, "Set exposure to 20ms", "SET_EXPOSURE 0.020", response)
time.sleep(0.2)
# Test 5: Get framerate
response = send_command("GET_FRAMERATE")
print_test(5, "Get current framerate", "GET_FRAMERATE", response)
time.sleep(0.2)
# Test 6: Set framerate
response = send_command("SET_FRAMERATE 30")
print_test(6, "Set framerate to 30 fps", "SET_FRAMERATE 30", response)
time.sleep(0.2)
# Test 7: Verify framerate
response = send_command("GET_FRAMERATE")
print_test(7, "Verify framerate changed", "GET_FRAMERATE", response)
time.sleep(0.2)
# Test 8: Get status
response = send_command("STATUS")
print_test(8, "Get pipeline status", "STATUS", response)
time.sleep(0.2)
# Test 9: Invalid command
response = send_command("INVALID_CMD")
print_test(9, "Send invalid command", "INVALID_CMD", response)
time.sleep(0.2)
# Test 10: Out of range exposure (too high)
response = send_command("SET_EXPOSURE 5.0")
print_test(10, "Out of range exposure (5.0s)", "SET_EXPOSURE 5.0", response)
time.sleep(0.2)
# Test 11: Out of range exposure (too low)
response = send_command("SET_EXPOSURE 0.0001")
print_test(11, "Out of range exposure (0.1ms)", "SET_EXPOSURE 0.0001", response)
time.sleep(0.2)
# Test 12: Invalid syntax (missing parameter)
response = send_command("SET_EXPOSURE")
print_test(12, "Invalid syntax (missing param)", "SET_EXPOSURE", response)
time.sleep(0.2)
# Test 13: Invalid syntax (non-numeric)
response = send_command("SET_EXPOSURE abc")
print_test(13, "Invalid syntax (non-numeric)", "SET_EXPOSURE abc", response)
time.sleep(0.2)
# Test 14: Restore original exposure (16ms)
response = send_command("SET_EXPOSURE 0.016")
print_test(14, "Restore exposure to 16ms", "SET_EXPOSURE 0.016", response)
time.sleep(0.2)
# Test 15: Restore original framerate (22 fps)
response = send_command("SET_FRAMERATE 22")
print_test(15, "Restore framerate to 22 fps", "SET_FRAMERATE 22", response)
print()
print("=" * 70)
print("Test completed!")
print()
print("Quick reference:")
print(" echo 'SET_EXPOSURE 0.010' | nc -u 127.0.0.1 5001")
print(" echo 'GET_EXPOSURE' | nc -u 127.0.0.1 5001")
print(" echo 'STATUS' | nc -u 127.0.0.1 5001")
print("=" * 70)
if __name__ == "__main__":
main()

scripts/udp_backup.reg

@@ -0,0 +1,24 @@
Windows Registry Editor Version 5.00
; [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters]
; "DefaultReceiveWindow" was not set
; [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters]
; "LargeBufferSize" was not set
; [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters]
; "MediumBufferSize" was not set
; [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; "TcpWindowSize" was not set
; [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; "MaxConnectionsPerServer" was not set
; [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; "MaxFreeTcbs" was not set
; [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; "DefaultTTL" was not set

scripts/udp_payload_analyzer.py

@@ -0,0 +1,265 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.8"
# dependencies = [
# "numpy",
# ]
# ///
"""
UDP Payload Analyzer for GStreamer Raw Video
Analyzes UDP packets on port 5000 and reports on payload structure
Based on network_guide.md:
- Real data: 2456x1 BGR (7368 bytes per line)
- Demo data: 1x640 RGB (1920 bytes per frame)
Usage: uv run scripts/udp_payload_analyzer.py
"""
import socket
import sys
import struct
import numpy as np
from datetime import datetime
from collections import defaultdict
class PayloadAnalyzer:
def __init__(self):
self.packet_sizes = defaultdict(int)
self.total_packets = 0
self.total_bytes = 0
def analyze_gstreamer_header(self, data):
"""Try to detect and parse GStreamer RTP/UDP headers"""
info = {}
# Check if it's RTP (GStreamer sometimes uses RTP)
if len(data) >= 12:
# RTP header format
byte0 = data[0]
version = (byte0 >> 6) & 0x03
padding = (byte0 >> 5) & 0x01
extension = (byte0 >> 4) & 0x01
csrc_count = byte0 & 0x0F
if version == 2: # RTP version 2
info['protocol'] = 'RTP'
info['version'] = version
info['padding'] = bool(padding)
info['extension'] = bool(extension)
byte1 = data[1]
info['marker'] = bool(byte1 >> 7)
info['payload_type'] = byte1 & 0x7F
info['sequence'] = struct.unpack('!H', data[2:4])[0]
info['timestamp'] = struct.unpack('!I', data[4:8])[0]
info['ssrc'] = struct.unpack('!I', data[8:12])[0]
payload_offset = 12 + (csrc_count * 4)
info['header_size'] = payload_offset
info['payload_size'] = len(data) - payload_offset
return info, payload_offset
# Raw video data (no RTP)
info['protocol'] = 'RAW'
info['header_size'] = 0
info['payload_size'] = len(data)
return info, 0
def analyze_video_payload(self, data, offset=0):
"""Analyze raw video data"""
payload = data[offset:]
size = len(payload)
analysis = {
'size': size,
'format': 'unknown'
}
# Check for known video formats from network_guide.md
if size == 7368: # 2456 × 1 × 3 (BGR)
analysis['format'] = 'BGR'
analysis['width'] = 2456
analysis['height'] = 1
analysis['channels'] = 3
analysis['description'] = 'Real camera data - Single line 2456x1 BGR'
elif size == 1920: # 1 × 640 × 3 (RGB)
analysis['format'] = 'RGB'
analysis['width'] = 1
analysis['height'] = 640
analysis['channels'] = 3
analysis['description'] = 'Demo data - Single column 1x640 RGB'
else:
# Try to guess format
# Common raw video sizes
possible_formats = []
# Try BGR/RGB (3 channels)
if size % 3 == 0:
pixels = size // 3
possible_formats.append(f'{pixels} pixels @ 3 channels (BGR/RGB)')
# Try GRAY (1 channel)
possible_formats.append(f'{size} pixels @ 1 channel (GRAY)')
# Try RGBA (4 channels)
if size % 4 == 0:
pixels = size // 4
possible_formats.append(f'{pixels} pixels @ 4 channels (RGBA)')
analysis['possible_formats'] = possible_formats
# Pixel statistics (if manageable size)
if size <= 100000: # Only analyze if < 100KB
try:
if 'channels' in analysis and analysis['channels'] == 3:
# Reshape as color image
pixels = size // 3
arr = np.frombuffer(payload, dtype=np.uint8).reshape(-1, 3)
analysis['pixel_stats'] = {
'min': [int(arr[:, i].min()) for i in range(3)],
'max': [int(arr[:, i].max()) for i in range(3)],
'mean': [float(arr[:, i].mean()) for i in range(3)],
'std': [float(arr[:, i].std()) for i in range(3)]
}
else:
# Treat as grayscale
arr = np.frombuffer(payload, dtype=np.uint8)
analysis['pixel_stats'] = {
'min': int(arr.min()),
'max': int(arr.max()),
'mean': float(arr.mean()),
'std': float(arr.std())
}
except:
pass
# First 32 bytes in hex
hex_preview = ' '.join(f'{b:02x}' for b in payload[:32])
analysis['hex_preview'] = hex_preview + ('...' if size > 32 else '')
return analysis
def print_report(self, packet_num, addr, data):
"""Print detailed analysis report"""
self.total_packets += 1
self.total_bytes += len(data)
self.packet_sizes[len(data)] += 1
timestamp = datetime.now().strftime("%H:%M:%S.%f")[:-3]
print("=" * 80)
print(f"PACKET #{packet_num} @ {timestamp}")
print("=" * 80)
print(f"Source: {addr[0]}:{addr[1]}")
print(f"Total Size: {len(data)} bytes")
print()
# Analyze header
header_info, payload_offset = self.analyze_gstreamer_header(data)
print("PROTOCOL ANALYSIS:")
print("-" * 80)
for key, value in header_info.items():
print(f" {key:20s}: {value}")
print()
# Analyze video payload
video_info = self.analyze_video_payload(data, payload_offset)
print("VIDEO PAYLOAD ANALYSIS:")
print("-" * 80)
if 'description' in video_info:
print(f" 📹 {video_info['description']}")
print(f" Format: {video_info['format']}")
print(f" Dimensions: {video_info['width']}x{video_info['height']}")
print(f" Channels: {video_info['channels']}")
else:
print(f" Size: {video_info['size']} bytes")
if 'possible_formats' in video_info:
print(f" Possible formats:")
for fmt in video_info['possible_formats']:
print(f" - {fmt}")
print()
if 'pixel_stats' in video_info:
print("PIXEL STATISTICS:")
print("-" * 80)
stats = video_info['pixel_stats']
if isinstance(stats['min'], list):
# Color image (BGR/RGB)
channels = ['Channel 0 (B/R)', 'Channel 1 (G)', 'Channel 2 (R/B)']
for i, ch in enumerate(channels):
print(f" {ch:20s}: min={stats['min'][i]:3d}, max={stats['max'][i]:3d}, mean={stats['mean'][i]:6.2f}, std={stats['std'][i]:6.2f}")
else:
# Grayscale
print(f" Grayscale: min={stats['min']}, max={stats['max']}, mean={stats['mean']:.2f}, std={stats['std']:.2f}")
print()
print("HEX PREVIEW (first 32 bytes):")
print("-" * 80)
print(f" {video_info['hex_preview']}")
print()
def print_summary(self):
"""Print statistics summary"""
print("\n" + "=" * 80)
print("SESSION SUMMARY")
print("=" * 80)
print(f"Total Packets: {self.total_packets}")
print(f"Total Bytes: {self.total_bytes:,}")
print(f"Average Packet Size: {self.total_bytes / max(self.total_packets, 1):.2f} bytes")
print()
print("Packet Size Distribution:")
for size in sorted(self.packet_sizes.keys()):
count = self.packet_sizes[size]
print(f" {size:6d} bytes: {count:4d} packets")
print()
def main():
print("=" * 80)
print("UDP PAYLOAD ANALYZER - Port 5000, 127.0.0.1")
print("Specialized for GStreamer Raw Video Analysis")
print("=" * 80)
print("Press Ctrl+C to stop and see summary\n")
analyzer = PayloadAnalyzer()
# Create UDP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
sock.bind(("127.0.0.1", 5000))
print(f"Listening on 127.0.0.1:5000...\n")
print("Waiting for packets...\n")
packet_count = 0
while True:
data, addr = sock.recvfrom(65535)
packet_count += 1
analyzer.print_report(packet_count, addr, data)
# Print summary every 100 packets
if packet_count % 100 == 0:
print(f"\n[Received {packet_count} packets so far... continuing capture]\n")
except KeyboardInterrupt:
print("\n\nCapture stopped by user.")
analyzer.print_summary()
except Exception as e:
print(f"\n[ERROR] {e}")
sys.exit(1)
finally:
sock.close()
if __name__ == "__main__":
main()

scripts/udp_sniffer_raw.py

@@ -0,0 +1,67 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.8"
# dependencies = []
# ///
"""
Simple UDP Receiver for port 5000 on 127.0.0.1
This uses raw sockets (built-in) - no external dependencies needed
Usage: uv run scripts/udp_sniffer_raw.py
Note: This RECEIVES UDP packets (not sniffing like pcap/scapy)
"""
import socket
import sys
from datetime import datetime
def main():
print("=" * 70)
print("UDP Receiver - Port 5000, 127.0.0.1")
print("=" * 70)
print("Press Ctrl+C to stop\n")
# Create UDP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
# Bind to localhost port 5000
sock.bind(("127.0.0.1", 5000))
print(f"Listening on 127.0.0.1:5000...\n")
packet_count = 0
while True:
# Receive data
data, addr = sock.recvfrom(65535) # Max UDP packet size
packet_count += 1
timestamp = datetime.now().strftime("%H:%M:%S.%f")[:-3]
print(f"[{timestamp}] Packet #{packet_count}")
print(f" From: {addr[0]}:{addr[1]}")
print(f" Size: {len(data)} bytes")
# Print first 64 bytes in hex
hex_str = ' '.join(f'{b:02x}' for b in data[:64])
print(f" Data: {hex_str}{'...' if len(data) > 64 else ''}")
# Try to decode as ASCII (for text data)
try:
text = data[:100].decode('ascii', errors='ignore').strip()
if text and text.isprintable():
print(f" Text: {text[:80]}{'...' if len(text) > 80 else ''}")
except:
pass
print()
except KeyboardInterrupt:
print(f"\n\nReceived {packet_count} packets. Stopped by user.")
except Exception as e:
print(f"\n[ERROR] {e}")
sys.exit(1)
finally:
sock.close()
if __name__ == "__main__":
main()

scripts/visualize_line_realtime.py

@@ -0,0 +1,206 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.8"
# dependencies = [
# "numpy>=1.24.0",
# "matplotlib>=3.7.0",
# ]
# ///
"""
Real-time Line Visualization for Camera Data
Displays RGB/BGR channel values across the line width in real-time
Usage: uv run visualize_line_realtime.py [--format BGR|RGB] [--port 5000]
"""
import socket
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import argparse
from collections import deque
# Parse arguments
parser = argparse.ArgumentParser(description='Real-time line channel visualization')
parser.add_argument('--format', type=str, default='BGR', choices=['BGR', 'RGB'],
help='Input format (default: BGR)')
parser.add_argument('--port', type=int, default=5000,
help='UDP port (default: 5000)')
parser.add_argument('--width', type=int, default=2456,
help='Line width in pixels (default: 2456)')
parser.add_argument('--fps-limit', type=int, default=30,
help='Maximum display fps (default: 30)')
args = parser.parse_args()
# Stream parameters
LINE_WIDTH = args.width
LINE_HEIGHT = 1
CHANNELS = 3
FRAME_SIZE = LINE_WIDTH * LINE_HEIGHT * CHANNELS
UDP_IP = "0.0.0.0"
UDP_PORT = args.port
# Create UDP socket with minimal buffer to avoid buffering old packets
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536) # Minimal buffer (64KB)
sock.setblocking(False) # Non-blocking for animation
sock.bind((UDP_IP, UDP_PORT))
print(f"Receiving {LINE_WIDTH}x{LINE_HEIGHT} {args.format} on UDP port {UDP_PORT}")
print(f"Display update rate: {args.fps_limit} fps max")
print("Close the plot window to exit")
# Initialize plot
fig, axes = plt.subplots(2, 1, figsize=(15, 8))
fig.suptitle(f'Real-time {args.format} Channel Visualization - Line Sensor',
fontsize=14, fontweight='bold')
# Channel order based on format
if args.format == 'BGR':
channel_names = ['Blue', 'Green', 'Red']
channel_colors = ['b', 'g', 'r']
channel_indices = [0, 1, 2] # BGR order
else: # RGB
channel_names = ['Red', 'Green', 'Blue']
channel_colors = ['r', 'g', 'b']
channel_indices = [0, 1, 2] # RGB order
# Initialize line data
x_data = np.arange(LINE_WIDTH)
y_data = [np.zeros(LINE_WIDTH) for _ in range(CHANNELS)]
y_grayscale = np.zeros(LINE_WIDTH) # Combined grayscale
# Top plot - GRAYSCALE ONLY
line_gray, = axes[0].plot(x_data, y_grayscale, 'k-', linewidth=1.0)
axes[0].set_xlim(0, LINE_WIDTH)
axes[0].set_ylim(0, 255)
axes[0].set_xlabel('Pixel Position')
axes[0].set_ylabel('Grayscale Value')
axes[0].set_title('Grayscale (Luminance-weighted)')
axes[0].grid(True, alpha=0.3)
# Bottom plot - RGB/BGR channels with color
lines_separate = []
for i in range(CHANNELS):
line, = axes[1].plot(x_data, y_data[i], channel_colors[i] + '-',
label=channel_names[i], alpha=0.7, linewidth=0.8)
lines_separate.append(line)
axes[1].set_xlim(0, LINE_WIDTH)
axes[1].set_ylim(0, 255)
axes[1].set_xlabel('Pixel Position')
axes[1].set_ylabel('Pixel Value')
axes[1].set_title(f'{args.format} Channels: {" | ".join(channel_names)}')
axes[1].legend(loc='upper right')
axes[1].grid(True, alpha=0.3)
# Statistics text
stats_text = axes[0].text(0.02, 0.98, '', transform=axes[0].transAxes,
verticalalignment='top', fontfamily='monospace',
fontsize=9, bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
# Frame counter
frame_count = [0]
last_update = [0]
fps_buffer = deque(maxlen=30)
# Animation update function
def update_plot(frame):
"""Update plot with new UDP data"""
import time
current_time = time.time()
# Rate limiting
if args.fps_limit > 0:
min_interval = 1.0 / args.fps_limit
if current_time - last_update[0] < min_interval:
return [line_gray] + lines_separate + [stats_text]
# Drain all buffered packets and only use the latest one
latest_data = None
packets_drained = 0
try:
# Read all available packets, keep only the last one
while True:
try:
data, addr = sock.recvfrom(65536)
if len(data) == FRAME_SIZE:
latest_data = data
packets_drained += 1
except BlockingIOError:
# No more packets available
break
# Only process if we got valid data
if latest_data is None:
return [line_gray] + lines_separate + [stats_text]
# Parse frame
line_data = np.frombuffer(latest_data, dtype=np.uint8).reshape((LINE_HEIGHT, LINE_WIDTH, CHANNELS))
# Extract channels based on format
for i in range(CHANNELS):
y_data[i] = line_data[0, :, channel_indices[i]]
# Calculate grayscale (luminance using standard weights for RGB)
# For BGR: weights are [0.114, 0.587, 0.299]
# For RGB: weights are [0.299, 0.587, 0.114]
if args.format == 'BGR':
y_grayscale = (0.114 * y_data[0] + 0.587 * y_data[1] + 0.299 * y_data[2])
else: # RGB
y_grayscale = (0.299 * y_data[0] + 0.587 * y_data[1] + 0.114 * y_data[2])
# Update top plot (grayscale only)
line_gray.set_ydata(y_grayscale)
# Update bottom plot (RGB/BGR channels)
for i, line in enumerate(lines_separate):
line.set_ydata(y_data[i])
# Calculate statistics
stats = []
for i in range(CHANNELS):
ch_data = y_data[i]
stats.append(f"{channel_names[i]:5s}: min={ch_data.min():3d} max={ch_data.max():3d} "
f"mean={ch_data.mean():6.2f} std={ch_data.std():6.2f}")
# Add grayscale stats
stats.append(f"Gray : min={y_grayscale.min():6.2f} max={y_grayscale.max():6.2f} "
f"mean={y_grayscale.mean():6.2f} std={y_grayscale.std():6.2f}")
# Calculate FPS
frame_count[0] += 1
if last_update[0] > 0:
fps = 1.0 / (current_time - last_update[0])
fps_buffer.append(fps)
avg_fps = np.mean(fps_buffer)
else:
avg_fps = 0
last_update[0] = current_time
# Update stats text
stats_str = f"Frame: {frame_count[0]} FPS: {avg_fps:.1f}\n" + "\n".join(stats)
stats_text.set_text(stats_str)
except BlockingIOError:
# No data available
pass
except Exception as e:
print(f"Error: {e}")
return [line_gray] + lines_separate + [stats_text]
# Set up animation with blit for better performance
ani = animation.FuncAnimation(fig, update_plot, interval=10, blit=True, cache_frame_data=False)
plt.tight_layout()
plt.show()
# Cleanup
sock.close()
print(f"\nReceived {frame_count[0]} frames total")