Compare commits

...

11 Commits

Author SHA1 Message Date
463228baf5 Enhance feature tracking to handle descriptor dimension mismatches
This commit updates the feature tracking logic to check for descriptor dimension mismatches when adding features to existing entries. If a mismatch is detected, existing features are replaced instead of concatenated, ensuring data integrity. Additionally, debug messages have been added to provide better insights during the feature extraction process, improving overall user experience.
2025-09-26 14:04:25 +02:00
e7571a78f4 Refactor feature extraction to handle SURF fallback and add features from regions
This commit updates the feature extraction process to provide a fallback to SIFT when SURF is not available, enhancing robustness. It introduces a new method to extract features from specific regions of a frame, allowing for the addition of features to existing ones. Feedback messages have been improved to reflect these changes, ensuring better user experience and clarity during feature extraction.
2025-09-26 14:00:01 +02:00
ea008ba23c Implement shift-right click and ctrl-right click for feature extraction 2025-09-26 13:57:00 +02:00
366c338c5d Finally reinvent the whole mapping procedure again! 2025-09-26 13:54:01 +02:00
0d26ffaca4 Refactor feature extraction to directly use rotated frame coordinates in VideoEditor
This commit updates the feature extraction process to store and utilize features in rotated frame coordinates, aligning with existing motion tracking methods. It simplifies the coordinate mapping logic by removing unnecessary transformations, enhancing the accuracy of feature tracking. Additionally, feedback messages have been refined to reflect the changes in feature extraction, improving user experience.
2025-09-26 13:43:42 +02:00
aaf78bf0da Refactor feature extraction to utilize transformed frames in VideoEditor
This commit enhances the feature extraction process by ensuring that features are extracted from the transformed frame that users see, accounting for all transformations such as cropping, zooming, and rotation. It simplifies the coordinate mapping logic, allowing for accurate feature tracking without the need for manual coordinate adjustments. Feedback messages have been updated to reflect the extraction from the visible area, improving user experience.
2025-09-26 13:40:10 +02:00
43d350fff2 Refactor feature extraction to support cropping in VideoEditor
This commit updates the feature extraction process to handle cropped regions of the original frame, allowing for more accurate feature tracking based on the visible area. It introduces logic to extract features from both cropped and full frames, ensuring that coordinates are correctly mapped back to the original frame space. Feedback messages have been enhanced to inform users about the success or failure of feature extraction, improving the overall user experience.
2025-09-26 13:37:41 +02:00
d1b9e7c470 Enhance feature extraction with coordinate mapping in VideoEditor
This commit modifies the feature extraction process to include a coordinate mapper, allowing for accurate mapping of features from transformed frames back to their original coordinates. It introduces new methods for handling coordinate transformations during cropping and rotation, ensuring that features are correctly stored and retrieved in the appropriate frame space. This improvement enhances the tracking accuracy and overall functionality of the VideoEditor.
2025-09-26 13:24:48 +02:00
c50234f5c1 Refactor feature extraction to use transformed frames in VideoEditor
This commit updates the feature extraction process to utilize the transformed frames that users see, rather than the original frames. It includes a new method for mapping coordinates from transformed frames back to original coordinates, ensuring accurate tracking. Additionally, feedback messages have been improved to reflect the success or failure of feature extraction based on the visible area.
2025-09-26 13:21:59 +02:00
171155e528 Change some fucking keys it invented 2025-09-26 13:13:53 +02:00
710a1f7de3 Maybe implement a feature tracker 2025-09-26 13:10:47 +02:00
2 changed files with 605 additions and 5 deletions
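
The diff below adds a self-contained FeatureTracker class and wires it into VideoEditor (auto-extraction on seek, region-based extraction and deletion, overlay drawing, state persistence). As rough orientation, this is how the new class can be driven on its own; the sketch is illustrative, not part of the commits, and assumes the class definition from the diff is in scope:

import numpy as np

# Hedged usage sketch of the FeatureTracker added below.
tracker = FeatureTracker()          # defaults to SIFT with up to 1000 features
tracker.tracking_enabled = True

# Any BGR or grayscale frame works; a synthetic grid stands in for a video frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[::40, :] = 255
frame[:, ::40] = 255

if tracker.extract_features(frame, frame_number=0):
    print(tracker.get_feature_count(0), "features on frame 0")
    print("centroid used for tracking:", tracker.get_tracking_position(0))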

View File

@@ -4,7 +4,7 @@ import cv2
import argparse
import numpy as np
from pathlib import Path
-from typing import List
+from typing import List, Optional, Tuple, Dict, Any
import time
import re
import threading

@@ -25,6 +25,236 @@ def load_image_utf8(image_path):
    except Exception as e:
        raise ValueError(f"Could not load image file: {image_path} - {e}")

class FeatureTracker:
    """Semi-automatic feature tracking with SIFT/SURF/ORB support and full state serialization"""

    def __init__(self):
        # Feature detection parameters
        self.detector_type = 'SIFT'  # 'SIFT', 'SURF', 'ORB'
        self.max_features = 1000
        self.match_threshold = 0.7
        # Tracking state
        self.features = {}  # {frame_number: {'keypoints': [...], 'descriptors': [...], 'positions': [...]}}
        self.tracking_enabled = False
        self.auto_tracking = False
        # Initialize detectors
        self._init_detectors()

    def _init_detectors(self):
        """Initialize feature detectors based on type"""
        try:
            if self.detector_type == 'SIFT':
                self.detector = cv2.SIFT_create(nfeatures=self.max_features)
            elif self.detector_type == 'SURF':
                # SURF requires opencv-contrib-python, fallback to SIFT
                print("Warning: SURF requires opencv-contrib-python package. Using SIFT instead.")
                self.detector = cv2.SIFT_create(nfeatures=self.max_features)
                self.detector_type = 'SIFT'
            elif self.detector_type == 'ORB':
                self.detector = cv2.ORB_create(nfeatures=self.max_features)
            else:
                raise ValueError(f"Unknown detector type: {self.detector_type}")
        except Exception as e:
            print(f"Warning: Could not initialize {self.detector_type} detector: {e}")
            # Fallback to ORB
            self.detector_type = 'ORB'
            self.detector = cv2.ORB_create(nfeatures=self.max_features)

    def set_detector_type(self, detector_type: str):
        """Change detector type and reinitialize"""
        if detector_type in ['SIFT', 'SURF', 'ORB']:
            self.detector_type = detector_type
            self._init_detectors()
            print(f"Switched to {detector_type} detector")
        else:
            print(f"Invalid detector type: {detector_type}")

    def extract_features(self, frame: np.ndarray, frame_number: int, coord_mapper=None) -> bool:
        """Extract features from a frame and store them"""
        try:
            # Convert to grayscale if needed
            if len(frame.shape) == 3:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            else:
                gray = frame
            # Extract keypoints and descriptors
            keypoints, descriptors = self.detector.detectAndCompute(gray, None)
            if keypoints is None or descriptors is None:
                return False
            # Map coordinates back to original frame space if mapper provided
            if coord_mapper:
                mapped_positions = []
                for kp in keypoints:
                    orig_x, orig_y = coord_mapper(kp.pt[0], kp.pt[1])
                    mapped_positions.append((int(orig_x), int(orig_y)))
            else:
                mapped_positions = [(int(kp.pt[0]), int(kp.pt[1])) for kp in keypoints]
            # Store features
            self.features[frame_number] = {
                'keypoints': keypoints,
                'descriptors': descriptors,
                'positions': mapped_positions
            }
            print(f"Extracted {len(keypoints)} features from frame {frame_number}")
            return True
        except Exception as e:
            print(f"Error extracting features from frame {frame_number}: {e}")
            return False

    def extract_features_from_region(self, frame: np.ndarray, frame_number: int, coord_mapper=None) -> bool:
        """Extract features from a frame and ADD them to existing features"""
        try:
            # Convert to grayscale if needed
            if len(frame.shape) == 3:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            else:
                gray = frame
            # Extract keypoints and descriptors
            keypoints, descriptors = self.detector.detectAndCompute(gray, None)
            if keypoints is None or descriptors is None:
                return False
            # Map coordinates back to original frame space if mapper provided
            if coord_mapper:
                mapped_positions = []
                for kp in keypoints:
                    orig_x, orig_y = coord_mapper(kp.pt[0], kp.pt[1])
                    mapped_positions.append((int(orig_x), int(orig_y)))
            else:
                mapped_positions = [(int(kp.pt[0]), int(kp.pt[1])) for kp in keypoints]
            # Add to existing features or create new entry
            if frame_number in self.features:
                # Check if descriptor dimensions match
                existing_features = self.features[frame_number]
                if existing_features['descriptors'].shape[1] != descriptors.shape[1]:
                    print(f"Warning: Descriptor dimension mismatch ({existing_features['descriptors'].shape[1]} vs {descriptors.shape[1]}). Cannot concatenate. Replacing features.")
                    # Replace instead of concatenate when dimensions don't match
                    existing_features['keypoints'] = keypoints
                    existing_features['descriptors'] = descriptors
                    existing_features['positions'] = mapped_positions
                else:
                    # Append to existing features
                    existing_features['keypoints'] = np.concatenate([existing_features['keypoints'], keypoints])
                    existing_features['descriptors'] = np.concatenate([existing_features['descriptors'], descriptors])
                    existing_features['positions'].extend(mapped_positions)
                print(f"Added {len(keypoints)} features to frame {frame_number} (total: {len(existing_features['positions'])})")
            else:
                # Create new features entry
                self.features[frame_number] = {
                    'keypoints': keypoints,
                    'descriptors': descriptors,
                    'positions': mapped_positions
                }
                print(f"Extracted {len(keypoints)} features from frame {frame_number}")
            return True
        except Exception as e:
            print(f"Error extracting features from frame {frame_number}: {e}")
            return False

    def get_tracking_position(self, frame_number: int) -> Optional[Tuple[float, float]]:
        """Get the average tracking position for a frame"""
        if frame_number not in self.features or not self.features[frame_number]['positions']:
            return None
        positions = self.features[frame_number]['positions']
        if not positions:
            return None
        avg_x = sum(pos[0] for pos in positions) / len(positions)
        avg_y = sum(pos[1] for pos in positions) / len(positions)
        return (avg_x, avg_y)

    def clear_features(self):
        """Clear all stored features"""
        self.features.clear()
        print("All features cleared")

    def get_feature_count(self, frame_number: int) -> int:
        """Get number of features for a frame"""
        if frame_number in self.features:
            return len(self.features[frame_number]['positions'])
        return 0

    def serialize_features(self) -> Dict[str, Any]:
        """Serialize features for state saving"""
        serialized = {}
        for frame_num, frame_data in self.features.items():
            frame_key = str(frame_num)
            serialized[frame_key] = {
                'positions': frame_data['positions'],
                'keypoints': None,   # Keypoints are not serialized (too large)
                'descriptors': None  # Descriptors are not serialized (too large)
            }
        return serialized

    def deserialize_features(self, serialized_data: Dict[str, Any]):
        """Deserialize features from state loading"""
        self.features.clear()
        for frame_key, frame_data in serialized_data.items():
            frame_num = int(frame_key)
            self.features[frame_num] = {
                'positions': frame_data['positions'],
                'keypoints': None,
                'descriptors': None
            }
        print(f"Deserialized features for {len(self.features)} frames")

    def get_state_dict(self) -> Dict[str, Any]:
        """Get complete state for serialization"""
        return {
            'detector_type': self.detector_type,
            'max_features': self.max_features,
            'match_threshold': self.match_threshold,
            'tracking_enabled': self.tracking_enabled,
            'auto_tracking': self.auto_tracking,
            'features': self.serialize_features()
        }

    def load_state_dict(self, state_dict: Dict[str, Any]):
        """Load complete state from serialization"""
        if 'detector_type' in state_dict:
            self.detector_type = state_dict['detector_type']
            self._init_detectors()
        if 'max_features' in state_dict:
            self.max_features = state_dict['max_features']
        if 'match_threshold' in state_dict:
            self.match_threshold = state_dict['match_threshold']
        if 'tracking_enabled' in state_dict:
            self.tracking_enabled = state_dict['tracking_enabled']
        if 'auto_tracking' in state_dict:
            self.auto_tracking = state_dict['auto_tracking']
        if 'features' in state_dict:
            self.deserialize_features(state_dict['features'])
        print("Feature tracker state loaded")

class Cv2BufferedCap:
    """Buffered wrapper around cv2.VideoCapture that handles frame loading, seeking, and caching correctly"""
@@ -595,6 +825,15 @@
        self.tracking_points = {}  # {frame_number: [(x, y), ...]} in original frame coords
        self.tracking_enabled = False

        # Feature tracking system
        self.feature_tracker = FeatureTracker()

        # Initialize selective feature extraction/deletion
        self.selective_feature_extraction_start = None
        self.selective_feature_extraction_rect = None
        self.selective_feature_deletion_start = None
        self.selective_feature_deletion_rect = None

        # Project view mode
        self.project_view_mode = False
        self.project_view = None
@@ -639,7 +878,8 @@ class VideoEditor:
                'seek_multiplier': getattr(self, 'seek_multiplier', 1.0),
                'is_playing': getattr(self, 'is_playing', False),
                'tracking_enabled': self.tracking_enabled,
-               'tracking_points': {str(k): v for k, v in self.tracking_points.items()}
+               'tracking_points': {str(k): v for k, v in self.tracking_points.items()},
+               'feature_tracker': self.feature_tracker.get_state_dict()
            }

            with open(state_file, 'w') as f:
@@ -721,6 +961,11 @@ class VideoEditor:
            if 'tracking_points' in state and isinstance(state['tracking_points'], dict):
                self.tracking_points = {int(k): v for k, v in state['tracking_points'].items()}
                print(f"Loaded tracking_points: {sum(len(v) for v in self.tracking_points.values())} points")

            # Load feature tracker state
            if 'feature_tracker' in state:
                self.feature_tracker.load_state_dict(state['feature_tracker'])
                print(f"Loaded feature tracker state")

            # Validate cut markers against current video length
            if self.cut_start_frame is not None and self.cut_start_frame >= self.total_frames:
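
Note that the persistence wired in above only survives partially: serialize_features keeps the mapped positions but stores None for keypoints and descriptors, so matching data has to be re-extracted after a reload. A hedged sketch of the round trip (the file name is illustrative, not the editor's actual state file):

import json

tracker = FeatureTracker()
tracker.features[42] = {'keypoints': None, 'descriptors': None,
                        'positions': [(100, 120), (104, 118)]}

with open("feature_state_demo.json", "w") as f:       # illustrative file name
    json.dump({'feature_tracker': tracker.get_state_dict()}, f)

restored = FeatureTracker()
with open("feature_state_demo.json") as f:
    restored.load_state_dict(json.load(f)['feature_tracker'])

print(restored.get_feature_count(42))       # 2 positions survive the round trip
print(restored.get_tracking_position(42))   # (102.0, 119.0); descriptors are gone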
@@ -1054,6 +1299,42 @@ class VideoEditor:
"""Seek to specific frame""" """Seek to specific frame"""
self.current_frame = max(0, min(frame_number, self.total_frames - 1)) self.current_frame = max(0, min(frame_number, self.total_frames - 1))
self.load_current_frame() self.load_current_frame()
# Auto-extract features if feature tracking is enabled and auto-tracking is on
print(f"DEBUG: seek_to_frame {frame_number}: is_image_mode={self.is_image_mode}, tracking_enabled={self.feature_tracker.tracking_enabled}, auto_tracking={self.feature_tracker.auto_tracking}, display_frame={self.current_display_frame is not None}")
if (not self.is_image_mode and
self.feature_tracker.tracking_enabled and
self.feature_tracker.auto_tracking and
self.current_display_frame is not None):
print(f"DEBUG: Auto-tracking conditions met for frame {self.current_frame}")
# Only extract if we don't already have features for this frame
if self.current_frame not in self.feature_tracker.features:
print(f"DEBUG: Extracting features for frame {self.current_frame}")
# Extract features from the transformed frame (what user sees)
# This handles all transformations (crop, zoom, rotation) correctly
display_frame = self.apply_crop_zoom_and_rotation(self.current_display_frame)
if display_frame is not None:
# Map coordinates from transformed frame to rotated frame coordinates
# Use the existing coordinate transformation system
def coord_mapper(x, y):
# Map from transformed frame coordinates to screen coordinates
frame_height, frame_width = display_frame.shape[:2]
available_height = self.window_height - (0 if self.is_image_mode else self.TIMELINE_HEIGHT)
start_y = (available_height - frame_height) // 2
start_x = (self.window_width - frame_width) // 2
# Convert to screen coordinates
screen_x = x + start_x
screen_y = y + start_y
# Use the existing coordinate transformation system
return self._map_screen_to_rotated(screen_x, screen_y)
self.feature_tracker.extract_features(display_frame, self.current_frame, coord_mapper)
else:
print(f"DEBUG: Frame {self.current_frame} already has features, skipping")
def jump_to_previous_marker(self): def jump_to_previous_marker(self):
"""Jump to the previous tracking marker (frame with tracking points).""" """Jump to the previous tracking marker (frame with tracking points)."""
@@ -1268,8 +1549,18 @@ class VideoEditor:
            h = min(h, rot_h - y)
        return (x, y, w, h)

    def _get_interpolated_tracking_position(self, frame_number):
        """Linear interpolation in ROTATED frame coords. Returns (rx, ry) or None."""
        # First try feature tracking if enabled
        if self.feature_tracker.tracking_enabled:
            feature_pos = self.feature_tracker.get_tracking_position(frame_number)
            if feature_pos:
                # Features are stored in rotated frame coordinates (like existing motion tracking)
                # We can use them directly for the tracking system
                return (feature_pos[0], feature_pos[1])

        # Fall back to manual tracking points
        if not self.tracking_points:
            return None
        frames = sorted(self.tracking_points.keys())
@@ -1390,6 +1681,110 @@ class VideoEditor:
        self.cached_transformed_frame = None
        self.cached_frame_number = None
        self.cached_transform_hash = None

    def _extract_features_from_region(self, screen_rect):
        """Extract features from a specific screen region"""
        x, y, w, h = screen_rect
        print(f"DEBUG: Extracting features from region ({x}, {y}, {w}, {h})")

        # Map screen coordinates to rotated frame coordinates
        rx1, ry1 = self._map_screen_to_rotated(x, y)
        rx2, ry2 = self._map_screen_to_rotated(x + w, y + h)

        # Get the region in rotated frame coordinates
        left_r = min(rx1, rx2)
        top_r = min(ry1, ry2)
        right_r = max(rx1, rx2)
        bottom_r = max(ry1, ry2)

        # Extract features from this region of the original frame
        if self.rotation_angle in (90, 270):
            # For rotated frames, we need to map back to original frame coordinates
            if self.rotation_angle == 90:
                orig_x = top_r
                orig_y = self.frame_height - right_r
                orig_w = bottom_r - top_r
                orig_h = right_r - left_r
            else:  # 270
                orig_x = self.frame_width - bottom_r
                orig_y = left_r
                orig_w = bottom_r - top_r
                orig_h = right_r - left_r
        else:
            orig_x, orig_y = left_r, top_r
            orig_w, orig_h = right_r - left_r, bottom_r - top_r

        # Extract features from this region
        if (orig_x >= 0 and orig_y >= 0 and
                orig_x + orig_w <= self.frame_width and
                orig_y + orig_h <= self.frame_height):
            region_frame = self.current_display_frame[orig_y:orig_y+orig_h, orig_x:orig_x+orig_w]
            if region_frame.size > 0:
                # Map coordinates from region to rotated frame coordinates
                def coord_mapper(px, py):
                    # Map from region coordinates to rotated frame coordinates
                    if self.rotation_angle == 90:
                        rot_x = orig_x + py
                        rot_y = self.frame_height - (orig_y + px)
                    elif self.rotation_angle == 270:
                        rot_x = self.frame_width - (orig_y + py)
                        rot_y = orig_x + px
                    else:
                        rot_x = orig_x + px
                        rot_y = orig_y + py
                    return (int(rot_x), int(rot_y))

                # Extract features and add them to existing features
                success = self.feature_tracker.extract_features_from_region(region_frame, self.current_frame, coord_mapper)
                if success:
                    count = self.feature_tracker.get_feature_count(self.current_frame)
                    self.show_feedback_message(f"Added features from selected region (total: {count})")
                else:
                    self.show_feedback_message("Failed to extract features from region")
            else:
                self.show_feedback_message("Region too small")
        else:
            self.show_feedback_message("Region outside frame bounds")

    def _delete_features_from_region(self, screen_rect):
        """Delete features from a specific screen region"""
        x, y, w, h = screen_rect
        print(f"DEBUG: Deleting features from region ({x}, {y}, {w}, {h})")

        if self.current_frame not in self.feature_tracker.features:
            self.show_feedback_message("No features to delete")
            return

        # Map screen coordinates to rotated frame coordinates
        rx1, ry1 = self._map_screen_to_rotated(x, y)
        rx2, ry2 = self._map_screen_to_rotated(x + w, y + h)

        # Get the region in rotated frame coordinates
        left_r = min(rx1, rx2)
        top_r = min(ry1, ry2)
        right_r = max(rx1, rx2)
        bottom_r = max(ry1, ry2)

        # Remove features within this region
        features = self.feature_tracker.features[self.current_frame]
        original_count = len(features['positions'])

        # Filter out features within the region
        filtered_positions = []
        for fx, fy in features['positions']:
            if not (left_r <= fx <= right_r and top_r <= fy <= bottom_r):
                filtered_positions.append((fx, fy))

        # Update the features
        features['positions'] = filtered_positions
        removed_count = original_count - len(filtered_positions)

        if removed_count > 0:
            self.show_feedback_message(f"Removed {removed_count} features from selected region")
            self.save_state()
        else:
            self.show_feedback_message("No features found in selected region")

    def apply_rotation(self, frame):
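
The 90/270 branches in _extract_features_from_region undo the editor's rotation before slicing the original frame. As a reference point only (the editor's sign convention depends on apply_rotation, so the exact formulas above may differ), OpenCV's clockwise 90-degree rotation of a W x H frame sends original pixel (x, y) to (H - 1 - y, x):

import cv2
import numpy as np

# Sanity check of the generic 90-degree mapping; illustrative, not the editor's code.
H, W = 480, 640
frame = np.zeros((H, W), dtype=np.uint8)
x, y = 100, 30
frame[y, x] = 255

rotated = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
ry, rx = np.argwhere(rotated == 255)[0]   # (row, col) of the marked pixel
assert (rx, ry) == (H - 1 - y, x)
print("original", (x, y), "-> rotated", (int(rx), int(ry)))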
@@ -1940,13 +2335,19 @@ class VideoEditor:
        motion_text = (
            f" | Motion: {self.tracking_enabled}" if self.tracking_enabled else ""
        )
        feature_text = (
            f" | Features: {self.feature_tracker.tracking_enabled}" if self.feature_tracker.tracking_enabled else ""
        )
        if self.feature_tracker.tracking_enabled and self.current_frame in self.feature_tracker.features:
            feature_count = self.feature_tracker.get_feature_count(self.current_frame)
            feature_text = f" | Features: {feature_count} pts"
        autorepeat_text = (
            f" | Loop: ON" if self.looping_between_markers else ""
        )
        if self.is_image_mode:
-           info_text = f"Image | Zoom: {self.zoom_factor:.1f}x{rotation_text}{brightness_text}{contrast_text}{motion_text}"
+           info_text = f"Image | Zoom: {self.zoom_factor:.1f}x{rotation_text}{brightness_text}{contrast_text}{motion_text}{feature_text}"
        else:
-           info_text = f"Frame: {self.current_frame}/{self.total_frames} | Speed: {self.playback_speed:.1f}x | Zoom: {self.zoom_factor:.1f}x{seek_multiplier_text}{rotation_text}{brightness_text}{contrast_text}{motion_text}{autorepeat_text} | {'Playing' if self.is_playing else 'Paused'}"
+           info_text = f"Frame: {self.current_frame}/{self.total_frames} | Speed: {self.playback_speed:.1f}x | Zoom: {self.zoom_factor:.1f}x{seek_multiplier_text}{rotation_text}{brightness_text}{contrast_text}{motion_text}{feature_text}{autorepeat_text} | {'Playing' if self.is_playing else 'Paused'}"
        cv2.putText(
            canvas,
            info_text,
@@ -2039,6 +2440,27 @@ class VideoEditor:
                cv2.circle(canvas, (sx, sy), 6, (255, 0, 0), -1)
                cv2.circle(canvas, (sx, sy), 6, (255, 255, 255), 1)

        # Draw feature tracking points (green circles)
        if (not self.is_image_mode and
                self.feature_tracker.tracking_enabled and
                self.current_frame in self.feature_tracker.features):
            feature_positions = self.feature_tracker.features[self.current_frame]['positions']
            for (fx, fy) in feature_positions:
                # Features are stored in rotated frame coordinates (like existing motion tracking)
                # Use the existing coordinate transformation system
                sx, sy = self._map_rotated_to_screen(fx, fy)
                cv2.circle(canvas, (sx, sy), 4, (0, 255, 0), -1)  # Green circles for features
                cv2.circle(canvas, (sx, sy), 4, (255, 255, 255), 1)

        # Draw selection rectangles for feature extraction/deletion
        if self.selective_feature_extraction_rect:
            x, y, w, h = self.selective_feature_extraction_rect
            cv2.rectangle(canvas, (x, y), (x + w, y + h), (0, 255, 255), 2)  # Yellow for extraction
        if self.selective_feature_deletion_rect:
            x, y, w, h = self.selective_feature_deletion_rect
            cv2.rectangle(canvas, (x, y), (x + w, y + h), (0, 0, 255), 2)  # Red for deletion

        # Draw previous and next tracking points with motion path visualization
        if not self.is_image_mode and self.tracking_points:
            prev_result = self._get_previous_tracking_point()
@@ -2176,6 +2598,50 @@ class VideoEditor:
        if flags & cv2.EVENT_FLAG_CTRLKEY and event == cv2.EVENT_LBUTTONDOWN:
            self.zoom_center = (x, y)

        # Handle Shift+Right-click+drag for selective feature extraction
        if event == cv2.EVENT_RBUTTONDOWN and (flags & cv2.EVENT_FLAG_SHIFTKEY):
            if not self.is_image_mode:
                # Enable feature tracking if not already enabled
                if not self.feature_tracker.tracking_enabled:
                    self.feature_tracker.tracking_enabled = True
                    self.show_feedback_message("Feature tracking enabled")
                self.selective_feature_extraction_start = (x, y)
                self.selective_feature_extraction_rect = None
                print(f"DEBUG: Started selective feature extraction at ({x}, {y})")

        # Handle Shift+Right-click+drag for selective feature extraction
        if event == cv2.EVENT_MOUSEMOVE and (flags & cv2.EVENT_FLAG_SHIFTKEY) and self.selective_feature_extraction_start:
            if not self.is_image_mode:
                start_x, start_y = self.selective_feature_extraction_start
                self.selective_feature_extraction_rect = (min(start_x, x), min(start_y, y), abs(x - start_x), abs(y - start_y))

        # Handle Shift+Right-click release for selective feature extraction
        if event == cv2.EVENT_RBUTTONUP and (flags & cv2.EVENT_FLAG_SHIFTKEY) and self.selective_feature_extraction_start:
            if not self.is_image_mode and self.selective_feature_extraction_rect:
                self._extract_features_from_region(self.selective_feature_extraction_rect)
            self.selective_feature_extraction_start = None
            self.selective_feature_extraction_rect = None

        # Handle Ctrl+Right-click+drag for selective feature deletion
        if event == cv2.EVENT_RBUTTONDOWN and (flags & cv2.EVENT_FLAG_CTRLKEY):
            if not self.is_image_mode and self.feature_tracker.tracking_enabled:
                self.selective_feature_deletion_start = (x, y)
                self.selective_feature_deletion_rect = None
                print(f"DEBUG: Started selective feature deletion at ({x}, {y})")

        # Handle Ctrl+Right-click+drag for selective feature deletion
        if event == cv2.EVENT_MOUSEMOVE and (flags & cv2.EVENT_FLAG_CTRLKEY) and self.selective_feature_deletion_start:
            if not self.is_image_mode:
                start_x, start_y = self.selective_feature_deletion_start
                self.selective_feature_deletion_rect = (min(start_x, x), min(start_y, y), abs(x - start_x), abs(y - start_y))

        # Handle Ctrl+Right-click release for selective feature deletion
        if event == cv2.EVENT_RBUTTONUP and (flags & cv2.EVENT_FLAG_CTRLKEY) and self.selective_feature_deletion_start:
            if not self.is_image_mode and self.feature_tracker.tracking_enabled and self.selective_feature_deletion_rect:
                self._delete_features_from_region(self.selective_feature_deletion_rect)
            self.selective_feature_deletion_start = None
            self.selective_feature_deletion_rect = None

        # Handle right-click for tracking points (no modifiers)
        if event == cv2.EVENT_RBUTTONDOWN and not (flags & (cv2.EVENT_FLAG_CTRLKEY | cv2.EVENT_FLAG_SHIFTKEY)):
            if not self.is_image_mode:
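
The modifier handling above relies on OpenCV encoding Shift/Ctrl state in the flags bitmask passed to the mouse callback. A minimal standalone sketch of the same test (the window name is arbitrary, not taken from the editor):

import cv2

def on_mouse(event, x, y, flags, param):
    # Same pattern as the handler above: right button plus Shift modifier.
    if event == cv2.EVENT_RBUTTONDOWN and flags & cv2.EVENT_FLAG_SHIFTKEY:
        print("Shift+right-click at", (x, y))

cv2.namedWindow("demo")
cv2.setMouseCallback("demo", on_mouse)
cv2.waitKey(0)  # callbacks are delivered while the GUI event loop runs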
@@ -2405,6 +2871,9 @@ class VideoEditor:
        self.tracking_enabled = False
        self.tracking_points = {}

        # Reset feature tracking
        self.feature_tracker.clear_features()

        # Reset cut markers
        self.cut_start_frame = None
        self.cut_end_frame = None
@@ -3013,9 +3482,16 @@ class VideoEditor:
print(" p: Toggle project view") print(" p: Toggle project view")
print(" 1: Set cut start point") print(" 1: Set cut start point")
print(" 2: Set cut end point") print(" 2: Set cut end point")
print(" T: Toggle loop between markers") print(" t: Toggle loop between markers")
print(" ,: Jump to previous marker") print(" ,: Jump to previous marker")
print(" .: Jump to next marker") print(" .: Jump to next marker")
print(" F: Toggle feature tracking")
print(" Shift+T: Extract features from current frame")
print(" g: Toggle auto feature extraction")
print(" G: Clear all feature data")
print(" H: Switch detector (SIFT/ORB)")
print(" Shift+Right-click+drag: Extract features from selected region")
print(" Ctrl+Right-click+drag: Delete features from selected region")
if len(self.video_files) > 1: if len(self.video_files) > 1:
print(" N: Next video") print(" N: Next video")
print(" n: Previous video") print(" n: Previous video")
@@ -3273,6 +3749,73 @@ class VideoEditor:
                self.tracking_points = {}
                self.show_feedback_message("Tracking points cleared")
                self.save_state()
            elif key == ord("F"):
                # Toggle feature tracking on/off
                self.feature_tracker.tracking_enabled = not self.feature_tracker.tracking_enabled
                self.show_feedback_message(f"Feature tracking {'ON' if self.feature_tracker.tracking_enabled else 'OFF'}")
                self.save_state()
            elif key == ord("T"):
                # Extract features from current frame (Shift+T)
                if not self.is_image_mode and self.current_display_frame is not None:
                    # Extract features from the transformed frame (what user sees)
                    # This handles all transformations (crop, zoom, rotation) correctly
                    display_frame = self.apply_crop_zoom_and_rotation(self.current_display_frame)
                    if display_frame is not None:
                        # Map coordinates from transformed frame to rotated frame coordinates
                        # Use the existing coordinate transformation system
                        def coord_mapper(x, y):
                            # The transformed frame coordinates are in the display frame space
                            # We need to map them to screen coordinates first, then use the existing
                            # _map_screen_to_rotated function
                            # Map from transformed frame coordinates to screen coordinates
                            # The transformed frame is centered on the canvas
                            frame_height, frame_width = display_frame.shape[:2]
                            available_height = self.window_height - (0 if self.is_image_mode else self.TIMELINE_HEIGHT)
                            start_y = (available_height - frame_height) // 2
                            start_x = (self.window_width - frame_width) // 2
                            # Convert to screen coordinates
                            screen_x = x + start_x
                            screen_y = y + start_y
                            # Use the existing coordinate transformation system
                            return self._map_screen_to_rotated(screen_x, screen_y)

                        success = self.feature_tracker.extract_features(display_frame, self.current_frame, coord_mapper)
                        if success:
                            count = self.feature_tracker.get_feature_count(self.current_frame)
                            self.show_feedback_message(f"Extracted {count} features from visible area")
                        else:
                            self.show_feedback_message("Failed to extract features")
                    else:
                        self.show_feedback_message("No display frame available")
                    self.save_state()
                else:
                    self.show_feedback_message("No frame data available")
            elif key == ord("g"):
                # Toggle auto tracking
                self.feature_tracker.auto_tracking = not self.feature_tracker.auto_tracking
                print(f"DEBUG: Auto tracking toggled to {self.feature_tracker.auto_tracking}")
                self.show_feedback_message(f"Auto tracking {'ON' if self.feature_tracker.auto_tracking else 'OFF'}")
                self.save_state()
            elif key == ord("G"):
                # Clear all feature tracking data
                self.feature_tracker.clear_features()
                self.show_feedback_message("Feature tracking data cleared")
                self.save_state()
            elif key == ord("H"):
                # Switch detector type (SIFT -> ORB -> SIFT) - SURF not available
                current_type = self.feature_tracker.detector_type
                if current_type == 'SIFT':
                    new_type = 'ORB'
                elif current_type == 'ORB':
                    new_type = 'SIFT'
                else:
                    new_type = 'SIFT'
                self.feature_tracker.set_detector_type(new_type)
                self.show_feedback_message(f"Detector switched to {new_type}")
                self.save_state()
            elif key == ord("t"):
                # Marker looping only for videos
                if not self.is_image_mode:
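
A side note on the new bindings: the handler distinguishes t from Shift+T purely through the character code returned by cv2.waitKey, which yields ord("t") for a plain press and ord("T") when Shift is held. Minimal sketch (assumes an OpenCV window is open so key events are delivered):

import cv2

key = cv2.waitKey(0) & 0xFF   # mask to the low byte, as is conventional
if key == ord("T"):
    print("Shift+T -> extract features from the visible area")
elif key == ord("t"):
    print("t -> toggle marker looping")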

uv.lock (generated, 57 changed lines)
View File

@@ -15,12 +15,14 @@ source = { virtual = "croppa" }
dependencies = [
    { name = "numpy" },
    { name = "opencv-python" },
    { name = "pillow" },
]

[package.metadata]
requires-dist = [
    { name = "numpy", specifier = ">=1.24.0" },
    { name = "opencv-python", specifier = ">=4.8.0" },
    { name = "pillow", specifier = ">=10.0.0" },
]

[[package]]

@@ -85,6 +87,61 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/fa/80/eb88edc2e2b11cd2dd2e56f1c80b5784d11d6e6b7f04a1145df64df40065/opencv_python-4.12.0.88-cp37-abi3-win_amd64.whl", hash = "sha256:d98edb20aa932fd8ebd276a72627dad9dc097695b3d435a4257557bbb49a79d2", size = 39000307, upload-time = "2025-07-07T09:14:16.641Z" },
]
[[package]]
name = "pillow"
version = "11.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f3/0d/d0d6dea55cd152ce3d6767bb38a8fc10e33796ba4ba210cbab9354b6d238/pillow-11.3.0.tar.gz", hash = "sha256:3828ee7586cd0b2091b6209e5ad53e20d0649bbe87164a459d0676e035e8f523", size = 47113069, upload-time = "2025-07-01T09:16:30.666Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1e/93/0952f2ed8db3a5a4c7a11f91965d6184ebc8cd7cbb7941a260d5f018cd2d/pillow-11.3.0-cp313-cp313-ios_13_0_arm64_iphoneos.whl", hash = "sha256:1c627742b539bba4309df89171356fcb3cc5a9178355b2727d1b74a6cf155fbd", size = 2128328, upload-time = "2025-07-01T09:14:35.276Z" },
{ url = "https://files.pythonhosted.org/packages/4b/e8/100c3d114b1a0bf4042f27e0f87d2f25e857e838034e98ca98fe7b8c0a9c/pillow-11.3.0-cp313-cp313-ios_13_0_arm64_iphonesimulator.whl", hash = "sha256:30b7c02f3899d10f13d7a48163c8969e4e653f8b43416d23d13d1bbfdc93b9f8", size = 2170652, upload-time = "2025-07-01T09:14:37.203Z" },
{ url = "https://files.pythonhosted.org/packages/aa/86/3f758a28a6e381758545f7cdb4942e1cb79abd271bea932998fc0db93cb6/pillow-11.3.0-cp313-cp313-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:7859a4cc7c9295f5838015d8cc0a9c215b77e43d07a25e460f35cf516df8626f", size = 2227443, upload-time = "2025-07-01T09:14:39.344Z" },
{ url = "https://files.pythonhosted.org/packages/01/f4/91d5b3ffa718df2f53b0dc109877993e511f4fd055d7e9508682e8aba092/pillow-11.3.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ec1ee50470b0d050984394423d96325b744d55c701a439d2bd66089bff963d3c", size = 5278474, upload-time = "2025-07-01T09:14:41.843Z" },
{ url = "https://files.pythonhosted.org/packages/f9/0e/37d7d3eca6c879fbd9dba21268427dffda1ab00d4eb05b32923d4fbe3b12/pillow-11.3.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7db51d222548ccfd274e4572fdbf3e810a5e66b00608862f947b163e613b67dd", size = 4686038, upload-time = "2025-07-01T09:14:44.008Z" },
{ url = "https://files.pythonhosted.org/packages/ff/b0/3426e5c7f6565e752d81221af9d3676fdbb4f352317ceafd42899aaf5d8a/pillow-11.3.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:2d6fcc902a24ac74495df63faad1884282239265c6839a0a6416d33faedfae7e", size = 5864407, upload-time = "2025-07-03T13:10:15.628Z" },
{ url = "https://files.pythonhosted.org/packages/fc/c1/c6c423134229f2a221ee53f838d4be9d82bab86f7e2f8e75e47b6bf6cd77/pillow-11.3.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f0f5d8f4a08090c6d6d578351a2b91acf519a54986c055af27e7a93feae6d3f1", size = 7639094, upload-time = "2025-07-03T13:10:21.857Z" },
{ url = "https://files.pythonhosted.org/packages/ba/c9/09e6746630fe6372c67c648ff9deae52a2bc20897d51fa293571977ceb5d/pillow-11.3.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c37d8ba9411d6003bba9e518db0db0c58a680ab9fe5179f040b0463644bc9805", size = 5973503, upload-time = "2025-07-01T09:14:45.698Z" },
{ url = "https://files.pythonhosted.org/packages/d5/1c/a2a29649c0b1983d3ef57ee87a66487fdeb45132df66ab30dd37f7dbe162/pillow-11.3.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:13f87d581e71d9189ab21fe0efb5a23e9f28552d5be6979e84001d3b8505abe8", size = 6642574, upload-time = "2025-07-01T09:14:47.415Z" },
{ url = "https://files.pythonhosted.org/packages/36/de/d5cc31cc4b055b6c6fd990e3e7f0f8aaf36229a2698501bcb0cdf67c7146/pillow-11.3.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:023f6d2d11784a465f09fd09a34b150ea4672e85fb3d05931d89f373ab14abb2", size = 6084060, upload-time = "2025-07-01T09:14:49.636Z" },
{ url = "https://files.pythonhosted.org/packages/d5/ea/502d938cbaeec836ac28a9b730193716f0114c41325db428e6b280513f09/pillow-11.3.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:45dfc51ac5975b938e9809451c51734124e73b04d0f0ac621649821a63852e7b", size = 6721407, upload-time = "2025-07-01T09:14:51.962Z" },
{ url = "https://files.pythonhosted.org/packages/45/9c/9c5e2a73f125f6cbc59cc7087c8f2d649a7ae453f83bd0362ff7c9e2aee2/pillow-11.3.0-cp313-cp313-win32.whl", hash = "sha256:a4d336baed65d50d37b88ca5b60c0fa9d81e3a87d4a7930d3880d1624d5b31f3", size = 6273841, upload-time = "2025-07-01T09:14:54.142Z" },
{ url = "https://files.pythonhosted.org/packages/23/85/397c73524e0cd212067e0c969aa245b01d50183439550d24d9f55781b776/pillow-11.3.0-cp313-cp313-win_amd64.whl", hash = "sha256:0bce5c4fd0921f99d2e858dc4d4d64193407e1b99478bc5cacecba2311abde51", size = 6978450, upload-time = "2025-07-01T09:14:56.436Z" },
{ url = "https://files.pythonhosted.org/packages/17/d2/622f4547f69cd173955194b78e4d19ca4935a1b0f03a302d655c9f6aae65/pillow-11.3.0-cp313-cp313-win_arm64.whl", hash = "sha256:1904e1264881f682f02b7f8167935cce37bc97db457f8e7849dc3a6a52b99580", size = 2423055, upload-time = "2025-07-01T09:14:58.072Z" },
{ url = "https://files.pythonhosted.org/packages/dd/80/a8a2ac21dda2e82480852978416cfacd439a4b490a501a288ecf4fe2532d/pillow-11.3.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:4c834a3921375c48ee6b9624061076bc0a32a60b5532b322cc0ea64e639dd50e", size = 5281110, upload-time = "2025-07-01T09:14:59.79Z" },
{ url = "https://files.pythonhosted.org/packages/44/d6/b79754ca790f315918732e18f82a8146d33bcd7f4494380457ea89eb883d/pillow-11.3.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:5e05688ccef30ea69b9317a9ead994b93975104a677a36a8ed8106be9260aa6d", size = 4689547, upload-time = "2025-07-01T09:15:01.648Z" },
{ url = "https://files.pythonhosted.org/packages/49/20/716b8717d331150cb00f7fdd78169c01e8e0c219732a78b0e59b6bdb2fd6/pillow-11.3.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1019b04af07fc0163e2810167918cb5add8d74674b6267616021ab558dc98ced", size = 5901554, upload-time = "2025-07-03T13:10:27.018Z" },
{ url = "https://files.pythonhosted.org/packages/74/cf/a9f3a2514a65bb071075063a96f0a5cf949c2f2fce683c15ccc83b1c1cab/pillow-11.3.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f944255db153ebb2b19c51fe85dd99ef0ce494123f21b9db4877ffdfc5590c7c", size = 7669132, upload-time = "2025-07-03T13:10:33.01Z" },
{ url = "https://files.pythonhosted.org/packages/98/3c/da78805cbdbee9cb43efe8261dd7cc0b4b93f2ac79b676c03159e9db2187/pillow-11.3.0-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1f85acb69adf2aaee8b7da124efebbdb959a104db34d3a2cb0f3793dbae422a8", size = 6005001, upload-time = "2025-07-01T09:15:03.365Z" },
{ url = "https://files.pythonhosted.org/packages/6c/fa/ce044b91faecf30e635321351bba32bab5a7e034c60187fe9698191aef4f/pillow-11.3.0-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:05f6ecbeff5005399bb48d198f098a9b4b6bdf27b8487c7f38ca16eeb070cd59", size = 6668814, upload-time = "2025-07-01T09:15:05.655Z" },
{ url = "https://files.pythonhosted.org/packages/7b/51/90f9291406d09bf93686434f9183aba27b831c10c87746ff49f127ee80cb/pillow-11.3.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a7bc6e6fd0395bc052f16b1a8670859964dbd7003bd0af2ff08342eb6e442cfe", size = 6113124, upload-time = "2025-07-01T09:15:07.358Z" },
{ url = "https://files.pythonhosted.org/packages/cd/5a/6fec59b1dfb619234f7636d4157d11fb4e196caeee220232a8d2ec48488d/pillow-11.3.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:83e1b0161c9d148125083a35c1c5a89db5b7054834fd4387499e06552035236c", size = 6747186, upload-time = "2025-07-01T09:15:09.317Z" },
{ url = "https://files.pythonhosted.org/packages/49/6b/00187a044f98255225f172de653941e61da37104a9ea60e4f6887717e2b5/pillow-11.3.0-cp313-cp313t-win32.whl", hash = "sha256:2a3117c06b8fb646639dce83694f2f9eac405472713fcb1ae887469c0d4f6788", size = 6277546, upload-time = "2025-07-01T09:15:11.311Z" },
{ url = "https://files.pythonhosted.org/packages/e8/5c/6caaba7e261c0d75bab23be79f1d06b5ad2a2ae49f028ccec801b0e853d6/pillow-11.3.0-cp313-cp313t-win_amd64.whl", hash = "sha256:857844335c95bea93fb39e0fa2726b4d9d758850b34075a7e3ff4f4fa3aa3b31", size = 6985102, upload-time = "2025-07-01T09:15:13.164Z" },
{ url = "https://files.pythonhosted.org/packages/f3/7e/b623008460c09a0cb38263c93b828c666493caee2eb34ff67f778b87e58c/pillow-11.3.0-cp313-cp313t-win_arm64.whl", hash = "sha256:8797edc41f3e8536ae4b10897ee2f637235c94f27404cac7297f7b607dd0716e", size = 2424803, upload-time = "2025-07-01T09:15:15.695Z" },
{ url = "https://files.pythonhosted.org/packages/73/f4/04905af42837292ed86cb1b1dabe03dce1edc008ef14c473c5c7e1443c5d/pillow-11.3.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:d9da3df5f9ea2a89b81bb6087177fb1f4d1c7146d583a3fe5c672c0d94e55e12", size = 5278520, upload-time = "2025-07-01T09:15:17.429Z" },
{ url = "https://files.pythonhosted.org/packages/41/b0/33d79e377a336247df6348a54e6d2a2b85d644ca202555e3faa0cf811ecc/pillow-11.3.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:0b275ff9b04df7b640c59ec5a3cb113eefd3795a8df80bac69646ef699c6981a", size = 4686116, upload-time = "2025-07-01T09:15:19.423Z" },
{ url = "https://files.pythonhosted.org/packages/49/2d/ed8bc0ab219ae8768f529597d9509d184fe8a6c4741a6864fea334d25f3f/pillow-11.3.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0743841cabd3dba6a83f38a92672cccbd69af56e3e91777b0ee7f4dba4385632", size = 5864597, upload-time = "2025-07-03T13:10:38.404Z" },
{ url = "https://files.pythonhosted.org/packages/b5/3d/b932bb4225c80b58dfadaca9d42d08d0b7064d2d1791b6a237f87f661834/pillow-11.3.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2465a69cf967b8b49ee1b96d76718cd98c4e925414ead59fdf75cf0fd07df673", size = 7638246, upload-time = "2025-07-03T13:10:44.987Z" },
{ url = "https://files.pythonhosted.org/packages/09/b5/0487044b7c096f1b48f0d7ad416472c02e0e4bf6919541b111efd3cae690/pillow-11.3.0-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:41742638139424703b4d01665b807c6468e23e699e8e90cffefe291c5832b027", size = 5973336, upload-time = "2025-07-01T09:15:21.237Z" },
{ url = "https://files.pythonhosted.org/packages/a8/2d/524f9318f6cbfcc79fbc004801ea6b607ec3f843977652fdee4857a7568b/pillow-11.3.0-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:93efb0b4de7e340d99057415c749175e24c8864302369e05914682ba642e5d77", size = 6642699, upload-time = "2025-07-01T09:15:23.186Z" },
{ url = "https://files.pythonhosted.org/packages/6f/d2/a9a4f280c6aefedce1e8f615baaa5474e0701d86dd6f1dede66726462bbd/pillow-11.3.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7966e38dcd0fa11ca390aed7c6f20454443581d758242023cf36fcb319b1a874", size = 6083789, upload-time = "2025-07-01T09:15:25.1Z" },
{ url = "https://files.pythonhosted.org/packages/fe/54/86b0cd9dbb683a9d5e960b66c7379e821a19be4ac5810e2e5a715c09a0c0/pillow-11.3.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:98a9afa7b9007c67ed84c57c9e0ad86a6000da96eaa638e4f8abe5b65ff83f0a", size = 6720386, upload-time = "2025-07-01T09:15:27.378Z" },
{ url = "https://files.pythonhosted.org/packages/e7/95/88efcaf384c3588e24259c4203b909cbe3e3c2d887af9e938c2022c9dd48/pillow-11.3.0-cp314-cp314-win32.whl", hash = "sha256:02a723e6bf909e7cea0dac1b0e0310be9d7650cd66222a5f1c571455c0a45214", size = 6370911, upload-time = "2025-07-01T09:15:29.294Z" },
{ url = "https://files.pythonhosted.org/packages/2e/cc/934e5820850ec5eb107e7b1a72dd278140731c669f396110ebc326f2a503/pillow-11.3.0-cp314-cp314-win_amd64.whl", hash = "sha256:a418486160228f64dd9e9efcd132679b7a02a5f22c982c78b6fc7dab3fefb635", size = 7117383, upload-time = "2025-07-01T09:15:31.128Z" },
{ url = "https://files.pythonhosted.org/packages/d6/e9/9c0a616a71da2a5d163aa37405e8aced9a906d574b4a214bede134e731bc/pillow-11.3.0-cp314-cp314-win_arm64.whl", hash = "sha256:155658efb5e044669c08896c0c44231c5e9abcaadbc5cd3648df2f7c0b96b9a6", size = 2511385, upload-time = "2025-07-01T09:15:33.328Z" },
{ url = "https://files.pythonhosted.org/packages/1a/33/c88376898aff369658b225262cd4f2659b13e8178e7534df9e6e1fa289f6/pillow-11.3.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:59a03cdf019efbfeeed910bf79c7c93255c3d54bc45898ac2a4140071b02b4ae", size = 5281129, upload-time = "2025-07-01T09:15:35.194Z" },
{ url = "https://files.pythonhosted.org/packages/1f/70/d376247fb36f1844b42910911c83a02d5544ebd2a8bad9efcc0f707ea774/pillow-11.3.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:f8a5827f84d973d8636e9dc5764af4f0cf2318d26744b3d902931701b0d46653", size = 4689580, upload-time = "2025-07-01T09:15:37.114Z" },
{ url = "https://files.pythonhosted.org/packages/eb/1c/537e930496149fbac69efd2fc4329035bbe2e5475b4165439e3be9cb183b/pillow-11.3.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ee92f2fd10f4adc4b43d07ec5e779932b4eb3dbfbc34790ada5a6669bc095aa6", size = 5902860, upload-time = "2025-07-03T13:10:50.248Z" },
{ url = "https://files.pythonhosted.org/packages/bd/57/80f53264954dcefeebcf9dae6e3eb1daea1b488f0be8b8fef12f79a3eb10/pillow-11.3.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c96d333dcf42d01f47b37e0979b6bd73ec91eae18614864622d9b87bbd5bbf36", size = 7670694, upload-time = "2025-07-03T13:10:56.432Z" },
{ url = "https://files.pythonhosted.org/packages/70/ff/4727d3b71a8578b4587d9c276e90efad2d6fe0335fd76742a6da08132e8c/pillow-11.3.0-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4c96f993ab8c98460cd0c001447bff6194403e8b1d7e149ade5f00594918128b", size = 6005888, upload-time = "2025-07-01T09:15:39.436Z" },
{ url = "https://files.pythonhosted.org/packages/05/ae/716592277934f85d3be51d7256f3636672d7b1abfafdc42cf3f8cbd4b4c8/pillow-11.3.0-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:41342b64afeba938edb034d122b2dda5db2139b9a4af999729ba8818e0056477", size = 6670330, upload-time = "2025-07-01T09:15:41.269Z" },
{ url = "https://files.pythonhosted.org/packages/e7/bb/7fe6cddcc8827b01b1a9766f5fdeb7418680744f9082035bdbabecf1d57f/pillow-11.3.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:068d9c39a2d1b358eb9f245ce7ab1b5c3246c7c8c7d9ba58cfa5b43146c06e50", size = 6114089, upload-time = "2025-07-01T09:15:43.13Z" },
{ url = "https://files.pythonhosted.org/packages/8b/f5/06bfaa444c8e80f1a8e4bff98da9c83b37b5be3b1deaa43d27a0db37ef84/pillow-11.3.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:a1bc6ba083b145187f648b667e05a2534ecc4b9f2784c2cbe3089e44868f2b9b", size = 6748206, upload-time = "2025-07-01T09:15:44.937Z" },
{ url = "https://files.pythonhosted.org/packages/f0/77/bc6f92a3e8e6e46c0ca78abfffec0037845800ea38c73483760362804c41/pillow-11.3.0-cp314-cp314t-win32.whl", hash = "sha256:118ca10c0d60b06d006be10a501fd6bbdfef559251ed31b794668ed569c87e12", size = 6377370, upload-time = "2025-07-01T09:15:46.673Z" },
{ url = "https://files.pythonhosted.org/packages/4a/82/3a721f7d69dca802befb8af08b7c79ebcab461007ce1c18bd91a5d5896f9/pillow-11.3.0-cp314-cp314t-win_amd64.whl", hash = "sha256:8924748b688aa210d79883357d102cd64690e56b923a186f35a82cbc10f997db", size = 7121500, upload-time = "2025-07-01T09:15:48.512Z" },
{ url = "https://files.pythonhosted.org/packages/89/c7/5572fa4a3f45740eaab6ae86fcdf7195b55beac1371ac8c619d880cfe948/pillow-11.3.0-cp314-cp314t-win_arm64.whl", hash = "sha256:79ea0d14d3ebad43ec77ad5272e6ff9bba5b679ef73375ea760261207fa8e0aa", size = 2512835, upload-time = "2025-07-01T09:15:50.399Z" },
]
[[package]]
name = "ruff"
version = "0.12.12"