feat: Implement optimized streaming and request handling for low-resource environments

- Added fetchWithTimeout utility that aborts fetch requests after a configurable timeout.
- Created createBackoffFetcher hook for API data fetching with retry and exponential backoff.
- Developed optimizedStreamingHandler for efficient chunked streaming with memory management.
- Introduced streamHandler for enhanced streaming capabilities with better error handling.
- Implemented requestLimiter middleware to prevent API overload and manage concurrent requests.
- Configured server settings for low-resource environments, including connection limits and timeouts.
- Added endpoints for torrent management, including adding, removing, pausing, and resuming torrents.
- Enhanced server info endpoint to provide detailed system and memory usage statistics.
Author: Salman Qureshi
Date: 2025-08-12 22:07:17 +05:30
Parent: a4063d3cbf
Commit: b205369802
25 changed files with 2505 additions and 0 deletions

DEPLOYMENT_GUIDE.md

@@ -0,0 +1,172 @@
# SeedBox-Lite Deployment Guide for Low-Resource Environments
This guide will help you deploy SeedBox-Lite in low-resource environments, such as a 2 GB RAM / 1-core cloud instance.
## Requirements
- Node.js v14+ (v16 recommended)
- 2GB RAM minimum
- 1 CPU core minimum
- Linux-based OS (Ubuntu/Debian recommended)
## 1. Prepare Your Server
Connect to your server and install dependencies:
```bash
# Update package manager
apt-get update
# Install Node.js v16
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
apt-get install -y nodejs
# Install PM2 for process management
npm install -g pm2
# Install required packages for WebTorrent
apt-get install -y build-essential libtool automake
```
## 2. Deploy the Application
Clone the repository or upload your files:
```bash
# Using Git
git clone https://github.com/your-username/seedbox-lite.git
cd seedbox-lite
# OR upload via SCP from your local machine
# scp -r /path/to/local/seedbox-lite root@your-server-ip:/path/on/server
```
## 3. Install Dependencies
Install optimized dependencies:
```bash
cd seedbox-lite
# Install backend dependencies
cd server
npm install --only=production
cd ..
# Install frontend dependencies and build
# (a full install is needed here: the build step relies on devDependencies)
cd client
npm install
npm run build
cd ..
```
## 4. Configure the Optimized Version
Create environment file:
```bash
cd server
cat > .env << EOL
NODE_ENV=production
SERVER_PORT=3001
DOWNLOAD_PATH=/root/downloads
EOL
# Create downloads directory
mkdir -p /root/downloads
chmod 755 /root/downloads
```
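For reference, the server can resolve these variables with safe defaults. This is a minimal sketch, assuming the variable names from the `.env` file above; the helper name and default values are illustrative, not the exact code in `index-optimized.js`:

```javascript
// Sketch: resolve server settings from environment variables with safe defaults.
// Variable names (SERVER_PORT, DOWNLOAD_PATH) follow the .env file above.
const resolveConfig = (env = process.env) => ({
  nodeEnv: env.NODE_ENV || 'development',
  port: parseInt(env.SERVER_PORT, 10) || 3001,
  downloadPath: env.DOWNLOAD_PATH || './downloads',
});

module.exports = { resolveConfig };
```

Parsing with explicit defaults keeps the server bootable even when the `.env` file is missing.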
## 5. Start with PM2
Configure PM2 for optimal performance:
```bash
# Create the log directory PM2 will write to
mkdir -p /root/logs

# Start the optimized version
cd /root/seedbox-lite/server
pm2 start index-optimized.js --name "seedbox-lite" \
  --max-memory-restart 1G \
  --node-args="--max-old-space-size=768" \
  --log /root/logs/seedbox.log

# Set PM2 to start on system boot
pm2 startup
pm2 save
```
## 6. Configure Nginx (Optional but Recommended)
Install and configure Nginx as a reverse proxy:
```bash
# Install Nginx
apt-get install -y nginx

# Configure Nginx
cat > /etc/nginx/sites-available/seedbox-lite << EOL
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host \$host;
        proxy_cache_bypass \$http_upgrade;
        proxy_read_timeout 120s;
        # Avoid buffering large streaming responses in Nginx
        proxy_buffering off;
    }
}
EOL

# Enable site and restart Nginx
ln -s /etc/nginx/sites-available/seedbox-lite /etc/nginx/sites-enabled/
nginx -t
systemctl restart nginx
```
## Monitoring and Management
Monitor your application:
```bash
# Check status
pm2 status
# View logs
pm2 logs seedbox-lite
# Monitor resources
pm2 monit
# Restart the application
pm2 restart seedbox-lite
```
## Troubleshooting
### High Memory Usage
If you encounter high memory usage:
- Reduce the number of active torrents
- In index-optimized.js, lower the `maxConnections` value (e.g., from 20 to 10)
- Restart the application: `pm2 restart seedbox-lite`
### API Timeouts
If you encounter API timeouts:
- Check server logs: `pm2 logs seedbox-lite`
- Increase the request timeout in the API client or server config
- Ensure you're using the optimized streaming endpoint
### Pending Requests
If you see many pending API requests:
- The client-side smart polling hooks should handle this automatically
- If issues persist, increase the `maxInterval` in useSmartPolling.js
- Make sure the request limiter is properly configured in the server
## Security Recommendations
- Set up a firewall (UFW)
- Enable HTTPS with Let's Encrypt
- Run the application as a non-root user
- Use authentication for the API endpoints
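The last recommendation can start as small as an API-key check in front of the Express routes. A minimal sketch, assuming an Express-style `(req, res, next)` middleware; the header name and function names are illustrative and not part of SeedBox-Lite:

```javascript
// Hypothetical middleware: reject requests that lack the expected API key.
const createApiKeyAuth = (expectedKey) => (req, res, next) => {
  if (req.headers['x-api-key'] !== expectedKey) {
    res.statusCode = 401;
    res.end('Unauthorized');
    return;
  }
  next();
};

// Assumed usage: app.use('/api', createApiKeyAuth(process.env.API_KEY));
```

For anything beyond a private single-user deployment, prefer proper session or token auth behind HTTPS.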

OPTIMIZATIONS.md

@@ -0,0 +1,122 @@
# SeedBox-Lite Optimization Summary
This document outlines the optimizations made to address API polling issues on low-resource environments (2GB RAM, 1 CPU).
## Problem Diagnosis
The original application had several issues when running on low-resource hardware:
1. **API Timeouts**: Long-running requests (up to 56s) blocked the Node.js event loop
2. **Memory Pressure**: Inefficient streaming caused memory spikes and GC pauses
3. **Connection Pileup**: Frontend kept polling while previous requests were pending
4. **No Request Limits**: The server accepted unlimited concurrent requests
5. **No Timeouts**: Requests could hang indefinitely without resolution
## Key Optimizations
### 1. Server-side Request Management
- **Request Limiter Middleware**: Prevents server overload by limiting concurrent requests
  - Implements per-IP request tracking
  - Sets appropriate timeouts for all API requests
  - Applies different limits based on resource availability
  - See: `/server/middleware/requestLimiter.js`
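The core of such a limiter can be sketched as middleware that counts in-flight requests and sheds load once a threshold is reached. This is a simplified sketch of the idea only; the actual `requestLimiter.js` also adds per-IP tracking and timeouts:

```javascript
// Simplified concurrent-request limiter (illustrative, not the real middleware).
const createRequestLimiter = ({ maxConcurrentRequests = 15 } = {}) => {
  let active = 0;
  return (req, res, next) => {
    if (active >= maxConcurrentRequests) {
      // Shed load instead of queueing: tell the client to retry later
      res.statusCode = 503;
      res.end('Server busy, try again shortly');
      return;
    }
    active += 1;
    // Decrement when the response closes (finished or client disconnected)
    res.on('close', () => { active -= 1; });
    next();
  };
};
```

Returning 503 quickly is cheaper than letting requests pile up and block the single-core event loop.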
### 2. Optimized Streaming Implementation
- **Chunked Streaming**: Serves content in small chunks (256KB) to prevent memory issues
  - Implements proper flow control with stream pause/resume
  - Handles range requests efficiently
  - Automatically cleans up resources on client disconnect
  - See: `/server/handlers/optimizedStreamingHandler.js`
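The chunking logic boils down to capping each Range response at one chunk. A pure sketch of that calculation, with an illustrative function name (the real handler also manages the underlying stream):

```javascript
// Parse a Range header and cap the response at one chunk (default 256KB)
// so a single request never pulls a large buffer into memory.
const parseRange = (rangeHeader, fileSize, chunkSize = 256 * 1024) => {
  const match = /bytes=(\d+)-(\d*)/.exec(rangeHeader || '');
  const start = match ? parseInt(match[1], 10) : 0;
  const requestedEnd = match && match[2] !== '' ? parseInt(match[2], 10) : fileSize - 1;
  const end = Math.min(requestedEnd, start + chunkSize - 1, fileSize - 1);
  return { start, end, length: end - start + 1 };
};
```

The client then issues follow-up Range requests for the next chunks, which is how video players behave anyway.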
### 3. Resilient Client-side Fetching
- **Enhanced API Client**: Prevents API pileup with smart request handling
  - Implements timeouts, retries, and exponential backoff
  - Deduplicates identical pending requests
  - Features circuit breaker to prevent request floods
  - See: `/client/src/utils/apiClient.js`
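The timeout piece of this client is typically an `AbortController` plus a timer. A minimal sketch of that pattern; the injectable `fetchImpl` parameter is an addition for testability, and the real `apiClient.js` layers retries and deduplication on top:

```javascript
// Abort a fetch if it takes longer than timeoutMs.
const fetchWithTimeout = async (url, options = {}, timeoutMs = 8000, fetchImpl = fetch) => {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // The signal propagates the abort into the underlying request
    return await fetchImpl(url, { ...options, signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
};
```

Clearing the timer in `finally` ensures fast responses don't leave stray timeouts behind.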
### 4. Adaptive Polling
- **Smart Polling Hook**: React hook that adapts to server conditions
  - Dynamically adjusts polling interval based on response times
  - Backs off exponentially when errors occur
  - Implements circuit breaking on consecutive failures
  - See: `/client/src/hooks/useSmartPolling.js`
### 5. Resource-Aware Configuration
- **Environmental Detection**: Server auto-configures based on available resources
  - Adjusts connection limits for low-resource environments
  - Sets appropriate timeouts based on system capabilities
  - See: `/server/index-optimized.js`
## Implementation Guide
To implement these optimizations:
1. Replace the existing stream handler with the optimized one
```javascript
// In server/index.js
const streamHandler = require('./handlers/optimizedStreamingHandler');
app.get('/api/torrents/:identifier/files/:fileIdx/stream', streamHandler);
```
2. Add the request limiter middleware
```javascript
// In server/index.js
const createRequestLimiter = require('./middleware/requestLimiter');
app.use(createRequestLimiter({
  maxConcurrentRequests: 15,
  requestTimeout: 30000,
  logLevel: 1
}));
```
3. Use the enhanced API client in frontend components
```javascript
// In your React components
import { api } from '../utils/apiClient';

// Use it for API calls
api.get('/api/torrents')
  .then(data => console.log(data))
  .catch(err => console.error(err));
```
4. Implement the smart polling hook for data fetching
```javascript
// In your React components
import useSmartPolling from '../hooks/useSmartPolling';

function TorrentList() {
  const fetchTorrents = async (signal) => {
    const response = await fetch('/api/torrents', { signal });
    return response.json();
  };

  const { data, error, isLoading, refresh } = useSmartPolling(fetchTorrents);

  // Use data, handle loading/error states
}
```
5. For full optimization, consider using the completely optimized server:
```bash
# Run the optimized version
node server/index-optimized.js
```
## Results
These optimizations should significantly improve your application's performance on low-resource environments:
- **Reduced Memory Usage**: Smaller chunks and better resource cleanup
- **More Responsive API**: Limited concurrent requests prevents overload
- **Fewer Pending Requests**: Smart polling prevents request pileup
- **Graceful Degradation**: System adapts to resource constraints
- **Improved Stability**: Proper error handling and recovery mechanisms
If you encounter any issues, refer to the detailed comments in each file for troubleshooting guidance.


@@ -0,0 +1,86 @@
import React from 'react';

/**
 * Error boundary component to catch and handle React errors
 */
class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      hasError: false,
      error: null,
      errorInfo: null
    };
  }

  static getDerivedStateFromError(error) {
    // Update state so the next render will show the fallback UI
    return {
      hasError: true,
      error
    };
  }

  componentDidCatch(error, errorInfo) {
    // You can also log the error to an error reporting service
    console.error('Error caught by boundary:', error, errorInfo);
    this.setState({
      errorInfo: errorInfo
    });
  }

  render() {
    if (this.state.hasError) {
      return (
        <div className="error-boundary">
          <div className="error-container">
            <h2>Something went wrong</h2>
            <p>{this.state.error?.message || 'An unknown error occurred'}</p>
            <div className="error-actions">
              <button
                className="retry-button"
                onClick={() => {
                  // Reset the error state
                  this.setState({
                    hasError: false,
                    error: null,
                    errorInfo: null
                  });
                  // If there's a retry callback, call it
                  if (this.props.onRetry && typeof this.props.onRetry === 'function') {
                    this.props.onRetry();
                  }
                }}
              >
                Try Again
              </button>
              {this.props.showReload && (
                <button
                  className="reload-button"
                  onClick={() => window.location.reload()}
                >
                  Reload Page
                </button>
              )}
            </div>
            {this.props.debug && this.state.errorInfo && (
              <details className="error-details">
                <summary>Error Details</summary>
                <pre>{this.state.error?.toString()}</pre>
                <pre>{this.state.errorInfo.componentStack}</pre>
              </details>
            )}
          </div>
        </div>
      );
    }

    // If there's no error, render children normally
    return this.props.children;
  }
}

export default ErrorBoundary;


@@ -0,0 +1,121 @@
import React, { useState } from 'react';
import { Link } from 'react-router-dom';
import { usePollingWithBackoff } from '../hooks/usePollingWithBackoff';
import { getTorrentsWithRetry } from '../services/api';
import ErrorBoundary from './ErrorBoundary';

/**
 * TorrentList component with resilient polling
 */
const TorrentList = () => {
  // Start with a longer polling interval to reduce load
  const [pollingInterval, setPollingInterval] = useState(5000);

  const {
    data,
    isLoading,
    error,
    refetch
  } = usePollingWithBackoff(
    getTorrentsWithRetry,
    pollingInterval,
    true,  // enabled
    30000, // max backoff
    true   // immediate
  );

  // Format sizes for better readability
  const formatSize = (bytes) => {
    if (bytes === 0) return '0 B';
    const k = 1024;
    const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
    const i = Math.floor(Math.log(bytes) / Math.log(k));
    return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
  };

  // Format speed
  const formatSpeed = (bytesPerSec) => {
    return `${formatSize(bytesPerSec)}/s`;
  };

  // Handle the loading state
  if (isLoading && !data) {
    return <div className="loading-indicator">Loading torrents...</div>;
  }

  // Handle errors
  if (error) {
    return (
      <div className="error-message">
        <h3>Error loading torrents</h3>
        <p>{error.message}</p>
        <button onClick={refetch}>Retry</button>
      </div>
    );
  }

  // Handle empty state
  if (!data || !data.torrents || data.torrents.length === 0) {
    return <div className="empty-state">No torrents found. Add a new torrent to get started.</div>;
  }

  return (
    <div className="torrent-list">
      <div className="list-controls">
        <button onClick={refetch}>Refresh</button>
        <select
          value={pollingInterval}
          onChange={(e) => setPollingInterval(parseInt(e.target.value, 10))}
        >
          <option value="2000">Fast (2s)</option>
          <option value="5000">Normal (5s)</option>
          <option value="10000">Slow (10s)</option>
          <option value="30000">Very Slow (30s)</option>
        </select>
      </div>
      <div className="torrent-table">
        <div className="table-header">
          <div className="name-column">Name</div>
          <div className="size-column">Size</div>
          <div className="progress-column">Progress</div>
          <div className="speed-column">Speed</div>
          <div className="peers-column">Peers</div>
        </div>
        {data.torrents.map(torrent => (
          <Link
            to={`/torrent/${torrent.infoHash}`}
            key={torrent.infoHash}
            className="torrent-row"
          >
            <div className="name-column">{torrent.name}</div>
            <div className="size-column">{formatSize(torrent.size)}</div>
            <div className="progress-column">
              <div className="progress-bar">
                <div
                  className="progress-fill"
                  style={{ width: `${(torrent.progress * 100).toFixed(0)}%` }}
                />
                <span className="progress-text">
                  {(torrent.progress * 100).toFixed(1)}%
                </span>
              </div>
            </div>
            <div className="speed-column">{formatSpeed(torrent.downloadSpeed)}</div>
            <div className="peers-column">{torrent.peers}</div>
          </Link>
        ))}
      </div>
    </div>
  );
};

// Wrap with error boundary
const TorrentListWithErrorBoundary = () => (
  <ErrorBoundary onRetry={() => window.location.reload()}>
    <TorrentList />
  </ErrorBoundary>
);

export default TorrentListWithErrorBoundary;


@@ -0,0 +1,100 @@
// src/components/TorrentListWithSmartPolling.jsx
import React, { useCallback } from 'react';
import useSmartPolling from '../hooks/useSmartPolling';
import { api } from '../utils/apiClient';

/**
 * TorrentList component that uses smart polling to fetch data
 * This is an example of how to implement the optimized polling solution
 */
const TorrentListWithSmartPolling = () => {
  // Define the fetch function
  const fetchTorrents = useCallback(async (signal) => {
    const response = await api.get('/api/torrents', { signal });
    return response;
  }, []);

  // Use our smart polling hook
  const {
    data: torrents,
    error,
    isLoading,
    lastUpdated,
    refresh
  } = useSmartPolling(fetchTorrents, {
    initialInterval: 3000, // Poll every 3 seconds initially
    minInterval: 2000,     // Never poll faster than every 2 seconds
    maxInterval: 20000,    // Never poll slower than every 20 seconds
    adaptiveSpeed: true    // Adjust polling speed based on response time
  });

  // Show loading state
  if (isLoading && !torrents) {
    return <div>Loading torrents...</div>;
  }

  // Show error state
  if (error) {
    return (
      <div className="error-container">
        <h3>Error loading torrents</h3>
        <p>{error}</p>
        <button onClick={refresh}>Try Again</button>
      </div>
    );
  }

  // Render the torrent list
  return (
    <div className="torrent-list">
      <div className="header">
        <h2>Torrents</h2>
        <div className="actions">
          <button onClick={refresh}>Refresh</button>
          {lastUpdated && (
            <span className="last-updated">
              Last updated: {lastUpdated.toLocaleTimeString()}
            </span>
          )}
        </div>
      </div>
      {torrents && torrents.length > 0 ? (
        <ul>
          {torrents.map(torrent => (
            <li key={torrent.infoHash} className="torrent-item">
              <div className="torrent-name">{torrent.name}</div>
              <div className="torrent-progress">
                <div className="progress-bar">
                  <div
                    className="progress-fill"
                    style={{ width: `${torrent.progress * 100}%` }}
                  />
                </div>
                <span>{Math.round(torrent.progress * 100)}%</span>
              </div>
              <div className="torrent-stats">
                <span>↓ {formatBytes(torrent.downloadSpeed)}/s</span>
                <span>↑ {formatBytes(torrent.uploadSpeed)}/s</span>
                <span>{torrent.numPeers} peers</span>
              </div>
            </li>
          ))}
        </ul>
      ) : (
        <div className="empty-state">No torrents found</div>
      )}
    </div>
  );
};

// Helper function to format bytes
const formatBytes = (bytes) => {
  if (bytes === 0) return '0 B';
  const k = 1024;
  const sizes = ['B', 'KB', 'MB', 'GB'];
  const i = Math.floor(Math.log(bytes) / Math.log(k));
  return `${parseFloat((bytes / Math.pow(k, i)).toFixed(2))} ${sizes[i]}`;
};

export default TorrentListWithSmartPolling;


@@ -0,0 +1,101 @@
import React from 'react';
import { usePollingWithBackoff } from '../hooks/usePollingWithBackoff';
import { getTorrentStatsWithRetry } from '../services/api';
import ErrorBoundary from './ErrorBoundary';

/**
 * TorrentStatus component with resilient polling
 *
 * @param {Object} props
 * @param {string} props.torrentId - Torrent ID to fetch stats for
 * @param {number} props.pollingInterval - How often to poll (default: 3000ms)
 */
const TorrentStatus = ({ torrentId, pollingInterval = 3000 }) => {
  const {
    data: stats,
    isLoading,
    error,
    refetch
  } = usePollingWithBackoff(
    () => getTorrentStatsWithRetry(torrentId),
    pollingInterval,
    true,  // enabled
    30000, // max backoff
    true   // immediate
  );

  if (isLoading && !stats) {
    return <div className="loading-indicator">Loading torrent stats...</div>;
  }

  if (error) {
    return (
      <div className="error-message">
        <p>Failed to load torrent stats: {error.message}</p>
        <button onClick={refetch}>Retry</button>
      </div>
    );
  }

  if (!stats) {
    return <div className="no-data">No stats available</div>;
  }

  // Format progress as percentage
  const progressPercent = (stats.progress * 100).toFixed(2);

  // Format sizes for better readability
  const formatSize = (bytes) => {
    if (bytes === 0) return '0 B';
    const k = 1024;
    const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
    const i = Math.floor(Math.log(bytes) / Math.log(k));
    return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
  };

  // Format speed
  const formatSpeed = (bytesPerSec) => {
    return `${formatSize(bytesPerSec)}/s`;
  };

  return (
    <div className="torrent-status">
      <h3>{stats.name}</h3>
      <div className="progress-bar">
        <div
          className="progress-fill"
          style={{ width: `${progressPercent}%` }}
        />
        <span className="progress-text">{progressPercent}%</span>
      </div>
      <div className="stats-grid">
        <div className="stat">
          <span className="stat-label">Size</span>
          <span className="stat-value">{formatSize(stats.size)}</span>
        </div>
        <div className="stat">
          <span className="stat-label">Downloaded</span>
          <span className="stat-value">{formatSize(stats.downloaded)}</span>
        </div>
        <div className="stat">
          <span className="stat-label">Download Speed</span>
          <span className="stat-value">{formatSpeed(stats.downloadSpeed)}</span>
        </div>
        <div className="stat">
          <span className="stat-label">Peers</span>
          <span className="stat-value">{stats.peers}</span>
        </div>
      </div>
    </div>
  );
};

// Wrap component with ErrorBoundary
const TorrentStatusWithErrorBoundary = (props) => (
  <ErrorBoundary onRetry={() => window.location.reload()}>
    <TorrentStatus {...props} />
  </ErrorBoundary>
);

export default TorrentStatusWithErrorBoundary;


@@ -0,0 +1,140 @@
import { useState, useEffect, useRef } from 'react';

/**
 * Custom hook for polling data with built-in error handling and exponential backoff
 *
 * @param {Function} fetchFunction - Function that returns a promise with the data
 * @param {number} interval - Polling interval in milliseconds (default: 3000ms)
 * @param {boolean} enabled - Whether polling is enabled (default: true)
 * @param {number} maxBackoff - Maximum backoff time in milliseconds (default: 30000ms)
 * @param {boolean} immediate - Whether to fetch immediately (default: true)
 *
 * @returns {Object} - { data, isLoading, error, refetch }
 */
export function usePollingWithBackoff(
  fetchFunction,
  interval = 3000,
  enabled = true,
  maxBackoff = 30000,
  immediate = true
) {
  const [data, setData] = useState(null);
  const [isLoading, setIsLoading] = useState(immediate);
  const [error, setError] = useState(null);
  const [currentInterval, setCurrentInterval] = useState(interval);

  // Use refs for values that we don't want to trigger re-renders
  const timeoutRef = useRef(null);
  const failedAttemptsRef = useRef(0);
  const activeRequestRef = useRef(false);
  const mountedRef = useRef(true);

  // Function to fetch data with error handling
  const fetchData = async () => {
    // Don't start a new request if one is already in progress
    if (activeRequestRef.current) return;

    activeRequestRef.current = true;
    setIsLoading(true);

    try {
      const result = await fetchFunction();
      if (mountedRef.current) {
        setData(result);
        setError(null);
        setIsLoading(false);
        // Reset backoff on success
        failedAttemptsRef.current = 0;
        setCurrentInterval(interval);
      }
    } catch (error) {
      if (mountedRef.current) {
        setError(error);
        setIsLoading(false);
        // Increase backoff on failure
        failedAttemptsRef.current += 1;
        const newInterval = Math.min(
          interval * Math.pow(1.5, failedAttemptsRef.current),
          maxBackoff
        );
        setCurrentInterval(newInterval);
        console.warn(`Polling error, backing off to ${newInterval}ms:`, error.message);
      }
    } finally {
      activeRequestRef.current = false;
    }
  };

  // Schedule the next fetch
  const scheduleNextFetch = () => {
    if (!mountedRef.current || !enabled) return;

    // Clear any existing timeout
    if (timeoutRef.current) {
      clearTimeout(timeoutRef.current);
    }

    // Schedule next fetch
    timeoutRef.current = setTimeout(() => {
      if (mountedRef.current && enabled) {
        fetchData().finally(() => {
          scheduleNextFetch();
        });
      }
    }, currentInterval);
  };

  // Fetch immediately function for manual refetching
  const refetch = async () => {
    // Clear any scheduled fetch
    if (timeoutRef.current) {
      clearTimeout(timeoutRef.current);
      timeoutRef.current = null;
    }
    // Fetch and then reschedule
    await fetchData();
    scheduleNextFetch();
  };

  // Set up polling when the component mounts
  useEffect(() => {
    mountedRef.current = true;

    // Fetch immediately if requested
    if (immediate && enabled) {
      fetchData().finally(() => {
        scheduleNextFetch();
      });
    } else if (enabled) {
      scheduleNextFetch();
    }

    // Clean up on unmount
    return () => {
      mountedRef.current = false;
      if (timeoutRef.current) {
        clearTimeout(timeoutRef.current);
      }
    };
  }, [enabled]); // Only re-run if enabled changes

  // Handle changes to the interval
  useEffect(() => {
    if (enabled && !activeRequestRef.current) {
      // If we're not in backoff mode, update the interval
      if (failedAttemptsRef.current === 0) {
        setCurrentInterval(interval);
      }
      // Reschedule with the new interval
      scheduleNextFetch();
    }
  }, [interval, enabled]);

  return { data, isLoading, error, refetch };
}


@@ -0,0 +1,198 @@
// src/hooks/useSmartPolling.js
import { useState, useEffect, useRef } from 'react';

/**
 * Smart polling hook that adapts to server conditions
 * - Uses exponential backoff when errors occur
 * - Adapts polling frequency based on response time
 * - Implements circuit breaking when server is overloaded
 * - Handles cleanup and cancellation properly
 *
 * @param {Function} fetchFn - Async function that fetches data
 * @param {Object} options - Configuration options
 * @returns {Object} - { data, error, isLoading, lastUpdated, refresh, cancel }
 */
export const useSmartPolling = (fetchFn, options = {}) => {
  // Default options
  const config = {
    initialInterval: 3000,    // Start polling every 3s
    minInterval: 1000,        // Fastest polling: 1s
    maxInterval: 30000,       // Slowest polling: 30s
    adaptiveSpeed: true,      // Adapt to response times
    backoffFactor: 1.5,       // Exponential backoff multiplier
    recoveryFactor: 0.8,      // How quickly to recover from backoff
    maxConsecutiveErrors: 3,  // After this many errors, circuit breaks
    circuitResetTime: 10000,  // Time to wait before trying again when circuit breaks
    enablePolling: true,      // Whether polling is enabled
    ...options
  };

  const [data, setData] = useState(null);
  const [error, setError] = useState(null);
  const [isLoading, setIsLoading] = useState(true);
  const [lastUpdated, setLastUpdated] = useState(null);

  // Refs to avoid re-renders and for cleanup
  const intervalRef = useRef(config.initialInterval);
  const timeoutIdRef = useRef(null);
  const consecutiveErrorsRef = useRef(0);
  const circuitBrokenRef = useRef(false);
  const isMountedRef = useRef(true);
  const abortControllerRef = useRef(new AbortController());
  const lastFetchTimeRef = useRef(0);

  // Calculate next interval based on success/failure and response time
  const calculateNextInterval = (responseTime, hadError) => {
    // Start with current interval
    let nextInterval = intervalRef.current;

    if (hadError) {
      // On error, back off exponentially
      nextInterval = Math.min(nextInterval * config.backoffFactor, config.maxInterval);
    } else {
      // On success, recover gradually
      nextInterval = Math.max(
        config.minInterval,
        nextInterval * config.recoveryFactor
      );

      // If adaptive speed is enabled, adjust based on response time
      if (config.adaptiveSpeed && responseTime > 0) {
        // If response was slow (>1s), increase interval
        if (responseTime > 1000) {
          nextInterval = Math.min(nextInterval * 1.2, config.maxInterval);
        }
        // If response was fast (<200ms), decrease interval slightly
        else if (responseTime < 200) {
          nextInterval = Math.max(nextInterval * 0.9, config.minInterval);
        }
      }
    }

    return nextInterval;
  };

  // Function to fetch data
  const fetchData = async (isRefresh = false) => {
    if (!isMountedRef.current || (!isRefresh && !config.enablePolling)) return;

    // If circuit is broken, don't fetch unless manual refresh
    if (circuitBrokenRef.current && !isRefresh) {
      // Schedule circuit reset
      timeoutIdRef.current = setTimeout(() => {
        if (isMountedRef.current) {
          console.log('🔌 Circuit reset, retrying fetch');
          circuitBrokenRef.current = false;
          fetchData();
        }
      }, config.circuitResetTime);
      return;
    }

    // Create a new AbortController for this fetch
    abortControllerRef.current = new AbortController();
    const { signal } = abortControllerRef.current;

    // Track fetch time for adaptive polling
    const fetchStartTime = Date.now();
    lastFetchTimeRef.current = fetchStartTime;

    try {
      setIsLoading(true);

      // Call the fetch function with the abort signal
      const result = await fetchFn(signal);

      // Only update state if component is still mounted and this is the latest fetch
      if (isMountedRef.current && lastFetchTimeRef.current === fetchStartTime) {
        setData(result);
        setError(null);
        setLastUpdated(new Date());

        // Reset error counter on success
        consecutiveErrorsRef.current = 0;

        // Calculate response time and adjust interval
        const responseTime = Date.now() - fetchStartTime;
        intervalRef.current = calculateNextInterval(responseTime, false);

        // Schedule next fetch if polling is enabled
        if (config.enablePolling) {
          timeoutIdRef.current = setTimeout(fetchData, intervalRef.current);
        }
      }
    } catch (err) {
      // Only update state if component is still mounted and this is the latest fetch
      if (isMountedRef.current && lastFetchTimeRef.current === fetchStartTime) {
        // Don't update error state if it was just an abort
        if (err.name !== 'AbortError') {
          setError(err.message || 'Error fetching data');

          // Increase consecutive error count
          consecutiveErrorsRef.current++;

          // Calculate response time and adjust interval
          const responseTime = Date.now() - fetchStartTime;
          intervalRef.current = calculateNextInterval(responseTime, true);

          // Check if circuit should break
          if (consecutiveErrorsRef.current >= config.maxConsecutiveErrors) {
            console.log('🔌 Circuit breaker activated due to consecutive errors');
            circuitBrokenRef.current = true;
          }

          // Schedule next fetch if polling is enabled
          if (config.enablePolling) {
            timeoutIdRef.current = setTimeout(fetchData, intervalRef.current);
          }
        }
      }
    } finally {
      if (isMountedRef.current && lastFetchTimeRef.current === fetchStartTime) {
        setIsLoading(false);
      }
    }
  };

  // Manual refresh function
  const refresh = () => {
    // Cancel any pending fetch
    cancel();
    // Do a fresh fetch
    fetchData(true);
  };

  // Cancel function
  const cancel = () => {
    if (timeoutIdRef.current) {
      clearTimeout(timeoutIdRef.current);
    }
    abortControllerRef.current.abort();
  };

  // Effect for initial fetch and cleanup
  useEffect(() => {
    isMountedRef.current = true;

    // Define a wrapper function to avoid dependency cycle
    const initialFetch = () => fetchData();
    initialFetch();

    return () => {
      isMountedRef.current = false;
      cancel();
    };
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [fetchFn, config.enablePolling]);

  return {
    data,
    error,
    isLoading,
    lastUpdated,
    refresh,
    cancel
  };
};

export default useSmartPolling;

client/src/services/api.js

@@ -0,0 +1,138 @@
import { fetchWithTimeout, createBackoffFetcher } from '../utils/fetchWithTimeout';

// Get the API base URL from environment variables
const API_BASE_URL = import.meta.env.VITE_API_BASE_URL || 'http://localhost:3000';

/**
 * Get the list of all torrents
 */
export const getTorrents = async () => {
  try {
    const response = await fetchWithTimeout(`${API_BASE_URL}/api/torrents`, {}, 5000);
    return await response.json();
  } catch (error) {
    console.error('Error fetching torrents:', error);
    throw error;
  }
};

/**
 * Get detailed information about a specific torrent
 * @param {string} id - Torrent ID or info hash
 */
export const getTorrentDetails = async (id) => {
  try {
    const response = await fetchWithTimeout(`${API_BASE_URL}/api/torrents/${id}`, {}, 10000);
    return await response.json();
  } catch (error) {
    console.error(`Error fetching details for torrent ${id}:`, error);
    throw error;
  }
};

/**
 * Get statistics for a specific torrent
 * @param {string} id - Torrent ID or info hash
 */
export const getTorrentStats = async (id) => {
  try {
    const response = await fetchWithTimeout(`${API_BASE_URL}/api/torrents/${id}/stats`, {}, 5000);
    return await response.json();
  } catch (error) {
    console.error(`Error fetching stats for torrent ${id}:`, error);
    throw error;
  }
};

/**
 * Get files for a specific torrent
 * @param {string} id - Torrent ID or info hash
 */
export const getTorrentFiles = async (id) => {
  try {
    const response = await fetchWithTimeout(`${API_BASE_URL}/api/torrents/${id}/files`, {}, 8000);
    return await response.json();
  } catch (error) {
    console.error(`Error fetching files for torrent ${id}:`, error);
    throw error;
  }
};

/**
 * Add a new torrent
 * @param {string} torrentId - Magnet URI or torrent info hash
 */
export const addTorrent = async (torrentId) => {
  try {
    const response = await fetchWithTimeout(`${API_BASE_URL}/api/torrents`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ torrentId }),
    }, 15000);
    return await response.json();
  } catch (error) {
    console.error('Error adding torrent:', error);
    throw error;
  }
};

/**
 * Delete a torrent
 * @param {string} id - Torrent ID or info hash
 */
export const deleteTorrent = async (id) => {
  try {
    const response = await fetchWithTimeout(`${API_BASE_URL}/api/torrents/${id}`, {
      method: 'DELETE',
    }, 10000);
    return await response.json();
  } catch (error) {
    console.error(`Error deleting torrent ${id}:`, error);
    throw error;
  }
};

/**
 * Get the URL for streaming a file from a torrent
 * @param {string} torrentId - Torrent ID or info hash
 * @param {number} fileIndex - File index
 */
export const getStreamUrl = (torrentId, fileIndex) => {
  return `${API_BASE_URL}/api/torrents/${torrentId}/files/${fileIndex}/stream`;
};

/**
 * Get IMDB data for a torrent
 * @param {string} id - Torrent ID or info hash
 */
export const getImdbData = async (id) => {
  try {
    const response = await fetchWithTimeout(`${API_BASE_URL}/api/torrents/${id}/imdb`, {}, 10000);
    return await response.json();
  } catch (error) {
    console.error(`Error fetching IMDB data for torrent ${id}:`, error);
    throw error;
  }
};

/**
 * Check server health
 */
export const checkServerHealth = async () => {
  try {
    const response = await fetchWithTimeout(`${API_BASE_URL}/api/system/health`, {}, 5000);
    return await response.json();
  } catch (error) {
    console.error('Error checking server health:', error);
    throw error;
  }
};
// Create enhanced fetchers with retry logic.
// Note: the id-based wrappers below build a fresh backoff fetcher on every call,
// so retries apply per call but backoff state is not shared between calls.
export const getTorrentsWithRetry = createBackoffFetcher(getTorrents);
export const getTorrentDetailsWithRetry = (id) => createBackoffFetcher(() => getTorrentDetails(id))();
export const getTorrentStatsWithRetry = (id) => createBackoffFetcher(() => getTorrentStats(id))();


@@ -0,0 +1,180 @@
// src/utils/apiClient.js

/**
 * Enhanced API client with timeout, retry, and error handling capabilities
 * Specifically designed to handle flaky connections and slow responses
 */

/**
 * Configuration for API client
 */
const defaultConfig = {
  baseTimeout: 8000,        // Base timeout for API calls in ms
  maxRetries: 2,            // Maximum number of retries
  retryDelay: 1000,         // Initial delay before retry
  exponentialFactor: 1.5,   // Factor for exponential backoff
  statusCodesToRetry: [408, 429, 500, 502, 503, 504]
};

/**
 * Creates an enhanced fetch function with timeout, retry, and error handling
 * @param {Object} userConfig - Custom configuration
 * @returns {Function} Enhanced fetch function
 */
export const createApiClient = (userConfig = {}) => {
  const config = { ...defaultConfig, ...userConfig };

  // Keep track of pending requests
  const pendingRequests = new Map();

  // Track consecutive failures for circuit breaking
  let consecutiveFailures = 0;

  /**
   * Enhanced fetch function
   * @param {string} url - URL to fetch
   * @param {Object} options - Fetch options
   * @returns {Promise} - Resolved with response data
   */
  const fetchWithRetry = async (url, options = {}) => {
    // Add base URL if relative
    const fullUrl = url.startsWith('http') ? url :
      (config.baseUrl ? `${config.baseUrl}${url}` : url);

    // If we already have a pending identical request, return that promise
    const requestKey = `${options.method || 'GET'}:${fullUrl}`;
    if (pendingRequests.has(requestKey)) {
      console.log(`Reusing pending request for: ${requestKey}`);
      return pendingRequests.get(requestKey);
    }

    // Check if circuit breaker is activated
    if (consecutiveFailures >= 5) {
      const backoffTime = Math.min(1000 * Math.pow(1.5, consecutiveFailures - 5), 30000);
      console.log(`🔌 Circuit breaker active, backing off for ${backoffTime}ms`);
      await new Promise(resolve => setTimeout(resolve, backoffTime));
    }

    let attempt = 0;
    let lastError = null;

    // Create a new request promise
    const requestPromise = (async () => {
      while (attempt <= config.maxRetries) {
        // Calculate timeout with exponential backoff
        const timeout = config.baseTimeout * Math.pow(config.exponentialFactor, attempt);
        try {
          // Create abort controller for timeout
          const controller = new AbortController();
          const signal = controller.signal;
const timeoutId = setTimeout(() => controller.abort(), timeout);
const response = await fetch(fullUrl, {
...options,
signal,
headers: {
'Content-Type': 'application/json',
...options.headers
}
});
clearTimeout(timeoutId);
// Check if response needs retry
if (config.statusCodesToRetry.includes(response.status)) {
lastError = new Error(`Server responded with ${response.status}`);
attempt++;
// Wait before retry (exponential backoff)
if (attempt <= config.maxRetries) {
const delay = config.retryDelay * Math.pow(config.exponentialFactor, attempt - 1);
console.log(`⏱️ Retrying in ${delay}ms (attempt ${attempt}/${config.maxRetries})`);
await new Promise(resolve => setTimeout(resolve, delay));
continue;
}
}
if (!response.ok) {
// Non-retryable error with a response body
const errorText = await response.text();
throw new Error(`API error ${response.status}: ${errorText}`);
}
// Success: only reset the circuit breaker once we have a usable response
consecutiveFailures = 0;
return response.json();
} catch (error) {
lastError = error;
if (error.name === 'AbortError') {
console.log(`⏱️ Request timed out after ${timeout}ms`);
}
// On last attempt, track failure for circuit breaker
if (attempt >= config.maxRetries) {
consecutiveFailures++;
break;
}
attempt++;
// Wait before retry (exponential backoff)
const delay = config.retryDelay * Math.pow(config.exponentialFactor, attempt - 1);
console.log(`⏱️ Retrying in ${delay}ms (attempt ${attempt}/${config.maxRetries})`);
await new Promise(resolve => setTimeout(resolve, delay));
}
}
throw lastError || new Error('Request failed after multiple attempts');
})();
// Store the promise for deduplication
pendingRequests.set(requestKey, requestPromise);
// Remove from pending after completion
requestPromise
.catch(() => {}) // Ignore errors as they will be caught by the caller
.finally(() => {
pendingRequests.delete(requestKey);
});
return requestPromise;
};
// Methods for common HTTP verbs
return {
get: (url, options = {}) => fetchWithRetry(url, { ...options, method: 'GET' }),
post: (url, data, options = {}) => fetchWithRetry(url, {
...options,
method: 'POST',
body: JSON.stringify(data),
}),
put: (url, data, options = {}) => fetchWithRetry(url, {
...options,
method: 'PUT',
body: JSON.stringify(data),
}),
delete: (url, options = {}) => fetchWithRetry(url, { ...options, method: 'DELETE' }),
// Reset the circuit breaker
resetCircuitBreaker: () => {
consecutiveFailures = 0;
}
};
};
// Create a default instance
export const api = createApiClient();
// Removed React hook to keep file vanilla JS
// If you need the React hook, import React properly in your component and use:
/*
import { createApiClient } from '../utils/apiClient';
import { useMemo } from 'react';
function YourComponent() {
const customConfig = { baseTimeout: 10000 };
const apiClient = useMemo(() => createApiClient(customConfig), []);
// Use apiClient here
}
*/
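The `pendingRequests` map above collapses identical in-flight requests into a single promise. A minimal standalone sketch of that deduplication pattern (the `slowOp` counter is a hypothetical stand-in for a real network call, not part of the API client):

```javascript
// Minimal in-flight request deduplication: identical keys share one promise.
const createDeduper = (fn) => {
  const pending = new Map();
  return (key, ...args) => {
    if (pending.has(key)) return pending.get(key); // reuse the in-flight promise
    const promise = fn(...args).finally(() => pending.delete(key));
    pending.set(key, promise);
    return promise;
  };
};

// Hypothetical slow operation, counting how many times it actually runs
let fetchCount = 0;
const slowOp = () => new Promise((resolve) => {
  fetchCount++;
  setTimeout(() => resolve('data'), 10);
});

const dedupedOp = createDeduper(slowOp);
```

Two concurrent calls with the same key (e.g. `dedupedOp('GET:/api/torrents')`) resolve to the same result while `slowOp` runs only once.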


@@ -0,0 +1,91 @@
/**
* Enhanced fetch with timeout support to prevent hanging requests
* @param {string} url - The URL to fetch
* @param {object} options - Fetch options
* @param {number} timeout - Timeout in milliseconds
* @returns {Promise<Response>} - Fetch response
*/
export const fetchWithTimeout = async (url, options = {}, timeout = 8000) => {
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), timeout);
try {
const response = await fetch(url, {
...options,
signal: controller.signal,
headers: {
...options.headers,
'Cache-Control': 'no-cache',
'Pragma': 'no-cache'
}
});
clearTimeout(timeoutId);
if (!response.ok) {
throw new Error(`HTTP error! Status: ${response.status}`);
}
return response;
} catch (error) {
clearTimeout(timeoutId);
if (error.name === 'AbortError') {
console.error(`Request to ${url} timed out after ${timeout}ms`);
throw new Error(`Request timeout after ${timeout}ms`);
}
throw error;
}
};
/**
 * Creates a fetch function with retry and exponential backoff
 * @param {Function} fetchFunction - The function that fetches data
 * @param {number} initialDelay - Initial retry delay in milliseconds
 * @param {number} maxRetries - Maximum number of retries
 * @returns {Function} - Enhanced fetch function with retry logic
 */
export const createBackoffFetcher = (fetchFunction, initialDelay = 3000, maxRetries = 3) => {
let currentDelay = initialDelay;
let retries = 0;
let activeRequestTimestamp = null;
// Return an enhanced version of the fetch function with retry logic
const enhancedFetch = async () => {
// Generate unique request ID
const requestTimestamp = Date.now();
activeRequestTimestamp = requestTimestamp;
try {
const response = await fetchFunction();
// On success, reset retry parameters
retries = 0;
currentDelay = initialDelay;
return response;
} catch (error) {
// Only process if this is still the active request
if (requestTimestamp !== activeRequestTimestamp) {
throw error;
}
// If we've hit max retries, throw the error
if (retries >= maxRetries) {
retries = 0; // Reset for next time
currentDelay = initialDelay;
throw error;
}
// Otherwise, increment retries and apply exponential backoff
retries++;
const backoffDelay = currentDelay;
currentDelay = Math.min(currentDelay * 1.5, 30000); // Max 30 seconds
console.log(`Retrying after ${backoffDelay}ms (attempt ${retries}/${maxRetries})`);
// Wait for the backoff period
await new Promise(resolve => setTimeout(resolve, backoffDelay));
// Retry through the wrapper itself, so later failures are also retried
return enhancedFetch();
}
};
return enhancedFetch;
};
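The backoff above grows the delay by a factor of 1.5 per attempt, capped at 30 seconds. A small helper (illustrative only, not exported by this module) makes that schedule explicit:

```javascript
// Compute the retry delay schedule used by the backoff logic above:
// start at initialDelay, multiply by `factor` each attempt, cap at `cap` ms.
const backoffSchedule = (initialDelay, maxRetries, factor = 1.5, cap = 30000) => {
  const delays = [];
  let current = initialDelay;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    delays.push(current);
    current = Math.min(current * factor, cap);
  }
  return delays;
};
```

With the defaults (`initialDelay = 3000`, `maxRetries = 3`) this yields delays of 3000, 4500, and 6750 ms.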

server-new/index.js Normal file
server-new/package.json Normal file
server-new/test-cors.sh Normal file

@@ -0,0 +1,315 @@
// optimizedStreamingHandler.js
/**
* Optimized streaming handler for low-resource environments
* Provides efficient, chunked streaming with proper memory management
*/
/**
* Creates an optimized streaming handler
* @param {Object} options - Configuration options
* @returns {Function} Express request handler
*/
const createOptimizedStreamingHandler = (options = {}) => {
// Default options
const config = {
chunkSize: 256 * 1024, // Default 256KB chunks
streamTimeout: 20000, // 20 seconds timeout
prioritizePieces: true, // Whether to prioritize pieces at seek position
logLevel: 1, // 0=none, 1=basic, 2=verbose
universalTorrentResolver: null, // Required function to resolve torrents
...options
};
if (!config.universalTorrentResolver) {
throw new Error('universalTorrentResolver is required');
}
return async (req, res) => {
const identifier = req.params.identifier;
const fileIdx = parseInt(req.params.fileIdx, 10);
if (config.logLevel > 0) {
console.log(`🎬 Streaming request: ${identifier}/${fileIdx}`);
}
// Set important headers immediately
res.setHeader('Connection', 'keep-alive');
res.setHeader('Cache-Control', 'no-cache, no-store');
// Timeout handling
let streamTimeout;
try {
// Get the torrent with a timeout
const torrentPromise = config.universalTorrentResolver(identifier);
const timeoutPromise = new Promise((_, reject) =>
setTimeout(() => reject(new Error('Resolver timeout')), 5000)
);
const torrent = await Promise.race([torrentPromise, timeoutPromise]);
if (!torrent) {
return res.status(404).send('Torrent not found for streaming');
}
const file = torrent.files[fileIdx];
if (!file) {
return res.status(404).send('File not found');
}
// Ensure torrent is active and file is selected
torrent.resume();
file.select();
if (config.logLevel > 0) {
console.log(`🎬 Streaming: ${file.name} (${(file.length / 1024 / 1024).toFixed(1)} MB)`);
}
// Determine MIME type
const ext = file.name.split('.').pop().toLowerCase();
const mimeTypes = {
'mp4': 'video/mp4',
'mkv': 'video/x-matroska',
'avi': 'video/x-msvideo',
'mov': 'video/quicktime',
'wmv': 'video/x-ms-wmv',
'flv': 'video/x-flv',
'webm': 'video/webm',
'm4v': 'video/mp4',
'mp3': 'audio/mpeg',
'm4a': 'audio/mp4',
'wav': 'audio/wav',
'ogg': 'audio/ogg',
'flac': 'audio/flac',
'jpg': 'image/jpeg',
'jpeg': 'image/jpeg',
'png': 'image/png',
'gif': 'image/gif',
'webp': 'image/webp'
};
const contentType = mimeTypes[ext] || 'application/octet-stream';
// Handle range request with small chunks
const range = req.headers.range;
if (range) {
const parts = range.replace(/bytes=/, "").split("-");
const start = parseInt(parts[0], 10);
// Calculate end with a reasonable chunk size
const end = parts[1] ? parseInt(parts[1], 10) : Math.min(start + config.chunkSize - 1, file.length - 1);
// Safety check for valid ranges
if (isNaN(start) || isNaN(end) || start < 0 || end >= file.length || start > end) {
return res.status(416).send('Range Not Satisfiable');
}
const chunkSize = (end - start) + 1;
// Log seeking behavior for debugging
if (start > 0 && config.logLevel > 1) {
console.log(`⏩ Seek to position ${(start / file.length * 100).toFixed(1)}% of the file`);
}
// Prioritize pieces at the seek position
if (config.prioritizePieces && start > 0 && file._torrent) {
try {
const pieceLength = torrent.pieceLength;
const startPiece = Math.floor(start / pieceLength);
// Prioritize just a few pieces for better seeking
for (let i = 0; i < 3; i++) {
if (startPiece + i < file._endPiece) {
file._torrent.select(startPiece + i, startPiece + i + 1, 1);
}
}
} catch (err) {
if (config.logLevel > 0) {
console.log(`⚠️ Error prioritizing pieces: ${err.message}`);
}
}
}
// Set response headers
res.writeHead(206, {
'Content-Range': `bytes ${start}-${end}/${file.length}`,
'Accept-Ranges': 'bytes',
'Content-Length': chunkSize,
'Content-Type': contentType,
'Cache-Control': 'no-cache',
});
// Create stream with timeout handling
const stream = file.createReadStream({ start, end });
let bytesStreamed = 0;
// Set initial timeout
streamTimeout = setTimeout(() => {
if (config.logLevel > 0) {
console.log(`⚠️ Stream timeout after ${config.streamTimeout}ms`);
}
stream.destroy();
res.end();
}, config.streamTimeout);
// Handle stream events
stream.on('data', (chunk) => {
bytesStreamed += chunk.length;
// Reset timeout on data
clearTimeout(streamTimeout);
streamTimeout = setTimeout(() => {
if (config.logLevel > 0) {
console.log(`⚠️ Stream timeout after inactivity`);
}
stream.destroy();
res.end();
}, config.streamTimeout);
// Flow control: pause stream if response can't keep up
if (!res.write(chunk)) {
stream.pause();
}
});
// Resume stream when response drains
res.on('drain', () => {
stream.resume();
});
stream.on('end', () => {
clearTimeout(streamTimeout);
res.end();
if (config.logLevel > 0) {
console.log(`✅ Stream chunk completed: ${bytesStreamed} bytes streamed`);
}
});
stream.on('error', (error) => {
clearTimeout(streamTimeout);
if (config.logLevel > 0) {
console.log(`❌ Stream error: ${error.message}`);
}
if (!res.headersSent) {
res.status(500).send('Stream error');
} else {
res.end();
}
});
// Clean up on client disconnect
req.on('close', () => {
clearTimeout(streamTimeout);
stream.destroy();
if (config.logLevel > 1) {
console.log(`🔌 Client disconnected from stream`);
}
});
} else {
// Non-range request - still use chunked approach for large files
if (file.length > 10 * 1024 * 1024) { // Files larger than 10MB
// For large files without range, respond with a 206 and the first chunk
const end = Math.min(1024 * 1024 - 1, file.length - 1); // First 1MB
res.writeHead(206, {
'Content-Range': `bytes 0-${end}/${file.length}`,
'Accept-Ranges': 'bytes',
'Content-Length': end + 1,
'Content-Type': contentType,
'Cache-Control': 'no-cache',
});
const stream = file.createReadStream({ start: 0, end });
// Set timeout
streamTimeout = setTimeout(() => {
if (config.logLevel > 0) {
console.log(`⚠️ Stream timeout after ${config.streamTimeout}ms`);
}
stream.destroy();
res.end();
}, config.streamTimeout);
stream.pipe(res);
stream.on('end', () => {
clearTimeout(streamTimeout);
});
stream.on('error', (error) => {
clearTimeout(streamTimeout);
if (config.logLevel > 0) {
console.log(`❌ Stream error: ${error.message}`);
}
if (!res.headersSent) {
res.status(500).send('Stream error');
} else {
res.end();
}
});
// Clean up on client disconnect
req.on('close', () => {
clearTimeout(streamTimeout);
stream.destroy();
});
} else {
// Small files - normal response
res.writeHead(200, {
'Content-Length': file.length,
'Content-Type': contentType,
'Accept-Ranges': 'bytes',
'Cache-Control': 'no-cache',
});
const stream = file.createReadStream();
// Set timeout
streamTimeout = setTimeout(() => {
if (config.logLevel > 0) {
console.log(`⚠️ Stream timeout after ${config.streamTimeout}ms`);
}
stream.destroy();
res.end();
}, config.streamTimeout);
stream.pipe(res);
stream.on('end', () => {
clearTimeout(streamTimeout);
});
stream.on('error', (error) => {
clearTimeout(streamTimeout);
if (config.logLevel > 0) {
console.log(`❌ Stream error: ${error.message}`);
}
if (!res.headersSent) {
res.status(500).send('Stream error');
} else {
res.end();
}
});
// Clean up on client disconnect
req.on('close', () => {
clearTimeout(streamTimeout);
stream.destroy();
});
}
}
} catch (error) {
clearTimeout(streamTimeout);
if (config.logLevel > 0) {
console.error(`❌ Streaming error:`, error.message);
}
if (!res.headersSent) {
res.status(500).send('Error streaming file: ' + error.message);
} else {
res.end();
}
}
};
};
module.exports = createOptimizedStreamingHandler;
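The range handling above boils down to: parse `bytes=start-end`, default the end to one chunk, and reject impossible windows with 416. A standalone sketch of that logic (function name is illustrative):

```javascript
// Parse a Range header, defaulting the end of the window to one chunk
// and returning null for unsatisfiable ranges (caller responds 416).
const parseRange = (rangeHeader, fileLength, chunkSize) => {
  const parts = rangeHeader.replace(/bytes=/, '').split('-');
  const start = parseInt(parts[0], 10);
  const end = parts[1]
    ? parseInt(parts[1], 10)
    : Math.min(start + chunkSize - 1, fileLength - 1);
  if (isNaN(start) || isNaN(end) || start < 0 || end >= fileLength || start > end) {
    return null;
  }
  return { start, end, length: end - start + 1 };
};
```

For a 1000-byte file and 256-byte chunks, `bytes=0-` yields the window 0-255, while `bytes=900-2000` is rejected because the end exceeds the file length.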


@@ -0,0 +1,183 @@
// Enhanced streaming handler optimized for low-resource environments
const mime = require('mime-types');
const path = require('path');
/**
* Stream handler factory - creates a handler optimized for low-resource environments
* @param {Object} options - Configuration options
* @returns {Function} Express route handler
*/
const createOptimizedStreamHandler = (options = {}) => {
// Default options with sensible values for low-resource servers
const config = {
maxChunkSize: options.maxChunkSize || 1024 * 1024, // 1MB default chunk size
defaultMimeType: options.defaultMimeType || 'video/mp4',
prioritizeSeeks: options.prioritizeSeeks !== undefined ? options.prioritizeSeeks : true,
priorityPieces: options.priorityPieces || 3,
...options
};
// MIME type mapping with more video types
const mimeTypes = {
'mp4': 'video/mp4',
'webm': 'video/webm',
'mkv': 'video/x-matroska',
'avi': 'video/x-msvideo',
'mov': 'video/quicktime',
'wmv': 'video/x-ms-wmv',
'flv': 'video/x-flv',
'm4v': 'video/mp4',
'ts': 'video/mp2t',
'mts': 'video/mp2t',
'mpg': 'video/mpeg',
'mpeg': 'video/mpeg',
'ogv': 'video/ogg',
'3gp': 'video/3gpp',
'mp3': 'audio/mpeg',
'wav': 'audio/wav',
'ogg': 'audio/ogg',
'm4a': 'audio/mp4',
'aac': 'audio/aac',
'srt': 'application/x-subrip',
'vtt': 'text/vtt',
'ass': 'text/plain',
};
// Express route handler
return async (req, res) => {
const requestId = req.requestId || `${Date.now()}-${Math.floor(Math.random() * 1000)}`;
try {
const { identifier, index } = req.params;
// Get torrent from your resolver
const torrent = await req.app.locals.universalTorrentResolver(identifier);
if (!torrent) {
return res.status(404).json({ error: 'Torrent not found for streaming' });
}
const fileIndex = parseInt(index, 10);
const file = torrent.files[fileIndex];
if (!file) {
return res.status(404).json({ error: 'File not found' });
}
// Ensure torrent is active and file is selected
torrent.resume();
file.select();
// Set file priority high for streaming
if (typeof file.prioritize === 'function') {
file.prioritize();
}
// Determine content type from file extension
const ext = path.extname(file.name).toLowerCase().slice(1);
const contentType = mimeTypes[ext] || mime.lookup(file.name) || config.defaultMimeType;
// Parse range header with better error handling
let start = 0;
let end = Math.min(file.length - 1, start + config.maxChunkSize - 1);
let isRangeRequest = false;
if (req.headers.range) {
try {
isRangeRequest = true;
const parts = req.headers.range.replace(/bytes=/, "").split("-");
start = parseInt(parts[0], 10);
// End is either specified or limited by max chunk size
if (parts[1] && parts[1].trim() !== '') {
end = Math.min(parseInt(parts[1], 10), file.length - 1);
} else {
end = Math.min(file.length - 1, start + config.maxChunkSize - 1);
}
// Validate range values
if (isNaN(start) || isNaN(end) || start < 0 || end >= file.length || start > end) {
throw new Error('Invalid range');
}
} catch (error) {
console.log(`⚠️ [${requestId}] Invalid range header: ${req.headers.range}`);
// Fall back to default range
start = 0;
end = Math.min(file.length - 1, config.maxChunkSize - 1);
isRangeRequest = false;
}
}
// Calculate chunk size for response
const chunkSize = (end - start) + 1;
// Set appropriate headers
res.setHeader('Content-Type', contentType);
res.setHeader('Content-Length', chunkSize);
res.setHeader('Accept-Ranges', 'bytes');
if (isRangeRequest) {
res.setHeader('Content-Range', `bytes ${start}-${end}/${file.length}`);
res.status(206); // Partial content
} else {
res.status(200);
}
// Handle seeking by prioritizing pieces
if (config.prioritizeSeeks && start > 0) {
const pieceLength = torrent.pieceLength;
const startPiece = Math.floor(start / pieceLength);
try {
// Prioritize pieces at the seek position
for (let i = 0; i < config.priorityPieces; i++) {
const pieceIndex = startPiece + i;
if (pieceIndex < file._endPiece) {
// Use different prioritization methods depending on what's available
if (file._torrent && typeof file._torrent.select === 'function') {
file._torrent.select(pieceIndex, pieceIndex + 1, 1);
}
if (file._torrent && typeof file._torrent.critical === 'function') {
file._torrent.critical(pieceIndex);
}
}
}
} catch (error) {
console.log(`⚠️ [${requestId}] Error prioritizing pieces: ${error.message}`);
}
}
// Create read stream with appropriate range
const stream = file.createReadStream({ start, end });
// Handle stream events
stream.on('error', (err) => {
console.log(`❌ [${requestId}] Stream error: ${err.message}`);
if (!res.headersSent) {
res.status(500).json({ error: 'Stream error: ' + err.message });
} else if (!res.writableEnded) {
res.end();
}
});
// Handle client disconnect
req.on('close', () => {
stream.destroy();
});
// Pipe the stream to the response
stream.pipe(res);
} catch (error) {
console.log(`❌ [${requestId}] Streaming handler error: ${error.message}`);
if (!res.headersSent) {
res.status(500).json({ error: 'Streaming error: ' + error.message });
} else if (!res.writableEnded) {
res.end();
}
}
};
};
module.exports = createOptimizedStreamHandler;
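Seek prioritization maps the requested byte offset to torrent piece indices. A sketch of the arithmetic used above (names are illustrative; WebTorrent's internal `file._endPiece` is approximated here by an explicit `endPiece` argument):

```javascript
// Map a byte offset to the piece that contains it, then list the next
// few pieces to mark as high priority for smooth seeking.
const piecesToPrioritize = (byteOffset, pieceLength, endPiece, count = 3) => {
  const startPiece = Math.floor(byteOffset / pieceLength);
  const pieces = [];
  for (let i = 0; i < count; i++) {
    if (startPiece + i < endPiece) pieces.push(startPiece + i);
  }
  return pieces;
};
```

Seeking to byte 5,000,000 in a torrent with 1 MiB pieces lands in piece 4, so pieces 4-6 get prioritized.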

server/index-optimized.js Normal file

@@ -0,0 +1,344 @@
// server/index-optimized.js
/**
* SeedBox-Lite - Optimized for low-resource environments
* This version includes memory-aware behavior and performance optimizations
*/
// Core modules
const express = require('express');
const cors = require('cors');
const path = require('path');
const fs = require('fs').promises;
const os = require('os');
// Import optimized handlers
const createRequestLimiter = require('./middleware/requestLimiter');
const createOptimizedStreamingHandler = require('./handlers/optimizedStreamingHandler');
// Environment detection
const isLowResourceEnv = os.totalmem() < 4 * 1024 * 1024 * 1024; // Less than 4GB
const cpuCount = os.cpus().length;
console.log(`🖥️ System info: ${(os.totalmem() / 1024 / 1024 / 1024).toFixed(1)}GB RAM, ${cpuCount} CPUs`);
console.log(`⚙️ Running in ${isLowResourceEnv ? 'low-resource' : 'normal'} mode`);
// Environment Configuration
const config = {
server: {
port: process.env.SERVER_PORT || 3001,
host: process.env.SERVER_HOST || 'localhost'
},
torrent: {
downloadPath: process.env.DOWNLOAD_PATH || path.join(__dirname, '../downloads'),
maxConnections: isLowResourceEnv ? Math.max(20, Math.floor(100 / cpuCount)) : 100,
maxDownloadSpeed: isLowResourceEnv ? 1 * 1024 * 1024 : undefined, // 1MB/s in low resource mode
maxUploadSpeed: isLowResourceEnv ? 256 * 1024 : undefined, // 256KB/s in low resource mode
},
api: {
maxConcurrentRequests: isLowResourceEnv ? 15 : 50,
requestTimeout: isLowResourceEnv ? 30000 : 60000, // 30s in low resource mode
}
};
// Create Express app
const app = express();
app.use(express.json());
app.use(cors());
// Apply request limiter middleware
app.use(createRequestLimiter({
maxConcurrentRequests: config.api.maxConcurrentRequests,
requestTimeout: config.api.requestTimeout,
logLevel: 1
}));
// Create WebTorrent client with optimized settings
const WebTorrent = require('webtorrent');
const client = new WebTorrent({
maxConns: config.torrent.maxConnections,
downloadLimit: config.torrent.maxDownloadSpeed,
uploadLimit: config.torrent.maxUploadSpeed
});
// Universal torrent resolver - finds a torrent by any identifier
const universalTorrentResolver = (identifier) => {
// First try to find by infoHash
let torrent = client.torrents.find(t => t.infoHash === identifier);
if (torrent) return torrent;
// Then by magnet URI
torrent = client.torrents.find(t => t.magnetURI.includes(identifier));
if (torrent) return torrent;
// Then by name
torrent = client.torrents.find(t => t.name === identifier);
if (torrent) return torrent;
// Not found
return null;
};
// Create optimized streaming handler
const streamHandler = createOptimizedStreamingHandler({
chunkSize: isLowResourceEnv ? 256 * 1024 : 1024 * 1024, // 256KB chunks in low resource mode
streamTimeout: isLowResourceEnv ? 20000 : 30000, // 20s timeout in low resource mode
universalTorrentResolver,
logLevel: 1
});
// Serve static client files
app.use(express.static(path.join(__dirname, '../client/build')));
// UNIVERSAL GET TORRENTS - Optimized for performance and memory usage
app.get('/api/torrents', (req, res) => {
try {
// Get torrents with basic info only
const torrents = client.torrents.map(torrent => ({
infoHash: torrent.infoHash,
magnetURI: torrent.magnetURI,
name: torrent.name,
progress: Math.min(torrent.progress, 1), // Ensure progress is never > 1
downloaded: torrent.downloaded,
uploaded: torrent.uploaded,
downloadSpeed: torrent.downloadSpeed,
uploadSpeed: torrent.uploadSpeed,
timeRemaining: torrent.timeRemaining,
numPeers: torrent.numPeers,
ratio: torrent.uploaded / (torrent.downloaded || 1),
active: !torrent.paused,
done: torrent.done
}));
res.json(torrents);
} catch (err) {
console.error(`❌ Error getting torrents list:`, err.message);
res.status(500).json({ error: 'Error getting torrents list' });
}
});
// UNIVERSAL GET TORRENT DETAILS - Optimized
app.get('/api/torrents/:identifier', async (req, res) => {
const identifier = req.params.identifier;
try {
const torrent = universalTorrentResolver(identifier);
if (!torrent) {
return res.status(404).json({ error: 'Torrent not found' });
}
// Return basic torrent info without heavy file details
const basicInfo = {
infoHash: torrent.infoHash,
magnetURI: torrent.magnetURI,
name: torrent.name,
announce: torrent.announce,
created: torrent.created,
createdBy: torrent.createdBy,
comment: torrent.comment,
// Status info
progress: Math.min(torrent.progress, 1),
downloaded: torrent.downloaded,
uploaded: torrent.uploaded,
downloadSpeed: torrent.downloadSpeed,
uploadSpeed: torrent.uploadSpeed,
timeRemaining: torrent.timeRemaining,
numPeers: torrent.numPeers,
ratio: torrent.uploaded / (torrent.downloaded || 1),
active: !torrent.paused,
done: torrent.done,
// File summary (lightweight)
fileCount: torrent.files.length,
totalSize: torrent.length,
// Add just lightweight file info
files: torrent.files.map((file, i) => ({
name: file.name,
path: file.path,
length: file.length,
progress: Math.min(file.progress, 1),
index: i
}))
};
res.json(basicInfo);
} catch (err) {
console.error(`❌ Error getting torrent ${identifier}:`, err.message);
res.status(500).json({ error: `Error getting torrent: ${err.message}` });
}
});
// UNIVERSAL FILES ENDPOINT - Returns just the files array
app.get('/api/torrents/:identifier/files', async (req, res) => {
const identifier = req.params.identifier;
try {
const torrent = universalTorrentResolver(identifier);
if (!torrent) {
return res.status(404).json({ error: 'Torrent not found' });
}
// Return lightweight file info
const files = torrent.files.map((file, i) => ({
name: file.name,
path: file.path,
length: file.length,
progress: Math.min(file.progress, 1),
index: i
}));
res.json(files);
} catch (err) {
console.error(`❌ Error getting files for ${identifier}:`, err.message);
res.status(500).json({ error: 'Error getting files' });
}
});
// UNIVERSAL STREAMING - Use the optimized handler
app.get('/api/torrents/:identifier/files/:fileIdx/stream', streamHandler);
// ADD TORRENT ENDPOINT
app.post('/api/torrents/add', async (req, res) => {
try {
const { magnetUri, torrentUrl, torrentFile } = req.body;
if (!magnetUri && !torrentUrl && !torrentFile) {
return res.status(400).json({ error: 'No torrent source provided' });
}
// Check if we're already at capacity in low-resource mode
if (isLowResourceEnv && client.torrents.length >= 10) {
return res.status(429).json({
error: 'Too many torrents',
message: 'The server is in low-resource mode and cannot handle more torrents. Please remove some first.'
});
}
let torrentId = magnetUri || torrentUrl || torrentFile;
client.add(torrentId, { path: config.torrent.downloadPath }, (torrent) => {
res.json({
infoHash: torrent.infoHash,
magnetURI: torrent.magnetURI,
name: torrent.name
});
});
} catch (err) {
console.error('Error adding torrent:', err);
res.status(500).json({ error: 'Error adding torrent' });
}
});
// REMOVE TORRENT ENDPOINT
app.delete('/api/torrents/:identifier', (req, res) => {
const identifier = req.params.identifier;
try {
const torrent = universalTorrentResolver(identifier);
if (!torrent) {
return res.status(404).json({ error: 'Torrent not found' });
}
client.remove(torrent.infoHash, (err) => {
if (err) {
console.error(`❌ Error removing torrent ${identifier}:`, err.message);
return res.status(500).json({ error: 'Error removing torrent' });
}
res.json({ success: true, message: 'Torrent removed' });
});
} catch (err) {
console.error(`❌ Error in remove endpoint:`, err.message);
res.status(500).json({ error: 'Error processing remove request' });
}
});
// PAUSE/RESUME TORRENT ENDPOINTS
app.post('/api/torrents/:identifier/pause', (req, res) => {
const identifier = req.params.identifier;
try {
const torrent = universalTorrentResolver(identifier);
if (!torrent) {
return res.status(404).json({ error: 'Torrent not found' });
}
torrent.pause();
res.json({ success: true, status: 'paused' });
} catch (err) {
console.error(`❌ Error pausing torrent ${identifier}:`, err.message);
res.status(500).json({ error: 'Error pausing torrent' });
}
});
app.post('/api/torrents/:identifier/resume', (req, res) => {
const identifier = req.params.identifier;
try {
const torrent = universalTorrentResolver(identifier);
if (!torrent) {
return res.status(404).json({ error: 'Torrent not found' });
}
torrent.resume();
res.json({ success: true, status: 'resumed' });
} catch (err) {
console.error(`❌ Error resuming torrent ${identifier}:`, err.message);
res.status(500).json({ error: 'Error resuming torrent' });
}
});
// SERVER INFO ENDPOINT
app.get('/api/server/info', (req, res) => {
try {
const memoryUsage = process.memoryUsage();
const systemInfo = {
platform: os.platform(),
arch: os.arch(),
cpus: os.cpus().length,
totalMemory: os.totalmem(),
freeMemory: os.freemem(),
uptime: os.uptime(),
nodeVersion: process.version,
processMemory: {
rss: memoryUsage.rss,
heapTotal: memoryUsage.heapTotal,
heapUsed: memoryUsage.heapUsed,
external: memoryUsage.external
},
torrents: client.torrents.length,
isLowResourceMode: isLowResourceEnv
};
res.json(systemInfo);
} catch (err) {
console.error('Error getting server info:', err);
res.status(500).json({ error: 'Error getting server info' });
}
});
// Catch-all for React router
app.get('*', (req, res) => {
res.sendFile(path.join(__dirname, '../client/build/index.html'));
});
// Start server
app.listen(config.server.port, () => {
console.log(`🚀 Server running on http://${config.server.host}:${config.server.port}`);
console.log(`📂 Downloads folder: ${config.torrent.downloadPath}`);
});
// Handle process termination
process.on('SIGINT', () => {
console.log('Shutting down...');
client.destroy((err) => {
if (err) console.error('Error destroying WebTorrent client:', err);
process.exit(0);
});
});
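The resource-aware settings above can be summarized as a pure function of total memory and CPU count. A sketch mirroring the thresholds in this file (with the connection count floored to an integer; function name is illustrative):

```javascript
// Derive throttling settings from system resources, matching the
// low-resource thresholds used by the server: under 4GB RAM => throttle.
const deriveConfig = (totalMemBytes, cpuCount) => {
  const low = totalMemBytes < 4 * 1024 * 1024 * 1024;
  return {
    isLowResourceMode: low,
    maxConnections: low ? Math.max(20, Math.floor(100 / cpuCount)) : 100,
    maxConcurrentRequests: low ? 15 : 50,
    requestTimeout: low ? 30000 : 60000,
  };
};
```

On the 2GB/1-core target instance this enables low-resource mode with 15 concurrent API requests and a 30s request timeout.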


@@ -0,0 +1,113 @@
// This file contains optimized streaming middleware for low-resource environments
/**
* Creates optimized streaming middleware
* @param {Object} options - Configuration options
* @returns {Function} Express middleware
*/
const createOptimizedStreamingMiddleware = (options = {}) => {
// Default options
const config = {
maxChunkSize: options.maxChunkSize || 1024 * 1024, // 1MB default chunk size
streamTimeout: options.streamTimeout || 15000, // 15s timeout
globalTimeout: options.globalTimeout || 60000, // 60s global timeout
logRequests: options.logRequests !== undefined ? options.logRequests : true,
...options
};
return async (req, res, next) => {
// Skip non-streaming routes
if (!req.path.includes('/stream')) {
return next();
}
const startTime = Date.now();
const requestId = `${Date.now()}-${Math.floor(Math.random() * 1000)}`;
// Set up response headers early
res.setHeader('Connection', 'keep-alive');
res.setHeader('Cache-Control', 'no-cache, no-store');
res.setHeader('Accept-Ranges', 'bytes');
// Set up timeout handling
let streamTimeout;
const clearStreamTimeout = () => {
if (streamTimeout) {
clearTimeout(streamTimeout);
streamTimeout = null;
}
};
const setStreamTimeout = (ms, callback) => {
clearStreamTimeout();
streamTimeout = setTimeout(callback, ms);
};
// Set global timeout to prevent hanging requests
const globalTimeout = setTimeout(() => {
console.log(`⏱️ Global timeout reached for stream request ${requestId}`);
clearStreamTimeout();
if (!res.headersSent) {
res.status(504).json({ error: 'Global timeout reached' });
} else if (!res.writableEnded) {
res.end();
}
}, config.globalTimeout);
// Clean up function
const cleanUp = () => {
clearTimeout(globalTimeout);
clearStreamTimeout();
};
// Handle request completion
res.on('finish', () => {
cleanUp();
if (config.logRequests) {
const duration = Date.now() - startTime;
console.log(`✅ Stream request ${requestId} completed in ${duration}ms`);
}
});
// Handle client disconnection
res.on('close', () => {
cleanUp();
if (!res.writableEnded) {
console.log(`🔌 Client disconnected from stream request ${requestId}`);
}
});
try {
// Log request start
if (config.logRequests) {
console.log(`🎬 Stream request ${requestId} started: ${req.originalUrl}`);
}
// Set initial timeout
setStreamTimeout(config.streamTimeout, () => {
console.log(`⏱️ Stream timeout for request ${requestId}`);
if (!res.headersSent) {
res.status(504).json({ error: 'Stream timeout' });
} else if (!res.writableEnded) {
res.end();
}
});
// Continue to actual streaming handler
next();
} catch (error) {
cleanUp();
console.error(`❌ Stream middleware error: ${error.message}`);
if (!res.headersSent) {
res.status(500).json({ error: 'Stream processing error' });
} else if (!res.writableEnded) {
res.end();
}
}
};
};
module.exports = createOptimizedStreamingMiddleware;
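Both timeout handlers above make the same "how do we fail safely?" decision: before headers are sent we can still reply with a 504, after that the only safe move is to end the stream, and once the response has ended there is nothing left to do. A minimal standalone sketch of that decision (the `timeoutAction` helper is illustrative, not part of the module):

```javascript
// Sketch of the failure-path decision used in the stream and global
// timeout handlers: pick an action based on response state.
function timeoutAction(res) {
  if (!res.headersSent) return 'status-504'; // can still send an error status
  if (!res.writableEnded) return 'end';      // headers gone out: just terminate
  return 'noop';                             // already finished: nothing to do
}

// Exercised with plain objects standing in for an Express response:
console.log(timeoutAction({ headersSent: false, writableEnded: false })); // 'status-504'
console.log(timeoutAction({ headersSent: true, writableEnded: false }));  // 'end'
console.log(timeoutAction({ headersSent: true, writableEnded: true }));   // 'noop'
```

Checking `headersSent` first matters: calling `res.status(...)` after headers have been flushed throws, while `res.end()` on an ended response is a silent no-op.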

@@ -0,0 +1,101 @@
// Request limiter middleware for preventing API overload
/**
* Creates a request limiter middleware for protecting against API overload
* @param {Object} options - Configuration options
* @returns {Function} Express middleware
*/
const createRequestLimiter = (options = {}) => {
// Default options
const config = {
maxConcurrentRequests: options.maxConcurrentRequests || 15,
maxRequestsPerIP: options.maxRequestsPerIP || 5,
logLevel: options.logLevel || 1, // 0=none, 1=errors only, 2=all
requestTimeout: options.requestTimeout || 30000, // 30s default timeout
...options
};
// Track active requests
const activeRequests = {
count: 0,
byIP: {}
};
return (req, res, next) => {
// Skip for static assets
if (req.path.includes('.') || req.path.startsWith('/assets/')) {
return next();
}
const clientIP = req.ip || req.socket.remoteAddress || 'unknown'; // req.socket replaces the deprecated req.connection
// Track this request
activeRequests.byIP[clientIP] = (activeRequests.byIP[clientIP] || 0) + 1;
activeRequests.count++;
// Log excessive connections from same IP
if (config.logLevel > 1 && activeRequests.byIP[clientIP] > config.maxRequestsPerIP) {
console.log(`⚠️ High number of concurrent requests (${activeRequests.byIP[clientIP]}) from ${clientIP}`);
}
// Set up a global timeout for this request
const requestTimeout = setTimeout(() => {
console.log(`⏱️ Request timeout for ${req.originalUrl} from ${clientIP}`);
if (!res.headersSent) {
res.status(504).json({
error: 'Request timeout',
message: 'Your request took too long to process'
});
} else if (!res.writableEnded) {
// Headers already sent: end the response so the client is not left hanging
res.end();
}
}, config.requestTimeout);
// If too many concurrent requests, limit based on strategy
if (activeRequests.count > config.maxConcurrentRequests) {
// If way too many requests, reject outright
if (activeRequests.count > config.maxConcurrentRequests * 1.5) {
// Clean up
activeRequests.count--;
activeRequests.byIP[clientIP]--;
clearTimeout(requestTimeout);
if (config.logLevel > 0) {
console.log(`🛑 Request limit exceeded: ${activeRequests.count}/${config.maxConcurrentRequests} active requests`);
}
return res.status(429).json({
error: 'Too many requests',
message: 'Server is currently busy. Please try again later.'
});
}
// If this IP already has many requests, reject
if (activeRequests.byIP[clientIP] > config.maxRequestsPerIP) {
// Clean up
activeRequests.count--;
activeRequests.byIP[clientIP]--;
clearTimeout(requestTimeout);
return res.status(429).json({
error: 'Too many requests',
message: 'You have too many pending requests. Please try again later.'
});
}
}
// Clean up when the response is finished; run once, since modern Node
// emits 'close' after 'finish' and we must not double-decrement
let cleanedUp = false;
const cleanup = () => {
if (cleanedUp) return;
cleanedUp = true;
clearTimeout(requestTimeout);
activeRequests.count = Math.max(0, activeRequests.count - 1);
const remaining = Math.max(0, (activeRequests.byIP[clientIP] || 0) - 1);
if (remaining === 0) {
delete activeRequests.byIP[clientIP]; // drop idle IPs so the map cannot grow unbounded
} else {
activeRequests.byIP[clientIP] = remaining;
}
};
// Register cleanup on both finish and close events
res.on('finish', cleanup);
res.on('close', cleanup);
// Continue to the next middleware
next();
};
};
module.exports = createRequestLimiter;