The Storage module provides a generic, S3-compatible object storage layer for file uploads and transfers. It works with AWS S3, OVH Object Storage, MinIO, and other S3-compatible services. The module supports streaming large files (800MB+) without buffering them in memory.
Architecture Overview
Storage System Components
Save a file from a source URL to S3.
```mermaid
flowchart LR
    subgraph Input
        A[Source URL]
    end
    subgraph Services
        B[FileTransfer Service]
        E[S3 Service]
        D[StorageFile Repository]
    end
    subgraph Storage
        C[S3 Bucket]
    end
    A --> B
    B --> E
    B --> D
    E --> C
```

Provide a file from the S3 bucket to the frontend.
```mermaid
flowchart RL
    subgraph Services
        B[FileTransfer Service]
        E[S3 Service]
        D[StorageFile Repository]
    end
    subgraph Storage
        C[S3 Bucket]
    end
    subgraph Output
        F[Presigned URLs]
        G[Frontend]
    end
    F --> G
    D --> B
    E --> B
    B --> F
    C --> E
```

Key Features
- Streaming transfer: Transfer files from URL to S3 without loading into memory
- Large file support: Handle 800MB+ files via multipart upload (10MB chunks)
- S3-compatible: Works with AWS S3, OVH Object Storage, MinIO, etc.
- Presigned URLs: Generate temporary access URLs
- Path security: Built-in validation against path traversal attacks
- Transfer tracking: Database records with status (PENDING → TRANSFERRING → DONE/FAILED)
Module Structure
The StorageModule exports three main services:
| Service | Purpose |
|---|---|
| `S3Service` | Low-level S3 operations (upload, delete, presigned URLs) |
| `FileTransferService` | URL → S3 streaming transfer with database tracking |
| `StorageFileRepository` | CRUD operations for StorageFile records |
Configuration
Environment Variables
Add these variables to your .env file:
```bash
# S3-compatible storage configuration
S3_ENDPOINT=https://s3.gra.cloud.ovh.net # or your S3 endpoint
S3_REGION=gra
S3_BUCKET=my-app-storage
S3_ACCESS_KEY_ID=your-access-key
S3_SECRET_ACCESS_KEY=your-secret-key
```

Conditional Module Loading
The StorageModule is optional and only loaded when `S3_ENDPOINT` is configured. Add this to your `app.module.ts`:
```typescript
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { StorageModule } from './storage/storage.module';

@Module({
  imports: [
    ConfigModule.forRoot(),
    // Conditionally import StorageModule only when S3 is configured
    ...(process.env.S3_ENDPOINT ? [StorageModule] : []),
  ],
})
export class AppModule {}
```

Services
S3Service
Low-level S3 operations with built-in path validation.
Location: apps/api/src/storage/s3.service.ts
```typescript
@Injectable()
export class S3Service {
  // Upload a stream (multipart, 10MB chunks)
  async uploadStream(options: {
    path: string;
    body: Readable;
    contentType: string;
    bucket?: string;
  }): Promise<{ path: string; bucket: string; size: number | null }>;

  // Generate a presigned URL (instantaneous, ~1ms)
  async getSignedUrl(path: string, expiresIn?: number, bucket?: string): Promise<string>;

  // Delete an object
  async deleteObject(path: string, bucket?: string): Promise<void>;

  // Get object metadata
  async headObject(
    path: string,
    bucket?: string
  ): Promise<{
    contentLength: number;
    contentType: string;
    lastModified: Date;
  }>;

  // Get the default bucket name
  getDefaultBucket(): string;
}
```

Usage Examples
```typescript
import { Injectable } from '@nestjs/common';
import { Readable } from 'node:stream';
import { S3Service } from '../storage/s3.service';

@Injectable()
export class MyService {
  constructor(private readonly s3Service: S3Service) {}

  async uploadFile(nodeReadableStream: Readable) {
    // Upload a stream
    const result = await this.s3Service.uploadStream({
      path: 'uploads/workspace-id/file.mp4',
      body: nodeReadableStream,
      contentType: 'video/mp4',
    });
    // result: { path: '...', bucket: '...', size: 12345 }

    // Generate a presigned URL (1 hour)
    const url = await this.s3Service.getSignedUrl('uploads/workspace-id/file.mp4', 3600);

    // Delete the object
    await this.s3Service.deleteObject('uploads/workspace-id/file.mp4');

    // Check that the object exists and get its size
    const meta = await this.s3Service.headObject('uploads/workspace-id/file.mp4');
    console.log(`Size: ${meta.contentLength} bytes`);
  }
}
```

FileTransferService
High-level service for transferring files from a source URL to S3 with database tracking.
Location: apps/api/src/storage/file-transfer.service.ts
```typescript
@Injectable()
export class FileTransferService {
  // Transfer from URL to S3 (creates a StorageFile record)
  async createAndTransfer(options: {
    sourceUrl: string;
    workspaceId: string;
    path: string;
    type: string;
    mimeType?: string;
  }): Promise<StorageFile>;

  // Generate a presigned URL for a StorageFile
  async getSignedUrl(storageFile: StorageFile, expiresIn?: number): Promise<string>;

  // Delete from both S3 and the database
  async deleteFile(storageFile: StorageFile, hardDelete?: boolean): Promise<void>;
}
```

Transfer Flow
- Create StorageFile record (`PENDING`)
- Update status to `TRANSFERRING`
- Stream: Source URL → HTTP → S3 Multipart Upload
- Update StorageFile record (`DONE` or `FAILED`)
- No buffering in memory; handles 800MB+ files (see the sketch below)
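To make this concrete, here is a minimal sketch of how such a transfer can be implemented. It is not the actual service code: the repository method names (`create`, `update`) and import paths are assumptions, and it relies on the Node 18+ global `fetch`.

```typescript
import { Injectable } from '@nestjs/common';
import { Readable } from 'node:stream';
import { S3Service } from './s3.service';
import { StorageFileRepository } from './storage-file.repository'; // assumed path

@Injectable()
export class FileTransferSketch {
  constructor(
    private readonly s3Service: S3Service,
    private readonly storageFiles: StorageFileRepository
  ) {}

  async createAndTransfer(options: {
    sourceUrl: string;
    workspaceId: string;
    path: string;
    type: string;
    mimeType?: string;
  }) {
    // 1. Create the StorageFile record (transferStatus defaults to PENDING)
    const file = await this.storageFiles.create(options); // hypothetical method

    try {
      // 2. Mark the transfer as in progress
      await this.storageFiles.update(file.id, { transferStatus: 'TRANSFERRING' });

      // 3. Stream the HTTP response body straight into a multipart upload
      const response = await fetch(options.sourceUrl);
      if (!response.ok || !response.body) {
        throw new Error(`Fetch failed with status ${response.status}`);
      }
      const result = await this.s3Service.uploadStream({
        path: options.path,
        body: Readable.fromWeb(response.body as any), // web stream -> Node stream
        contentType: options.mimeType ?? 'application/octet-stream',
      });

      // 4. Record success and the final size...
      return this.storageFiles.update(file.id, { transferStatus: 'DONE', size: result.size });
    } catch (error) {
      // ...or record the failure for later inspection
      await this.storageFiles.update(file.id, {
        transferStatus: 'FAILED',
        transferError: String(error),
      });
      throw error;
    }
  }
}
```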
Usage Examples
```typescript
import { Injectable } from '@nestjs/common';
import { FileTransferService } from '../storage/file-transfer.service';

@Injectable()
export class RecordingService {
  constructor(private readonly fileTransferService: FileTransferService) {}

  async transferRecordingVideo(recording: Recording) {
    // Transfer the video from an external URL to S3
    const storageFile = await this.fileTransferService.createAndTransfer({
      sourceUrl: recording.externalVideoUrl,
      workspaceId: recording.workspaceId,
      path: `recordings/${recording.workspaceId}/${recording.id}.mp4`,
      type: 'recording-video',
      mimeType: 'video/mp4',
    });
    // storageFile.transferStatus is now 'DONE'
    // storageFile.size contains the file size

    // Generate a presigned URL for the frontend
    const url = await this.fileTransferService.getSignedUrl(storageFile);

    // Delete the file (soft delete by default)
    await this.fileTransferService.deleteFile(storageFile);

    // Hard delete (permanent)
    await this.fileTransferService.deleteFile(storageFile, true);
  }
}
```

Database Model
StorageFile Schema
```prisma
enum FileTransferStatus {
  PENDING
  TRANSFERRING
  DONE
  FAILED
}

model StorageFile {
  // Fields
  id             String             @id @default(uuid()) @db.Uuid
  workspaceId    String             @map("workspace_id") @db.Uuid
  type           String             // e.g., "recording-video", "user-upload", "avatar"
  path           String             // S3 object path
  bucket         String
  mimeType       String?            @map("mime_type")
  size           BigInt?
  transferStatus FileTransferStatus @default(PENDING) @map("transfer_status")
  transferError  String?            @map("transfer_error")

  // Relations
  workspace Workspace @relation(fields: [workspaceId], references: [id], onDelete: Cascade)

  // Timestamps
  createdAt DateTime  @default(now()) @map("created_at")
  deletedAt DateTime? @map("deleted_at")

  @@map("core_storage_file")
}
```

Security
Path Validation
All object paths are validated before use to block path traversal attacks (for example, `..` segments) and other malformed keys.
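For illustration, a check along these lines; this is a sketch, and the actual rules are implemented in `apps/api/src/storage/s3.service.ts` and may differ:

```typescript
// Illustrative sketch only: the validation enforced by S3Service may be stricter.
function assertSafePath(path: string): void {
  if (!path || path.startsWith('/') || path.includes('\\')) {
    throw new Error(`Invalid storage path: ${path}`);
  }
  // Reject "." and ".." segments to prevent path traversal
  if (path.split('/').some((segment) => segment === '.' || segment === '..')) {
    throw new Error(`Path traversal detected: ${path}`);
  }
}
```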
Presigned URL Security
- Default expiry: 1 hour (3600 seconds)
- URLs are cryptographically signed
- Cannot be forged without S3 credentials
- Frontend can display files without exposing S3 credentials
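Since URLs are signed server-side, a common pattern is to hand them to the frontend through an authenticated endpoint. A hypothetical controller sketch (the route, guards, and ownership checks are application-specific, not part of this module):

```typescript
import { Controller, Get, Query } from '@nestjs/common';
import { S3Service } from '../storage/s3.service';

// Hypothetical endpoint: in a real app, restrict access and validate ownership.
@Controller('files')
export class FilesController {
  constructor(private readonly s3Service: S3Service) {}

  @Get('signed-url')
  async getSignedUrl(@Query('path') path: string): Promise<{ url: string }> {
    // Short-lived URL (15 minutes) the frontend can use directly against S3
    return { url: await this.s3Service.getSignedUrl(path, 15 * 60) };
  }
}
```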
CORS Configuration
CORS must be configured on your S3 bucket to allow frontend access via presigned URLs.
Configuration Files
Located in infra/s3/:
| File | Purpose |
|---|---|
| `cors-dev.json` | Local development (localhost:3000) |
| `cors-staging.json` | Staging environment |
| `cors-prod.json` | Production environment |
Example CORS Configuration
```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["http://localhost:3000", "http://127.0.0.1:3000"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "ExposeHeaders": ["Content-Length", "Content-Type", "ETag"],
      "MaxAgeSeconds": 3600
    }
  ]
}
```

Apply CORS Configuration
```bash
# Development
aws s3api put-bucket-cors \
  --bucket your-bucket-dev \
  --cors-configuration file://infra/s3/cors-dev.json \
  --endpoint-url https://your-s3-endpoint.com

# Verify
aws s3api get-bucket-cors \
  --bucket your-bucket-dev \
  --endpoint-url https://your-s3-endpoint.com
```

Usage in Other Modules
Importing the Module
```typescript
import { Module } from '@nestjs/common';
import { StorageModule } from '../storage/storage.module';
import { MyService } from './my.service';

@Module({
  // MyService can now inject S3Service, FileTransferService, or StorageFileRepository
  imports: [StorageModule],
  providers: [MyService],
})
export class MyModule {}
```

Complete Example: Video Recording Transfer
```typescript
import { Injectable, Logger } from '@nestjs/common';
import { OnEvent } from '@nestjs/event-emitter';
import { FileTransferService } from '../storage/file-transfer.service';

@Injectable()
export class RecordingTransferListener {
  private readonly logger = new Logger(RecordingTransferListener.name);

  constructor(
    private readonly fileTransferService: FileTransferService,
    private readonly recordingRepository: RecordingRepository // application-specific repository
  ) {}

  @OnEvent('recording.ready')
  async handleRecordingReady(event: { recordingId: string }) {
    const recording = await this.recordingRepository.findById(event.recordingId);
    try {
      // Transfer the video to S3
      const storageFile = await this.fileTransferService.createAndTransfer({
        sourceUrl: recording.externalVideoUrl,
        workspaceId: recording.workspaceId,
        path: `recordings/${recording.workspaceId}/${recording.id}.mp4`,
        type: 'recording-video',
        mimeType: 'video/mp4',
      });

      // Link the StorageFile to the Recording
      await this.recordingRepository.update(recording.id, {
        videoFileId: storageFile.id,
      });

      this.logger.log(`Video transferred: ${recording.id}`);
    } catch (error) {
      this.logger.error(`Transfer failed: ${recording.id}`, error);
    }
  }
}
```

Troubleshooting
CORS Errors
Problem: Files fail to load with CORS error in browser console.
Solution:
- Verify CORS is applied: `aws s3api get-bucket-cors ...`
- Check that the origin matches your frontend URL exactly
- Re-apply the CORS config if needed
403 Forbidden on Presigned URLs
Problem: Presigned URL returns 403.
Causes:
- URL has expired (default: 1 hour)
- Bucket policy doesn't allow the operation
- Object doesn't exist
Solution: Generate a fresh presigned URL or check object existence with `headObject()`.
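A sketch of that recovery pattern, using the `S3Service` API documented above:

```typescript
import { S3Service } from '../storage/s3.service';

// If headObject() throws, the object is missing (or inaccessible);
// otherwise issue a fresh URL with a new expiry window.
async function refreshSignedUrl(s3Service: S3Service, path: string): Promise<string> {
  const meta = await s3Service.headObject(path); // throws if the object doesn't exist
  console.log(`Object exists (${meta.contentLength} bytes); issuing a fresh URL`);
  return s3Service.getSignedUrl(path, 3600); // fresh 1-hour expiry
}
```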
Large File Memory Issues
Problem: Out of memory when transferring large files.
Solution: This shouldn't happen with the streaming implementation. If it does:
- Verify you're using `FileTransferService.createAndTransfer()` (streaming)
- Don't buffer the entire response in memory
- Check for memory leaks in your code
Performance Considerations
Multipart Upload Configuration
The S3Service uses optimized settings for large files:
```typescript
const upload = new Upload({
  client: this.client,
  params: { Bucket, Key, Body, ContentType },
  partSize: 10 * 1024 * 1024, // 10MB parts
  queueSize: 4, // 4 concurrent part uploads
});
```

- Part size: 10MB (minimum 5MB, recommended 10-100MB)
- Concurrency: 4 parts uploaded simultaneously
- Result: ~40MB/s upload speed on good connections
Presigned URL Generation
Presigned URL generation is instantaneous (~1ms):
- No S3 API call required
- URL is computed locally using credentials
- Safe to call frequently (e.g., on each page load)
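For example, signing a URL for every file in a list response stays cheap because no network round-trip is involved. A sketch, assuming a Prisma-generated `StorageFile` type:

```typescript
import { StorageFile } from '@prisma/client';
import { FileTransferService } from '../storage/file-transfer.service';

// Signing is local computation, so mapping over many files per request is fine.
async function signAll(fileTransferService: FileTransferService, files: StorageFile[]) {
  return Promise.all(files.map((file) => fileTransferService.getSignedUrl(file)));
}
```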