Object Storage
Every project gets S3-compatible object storage automatically. No buckets to create, no credentials to manage. Deploy your app and start storing files.
How it works
When you deploy an app, the platform automatically injects S3 environment variables into your container. These are standard S3-compatible credentials that work with any S3 SDK in any language. There is nothing to configure.
S3_ENDPOINT: The S3-compatible API endpoint for server-side SDK calls
S3_PUBLIC_ENDPOINT: Optional browser-safe endpoint for presigned URLs. Use this when you want uploads and downloads to stay on your app domain.
S3_ACCESS_KEY: Your project-specific access key
S3_SECRET_KEY: Your project-specific secret key
S3_BUCKET: The bucket name to use for all operations
S3_REGION: The region identifier (typically "auto")

These variables are managed by the platform and cannot be overridden. They are unique per project and scoped so that each app can only access its own files. In most code you only need S3_ENDPOINT. Use S3_PUBLIC_ENDPOINT when generating browser-facing presigned URLs and you want requests to stay on your app's own host.
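Because the variables only exist after deployment (or once your local .env is in place), a small startup check can fail fast with a clear message instead of a cryptic SDK error. A minimal sketch; the requireEnv helper is illustrative, not part of the platform:

// Fail fast at startup if a storage variable is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing environment variable: ${name}`);
  return value;
}

const endpoint = requireEnv('S3_ENDPOINT');
const bucket = requireEnv('S3_BUCKET');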
Quick start
Install your language's S3 SDK and use the auto-injected environment variables. That's it.
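For Node.js, the quick start below uses the AWS SDK v3 client plus its multipart upload helper:

npm install @aws-sdk/client-s3 @aws-sdk/lib-storage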
import { S3Client, GetObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

const s3 = new S3Client({
  endpoint: process.env.S3_ENDPOINT,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY!,
    secretAccessKey: process.env.S3_SECRET_KEY!,
  },
  region: process.env.S3_REGION,
  forcePathStyle: true,
});

// Upload a file (streams in chunks, so it works for any file size)
const upload = new Upload({
  client: s3,
  params: {
    Bucket: process.env.S3_BUCKET,
    Key: 'uploads/avatar.png',
    Body: readableStream, // or Buffer, Uint8Array, string
    ContentType: 'image/png',
  },
});
await upload.done();

// Download a file
const result = await s3.send(new GetObjectCommand({
  Bucket: process.env.S3_BUCKET,
  Key: 'uploads/avatar.png',
}));
const data = await result.Body?.transformToByteArray();

// Delete a file
await s3.send(new DeleteObjectCommand({
  Bucket: process.env.S3_BUCKET,
  Key: 'uploads/avatar.png',
}));

Browser uploads
For user-facing file uploads, use presigned URLs so the browser uploads directly to storage. The file never passes through your app server, which keeps memory usage low and supports files of any size.
Install the presigner alongside the S3 SDK:
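npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner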
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// Use a browser-facing endpoint for presigned URLs when your platform provides one.
// Fall back to S3_ENDPOINT when the same endpoint is used everywhere.
const s3 = new S3Client({
  endpoint: process.env.S3_PUBLIC_ENDPOINT || process.env.S3_ENDPOINT,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY!,
    secretAccessKey: process.env.S3_SECRET_KEY!,
  },
  region: process.env.S3_REGION,
  forcePathStyle: true,
});

// Generate a presigned PUT URL (expires in 1 hour)
const url = await getSignedUrl(s3, new PutObjectCommand({
  Bucket: process.env.S3_BUCKET,
  Key: 'uploads/photo.png',
  ContentType: 'image/png',
}), { expiresIn: 3600 });

// Return the URL to the browser
return Response.json({ uploadUrl: url });

// Browser: upload directly to storage using the presigned URL
const { uploadUrl } = await fetch('/api/upload-url', {
  method: 'POST',
  body: JSON.stringify({ filename: 'photo.png' }),
}).then(r => r.json());

await fetch(uploadUrl, {
  method: 'PUT',
  headers: { 'Content-Type': 'image/png' },
  body: file, // File, Blob, or ArrayBuffer
});

Presigned URLs use the same S3 API and credentials as every other operation. If your platform provides a browser-facing endpoint on your app's own host, use that via S3_PUBLIC_ENDPOINT. If it does not, use the regular public S3 endpoint and configure bucket CORS as needed.
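Presigned downloads work the same way: sign a GetObjectCommand instead of a PutObjectCommand. A sketch, reusing the client configuration from above:

import { GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// Generate a presigned GET URL the browser can fetch directly (expires in 1 hour)
const downloadUrl = await getSignedUrl(s3, new GetObjectCommand({
  Bucket: process.env.S3_BUCKET,
  Key: 'uploads/photo.png',
}), { expiresIn: 3600 });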
Supported operations
The storage API supports the core S3 operations. Use any S3-compatible SDK — the platform handles authentication and isolation automatically.
PUT /{bucket}/{key}: Upload a file
GET /{bucket}/{key}: Download a file
HEAD /{bucket}/{key}: Get file metadata
DELETE /{bucket}/{key}: Delete a file
GET /{bucket}?list-type=2: List files
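For example, the list operation maps to ListObjectsV2 in the SDK. A sketch, reusing the s3 client from the quick start:

import { ListObjectsV2Command } from '@aws-sdk/client-s3';

// List keys under a prefix (returns up to 1,000 per page)
const listed = await s3.send(new ListObjectsV2Command({
  Bucket: process.env.S3_BUCKET,
  Prefix: 'uploads/',
}));
for (const obj of listed.Contents ?? []) {
  console.log(obj.Key, obj.Size);
}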
Local development

Storage environment variables are injected automatically when your app is deployed. For local development, you can run MinIO as a local S3-compatible server. Your app code stays exactly the same.
1. Start MinIO
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  minio/minio server /data --console-address ":9001"
2. Create the bucket
Use the MinIO console at http://localhost:9001 or the AWS CLI:
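# Uses the MinIO root credentials from step 1; creates the bucket named in step 3.
AWS_ACCESS_KEY_ID=minioadmin AWS_SECRET_ACCESS_KEY=minioadmin \
  aws --endpoint-url http://localhost:9000 s3 mb s3://storage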
3. Add these to your .env
S3_ENDPOINT=http://localhost:9000
S3_PUBLIC_ENDPOINT=http://localhost:9000
S3_ACCESS_KEY=minioadmin
S3_SECRET_KEY=minioadmin
S3_BUCKET=storage
S3_REGION=auto
That's it. Your app will talk to the local MinIO instance using the same S3 SDK code that
runs in production. If you generate presigned URLs locally, set S3_PUBLIC_ENDPOINT to the same value unless you have a separate browser-facing storage domain.
Ready to start?
Deploy your app and start storing files. No configuration needed.