
Object Storage

Every project gets S3-compatible object storage automatically. No buckets to create, no credentials to manage. Deploy your app and start storing files.

How it works

When you deploy an app, the platform automatically injects S3 environment variables into your container. These are standard S3-compatible credentials that work with any S3 SDK in any language. There is nothing to configure.

Auto-injected environment variables
S3_ENDPOINT The S3-compatible API endpoint for server-side SDK calls
S3_PUBLIC_ENDPOINT Optional browser-safe endpoint for presigned URLs. Use this when you want uploads and downloads to stay on your app domain.
S3_ACCESS_KEY Your project-specific access key
S3_SECRET_KEY Your project-specific secret key
S3_BUCKET The bucket name to use for all operations
S3_REGION The region identifier (typically "auto")

These variables are managed by the platform and cannot be overridden. They are unique per project and scoped so that each app can only access its own files. In most code you only need S3_ENDPOINT. Use S3_PUBLIC_ENDPOINT when generating browser-facing presigned URLs and you want requests to stay on your app's own host.

Quick start

Install your language's S3 SDK and use the auto-injected environment variables. That's it.

$ npm install @aws-sdk/client-s3 @aws-sdk/lib-storage
JavaScript / TypeScript
import { S3Client, GetObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

const s3 = new S3Client({
  endpoint: process.env.S3_ENDPOINT,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY!,
    secretAccessKey: process.env.S3_SECRET_KEY!,
  },
  region: process.env.S3_REGION,
  forcePathStyle: true,
});

// Upload a file (streams in chunks — works for any file size)
const upload = new Upload({
  client: s3,
  params: {
    Bucket: process.env.S3_BUCKET,
    Key: 'uploads/avatar.png',
    Body: readableStream, // or Buffer, Uint8Array, string
    ContentType: 'image/png',
  },
});
await upload.done();

// Download a file
const result = await s3.send(new GetObjectCommand({
  Bucket: process.env.S3_BUCKET,
  Key: 'uploads/avatar.png',
}));
const data = await result.Body?.transformToByteArray();

// Delete a file
await s3.send(new DeleteObjectCommand({
  Bucket: process.env.S3_BUCKET,
  Key: 'uploads/avatar.png',
}));

Browser uploads

For user-facing file uploads, use presigned URLs so the browser uploads directly to storage. The file never passes through your app server, which keeps memory usage low and supports files of any size.

Install the presigner alongside the S3 SDK:

$ npm install @aws-sdk/s3-request-presigner
Server — generate a presigned upload URL
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// Use a browser-facing endpoint for presigned URLs when your platform provides one.
// Fall back to S3_ENDPOINT when the same endpoint is used everywhere.
const s3 = new S3Client({
  endpoint: process.env.S3_PUBLIC_ENDPOINT || process.env.S3_ENDPOINT,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY!,
    secretAccessKey: process.env.S3_SECRET_KEY!,
  },
  region: process.env.S3_REGION,
  forcePathStyle: true,
});

// Generate a presigned PUT URL (expires in 1 hour)
const url = await getSignedUrl(s3, new PutObjectCommand({
  Bucket: process.env.S3_BUCKET,
  Key: 'uploads/photo.png',
  ContentType: 'image/png',
}), { expiresIn: 3600 });

// Return the URL to the browser (from inside your route handler)
return Response.json({ uploadUrl: url });
Browser — upload directly to storage
// Browser: upload directly to storage using the presigned URL
const { uploadUrl } = await fetch('/api/upload-url', {
  method: 'POST',
  body: JSON.stringify({ filename: 'photo.png' }),
}).then(r => r.json());

await fetch(uploadUrl, {
  method: 'PUT',
  headers: { 'Content-Type': 'image/png' },
  body: file, // File, Blob, or ArrayBuffer
});

Presigned URLs use the same S3 API and credentials as every other operation. If your platform provides a browser-facing endpoint on your app's own host, use that via S3_PUBLIC_ENDPOINT. If it does not, use the regular public S3 endpoint and configure bucket CORS as needed.

Supported operations

The storage API supports the core S3 operations. Use any S3-compatible SDK — the platform handles authentication and isolation automatically.

Method Path Description
PUT /{bucket}/{key} Upload a file
GET /{bucket}/{key} Download a file
HEAD /{bucket}/{key} Get file metadata
DELETE /{bucket}/{key} Delete a file
GET /{bucket}?list-type=2 List files

Local development

Storage environment variables are injected automatically when your app is deployed. For local development, you can run MinIO as a local S3-compatible server. Your app code stays exactly the same.

1. Start MinIO

docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  minio/minio server /data --console-address ":9001"

2. Create the bucket

Use the MinIO console at http://localhost:9001 or the AWS CLI:

$ aws --endpoint-url http://localhost:9000 s3 mb s3://storage

3. Add these to your .env

S3_ENDPOINT=http://localhost:9000
S3_PUBLIC_ENDPOINT=http://localhost:9000
S3_ACCESS_KEY=minioadmin
S3_SECRET_KEY=minioadmin
S3_BUCKET=storage
S3_REGION=auto

That's it. Your app will talk to the local MinIO instance using the same S3 SDK code that runs in production. If you generate presigned URLs locally, set S3_PUBLIC_ENDPOINT to the same value unless you have a separate browser-facing storage domain.

Good to know

Zero configuration
Environment variables are injected automatically on every deploy. No setup required.
Project isolation
Each project has its own credential scope. Apps cannot access files from other projects.
Standard S3 API
Use any S3-compatible SDK or library. Server-side code is plain S3, and browser presigned URLs are still just standard S3 signatures.
No vendor lock-in
On AWS or any other S3 provider you would use the same SDK calls, buckets, keys, and presigning flow. The only provider-specific detail is which endpoint you point the client at.
Streaming support
Upload and download files of any size. The storage layer streams data without buffering entire files in memory.
Flat key namespace
Object keys can contain slashes to simulate directories (e.g. uploads/images/photo.jpg) but the storage is a flat key-value store.
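The directory illusion is just prefix grouping: given a prefix and a "/" delimiter, the immediate "subfolders" are the distinct key segments after the prefix, which mirrors what S3's Delimiter/CommonPrefixes listing returns server-side. A small self-contained sketch of that grouping:

```typescript
// Derive the immediate "subdirectories" under a prefix from flat object keys,
// the same grouping S3 performs server-side with Delimiter: '/'.
function commonPrefixes(keys: string[], prefix: string): string[] {
  const out = new Set<string>();
  for (const key of keys) {
    if (!key.startsWith(prefix)) continue;
    const rest = key.slice(prefix.length);
    const slash = rest.indexOf('/');
    // Keys with no further slash are plain objects, not "folders".
    if (slash !== -1) out.add(prefix + rest.slice(0, slash + 1));
  }
  return [...out].sort();
}

const keys = [
  'uploads/images/photo.jpg',
  'uploads/images/logo.png',
  'uploads/docs/readme.pdf',
  'uploads/note.txt',
];
console.log(commonPrefixes(keys, 'uploads/'));
// → [ 'uploads/docs/', 'uploads/images/' ]
```

In real code you would let the server do this by passing Delimiter: '/' to ListObjectsV2 and reading CommonPrefixes, but the flat-namespace model is the same either way.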

Ready to start?

Deploy your app and start storing files. No configuration needed.

serverlite.com