Maximum number of parts per upload constraint (S3) #737

@avoidik


Hello,

I would like to understand whether there is any safety logic to prevent more than 10_000 (ref. link) multipart chunks from being uploaded to an AWS S3 bucket. It seems I'm hitting an edge case where I have more than 10_000 chunks, and as a result I get the following error:

:body              => "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>InvalidArgument</Code><Message>Part number must be an integer between 1 and 10000, inclusive</Message><ArgumentName>partNumber</ArgumentName><ArgumentValue>10001</ArgumentValue><RequestId>removed</RequestId><HostId>removed</HostId></Error>"
:cookies           => [
]
:headers           => {
  "Connection"       => "close"
  "Content-Type"     => "application/xml"
  "Date"             => "Tue, 06 May 2025 04:14:43 GMT"
  "Server"           => "AmazonS3"
  "x-amz-id-2"       => "removed"
  "x-amz-request-id" => "removed"
}
:host              => "removed.s3.eu-central-1.amazonaws.com"
:local_address     => "x.x.x.x"
:local_port        => 49484
:method            => "PUT"
:omit_default_port => false
:path              => "/huge_file.tar"
:port              => 443
:query             => {
  "partNumber" => 10001
  "uploadId"   => "removed"
}
:reason_phrase     => "Bad Request"
:remote_ip         => "y.y.y.y"
:scheme            => "https"
:status            => 400
:status_line       => "HTTP/1.1 400 Bad Request\r\n"

I completely understand that it's possible to stay within the 10_000-part limit by adjusting the multipart_chunk_size property. However, it is not always feasible to know upfront how much data is going to be uploaded.
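
For illustration, here is a minimal Ruby sketch of how a chunk size could be derived when the total object size is known before the upload starts, so that the part count stays within S3's hard limit of 10_000 parts. The constant names and the chunk_size_for helper are hypothetical, not part of any library API:

# Minimal sketch: pick a chunk size that keeps the part count within
# S3's hard limit of 10_000 parts, given a known total object size.
# (Names below are illustrative, not part of any library API.)

S3_MAX_PARTS     = 10_000
S3_MIN_PART_SIZE = 5 * 1024 * 1024 # 5 MiB, the S3 minimum for all parts except the last

def chunk_size_for(total_bytes, preferred_chunk_size)
  # Smallest chunk size that still fits the object into 10_000 parts,
  # never smaller than the preferred size or the S3 minimum part size.
  required = (total_bytes.to_f / S3_MAX_PARTS).ceil
  [preferred_chunk_size, S3_MIN_PART_SIZE, required].max
end

# Example: a 100 GiB file with a preferred 5 MiB chunk size would need
# 20_480 parts, so the chunk size is bumped to roughly 10.24 MiB instead.
puts chunk_size_for(100 * 1024**3, 5 * 1024 * 1024)

This does not help when the total size is genuinely unknown at the start of the upload, which is exactly the case described above.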

Cheers
