Hello,
I would like to understand whether there is any safety logic to prevent more than 10_000 (ref. link) multipart chunks from being uploaded to an AWS S3 bucket. It seems I'm hitting an edge-case scenario where I end up with more than 10_000 chunks, and as a result I get the following error:
:body => "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>InvalidArgument</Code><Message>Part number must be an integer between 1 and 10000, inclusive</Message><ArgumentName>partNumber</ArgumentName><ArgumentValue>10001</ArgumentValue><RequestId>removed</RequestId><HostId>removed</HostId></Error>"
:cookies => [
]
:headers => {
"Connection" => "close"
"Content-Type" => "application/xml"
"Date" => "Tue, 06 May 2025 04:14:43 GMT"
"Server" => "AmazonS3"
"x-amz-id-2" => "removed"
"x-amz-request-id" => "removed"
}
:host => "removed.s3.eu-central-1.amazonaws.com"
:local_address => "x.x.x.x"
:local_port => 49484
:method => "PUT"
:omit_default_port => false
:path => "/huge_file.tar"
:port => 443
:query => {
"partNumber" => 10001
"uploadId" => "removed"
}
:reason_phrase => "Bad Request"
:remote_ip => "y.y.y.y"
:scheme => "https"
:status => 400
:status_line => "HTTP/1.1 400 Bad Request\r\n"
I completely understand that it's possible to stay within the 10_000 limit by adjusting the multipart_chunk_size property. However, it is not always feasible to know upfront how much data is going to be uploaded.
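For reference, when the total size is known in advance, this is roughly the workaround I can apply on the calling side. It's only an illustrative sketch: chunk_size_for is a hypothetical helper, directory is assumed to be an already-fetched Fog directory, and the exact way multipart_chunk_size is passed may differ depending on how the upload is invoked.

```ruby
# Illustrative sketch: pick a multipart_chunk_size that keeps the part count
# within S3's hard limit of 10_000 parts. This only works when the total
# size is known before the upload starts, which is exactly the case this
# issue is not always able to rely on.
MAX_PARTS      = 10_000
MIN_CHUNK_SIZE = 5 * 1024 * 1024 # S3 requires every part except the last to be at least 5 MiB

def chunk_size_for(total_bytes, preferred_chunk_size)
  # Smallest chunk size that still fits the whole object into MAX_PARTS parts
  # (integer ceiling division), never going below S3's minimum part size.
  required = (total_bytes + MAX_PARTS - 1) / MAX_PARTS
  [preferred_chunk_size, required, MIN_CHUNK_SIZE].max
end

total = File.size('huge_file.tar')

# 'directory' is assumed to be a bucket already looked up via fog-aws.
directory.files.create(
  key:                  'huge_file.tar',
  body:                 File.open('huge_file.tar'),
  multipart_chunk_size: chunk_size_for(total, 10 * 1024 * 1024)
)
```

But again, this only helps when the size is known ahead of time, which is why some safety logic (or at least a clearer error) on the library side would be useful.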
Cheers