Amazon S3: Attaching a File in Salesforce

Amazon S3 & Salesforce

Last week I covered how to send an attachment from Salesforce to Jira.  This week we’ll cover how to attach a file from Salesforce to the Amazon S3 cloud.  Unlike the Jira upload, we will not be associating these files with a specific case; instead, we will upload them to a generic bucket.  This can be changed by modifying how the filename is generated in the code below.

Attachment attach = [
    SELECT Body, Name, ContentType
    FROM Attachment
    LIMIT 1
];

String attachmentBody = EncodingUtil.base64Encode(attach.Body);
String formattedDateString = Datetime.now().formatGMT('EEE, dd MMM yyyy HH:mm:ss z');
String key = 'key_goes_here';
String secret = 'secret_goes_here';
String bucketname = 'mybucket-salesforce';
String host = 's3.amazonaws.com'; // the standard S3 endpoint
String method = 'PUT';
String filename = attach.Id + '-' + attach.Name;

HttpRequest req = new HttpRequest();
req.setMethod(method);
req.setEndpoint('https://' + bucketname + '.' + host + '/' + bucketname + '/' + filename);
req.setHeader('Host', bucketname + '.' + host);
req.setHeader('Content-Length', String.valueOf(attachmentBody.length()));
req.setHeader('Content-Encoding', 'UTF-8');
req.setHeader('Content-type', attach.ContentType);
req.setHeader('Connection', 'keep-alive');
req.setHeader('Date', formattedDateString);
// Note: S3 itself looks for 'x-amz-acl' rather than 'ACL'; if you switch to the
// x-amz-acl header, it must also be included in the string to sign below.
req.setHeader('ACL', 'public-read');
req.setBody(attachmentBody);

String stringToSign = 'PUT\n\n' +
    attach.ContentType + '\n' +
    formattedDateString + '\n' +
    '/' + bucketname + '/' + bucketname + '/' + filename;

// The URL-encoded/decoded copies are left here for reference; they are not used below.
String encodedStringToSign = EncodingUtil.urlEncode(stringToSign, 'UTF-8');
Blob mac = Crypto.generateMac('HMACSHA1', Blob.valueOf(stringToSign), Blob.valueOf(secret));
String signed = EncodingUtil.base64Encode(mac);
String authHeader = 'AWS' + ' ' + key + ':' + signed;
String decoded = EncodingUtil.urlDecode(encodedStringToSign, 'UTF-8');
req.setHeader('Authorization', authHeader);

Http http = new Http();
HttpResponse res = http.send(req);
System.debug('*Resp: ' + res.getBody());
System.debug('RESPONSE STRING: ' + res.toString());
System.debug('RESPONSE STATUS: ' + res.getStatus());
System.debug('STATUS_CODE: ' + res.getStatusCode());

Most of this code is a pretty standard web callout, but the key takeaways are:

  • The attachment body is base64 encoded
  • Our Amazon credential and host information
  • The filename (more on that below)
  • The endpoint we are PUTting our attachment to
  • The required headers
  • The signing of the request to send to Amazon
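
The string-to-sign and HMAC-SHA1 steps above can be reproduced outside of Salesforce, which is handy when debugging signature mismatches. A minimal Python sketch, using the same placeholder credential and bucket values as the Apex code (the filename and content type here are made up for illustration):

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

# Placeholder values mirroring the Apex variables above.
secret = 'secret_goes_here'
key = 'key_goes_here'
bucketname = 'mybucket-salesforce'
content_type = 'image/jpeg'
filename = '00P000000000001-photo.jpg'  # hypothetical Id + '-' + Name

# RFC 1123 GMT date, the same shape as 'EEE, dd MMM yyyy HH:mm:ss z' in Apex.
formatted_date = formatdate(usegmt=True)

# Signature Version 2 string-to-sign: VERB, blank MD5, type, date, resource.
string_to_sign = ('PUT\n\n' +
                  content_type + '\n' +
                  formatted_date + '\n' +
                  '/' + bucketname + '/' + bucketname + '/' + filename)

# HMAC-SHA1 over the string-to-sign, base64 encoded -- the same operation
# Crypto.generateMac + EncodingUtil.base64Encode perform in Apex.
mac = hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1).digest()
signature = base64.b64encode(mac).decode()

auth_header = 'AWS ' + key + ':' + signature
print(auth_header)
```

If the signature produced here matches the one Salesforce generates for the same inputs, any remaining errors are in the headers or endpoint rather than in the signing.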

The filename here is particularly important.  Amazon S3 is closer to a filesystem than to how Salesforce records attachments.  If you PUT the same filename to S3 multiple times, you will simply overwrite the file each time.  This may be the desired result, but for the example above we are creating a unique (and reproducible) attachment filename.  From this we could simply add a formula field on the Attachment record that generates our Amazon S3 URL and then use that for display purposes.
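
As a sketch of what such a formula would produce, here is the URL shape, assuming the bucket is `mybucket-salesforce` and the host is the standard `s3.amazonaws.com` endpoint (the Attachment Id below is a made-up placeholder):

```python
# Hypothetical values matching the example above.
host = 's3.amazonaws.com'           # standard S3 endpoint (assumed)
bucketname = 'mybucket-salesforce'
attachment_id = '00P000000000001'   # placeholder Attachment Id
attachment_name = 'photo.jpg'

# Same concatenation as the Apex code: Id + '-' + Name, then the endpoint.
filename = attachment_id + '-' + attachment_name
url = 'https://' + bucketname + '.' + host + '/' + bucketname + '/' + filename
print(url)
```

Because the Id is stable, the same Attachment always maps back to the same S3 URL.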

Amazon S3: Why use it?

Being able to do this is all fine and dandy, but why use it over the standard Salesforce Attachments?  The biggest reason is that Amazon offers a better Content Delivery Network (CDN) for Amazon S3 content than Salesforce does for its attachments.  If you had a Salesforce Site on which you wanted to share attachment records, this would make your attachments load much faster for users around the world.

Additionally, you could re-use the code above and instead of storing the data in the Attachment object, simply upload directly from a Visualforce page to Amazon S3 and then store the URL somewhere for future use.

This entry was posted in Development, Salesforce.
  • Adam

    Hi, Thanks for the write up.

    I am very new into salesforce and development in general. Where would I put the above code to run it?

  • The code above would run in the developer console as anonymous apex. But it’s honestly not very useful in that state. It would be better to instead add it to a utility class (with the correct callout annotation) and then call this from a trigger or a visualforce page.

  • Adam

    Thank you!

  • Subrat kumar Ray

    Thanks a lot Patrick for writing this.

    Is there a way to retrieve the URL which can be used to directly open/download/view the file?
    I am unable to find a solution for this.

  • Whatever you end up setting the endpoint to will be the URL used to access the file. For our example above, if we had the file “photo.jpg” the URL would be “” You will need to make sure that you have the proper permissions set on the bucket, or you will need to modify the PUT request to set the permissions on the file specifically.

  • Subrat kumar Ray

    Hi Patrick,

    How can I upload a file larger than 25 MB to Amazon S3?


  • Because of the file limitations in Salesforce, you will need to write a custom uploader to upload directly into S3 instead of first loading it into a Salesforce Attachment. The code would be the same, you just change where the Body is coming from.

  • thomasemmerson

    Super useful, thanks. Is there any way to get around the SOAP API filesize limit if I’m uploading from my own machine through SF?

  • If you are referring to the 25mb limit for native attachments, then you will have to write a custom Visualforce page that does the upload. I’ve not done this, but I can look into it and maybe write a blog post about how to do it.

  • Pratibha

    I tried the above code but am receiving this error – Unable to tunnel through proxy. Proxy returns “HTTP/1.0 404 Not Found”

  • What endpoint are you using?

  • Vinay Singh

    I am trying to put content on S3 using AWS Signature Version 4. Will the required parameters and headers be the same as provided in your code, or do we need to do anything different?

  • Vaddi Ravi Teja

    “The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.” I am getting this error. I have given the key, secret, bucket, and host.

  • Phantom Phantom

    Hi Patrick,
    Two questions,
    1. I see that bucket name is repeated twice, is this correct?

    String stringToSign = 'PUT\n\n' +
    attach.ContentType + '\n' +
    formattedDateString + '\n' +
    '/' + bucketname + '/' + bucketname + '/' + filename;

    Similarly, I see that when you send the end point too where the bucket name is repeated. I tried looking up, where do I find the documentation for this on AWS, please advise.

    req.setEndpoint('https://' + bucketname + '.' + host + '/' + bucketname + '/' + filename);

    2. Why are you decoding the signature string? I don’t see that being used anywhere in the code snippet. Just curious.

  • Irene Gómez

    Hi Pratibha, Patrick, I have the same error. Did you resolve it?

  • katya guschina

    Hello Patrick,
    Thank you for the post, it’s really helpful.
    But this header does not work for me:
    req.setHeader('ACL', 'public-read'); – this seems to do nothing at all.
    However when I try to set the Amazon-specified header:
    req.setHeader('x-amz-acl', 'public-read'); I get the 403 error.
    Do you know anything about the access control header? Could you please share?

  • Mayukhman Pathak

    Hi Patrick,
    I am getting a System.CalloutException: Unexpected end of file from server.
    My code :
    public class ProductAmazon_RestClass {
        public void ProductAmazon_RestMethod(String folderName) {

            String binaryString = ProductAmazonIntegration.ProductAmazonIntegration();
            String key = '******************************';
            String secret = '******************************';
            String formattedDateString = Datetime.now().formatGMT('EEE, dd MMM yyyy HH:mm:ss z');
            String bucketname = 'myBucketName';
            String host = '';
            String method = 'PUT';
            String filename = 'Product/Product.json';

            // Request starts
            HttpRequest req = new HttpRequest();
            req.setEndpoint('https://' + bucketname + '.' + host + '/' + bucketname + '/' + filename);
            req.setHeader('Host', bucketname + '.' + host);
            req.setHeader('Content-Length', String.valueOf(binaryString.length()));
            req.setHeader('Content-Encoding', 'UTF-8');
            req.setHeader('Content-Type', 'application/json');
            req.setHeader('Date', formattedDateString);
            req.setHeader('ACL', 'public-read-write');
            String stringToSign = 'PUT\n\n' + 'application/json' + '\n' + '/' + bucketname + '/' + filename;
            String encodedStringToSign = EncodingUtil.urlEncode(stringToSign, 'UTF-8');
            String signed = createSignature(stringToSign, secret);
            String authHeader = 'AWS' + ' ' + key + ':' + signed;
            Http http = new Http();
            try {
                // Execute web service call
                HTTPResponse res = http.send(req);
                System.debug('RESPONSE STRING: ' + res.toString());
                System.debug('RESPONSE STATUS: ' + res.getStatus());
                System.debug('STATUS_CODE: ' + res.getStatusCode());
            } catch (System.CalloutException ae) {
                System.debug('AWS Service Callout Exception: ' + ae);
            }
        }

        public String createSignature(String canonicalBuffer, String secret) {
            Blob mac = Crypto.generateMac('HMACSHA1', Blob.valueOf(canonicalBuffer), Blob.valueOf(secret));
            return EncodingUtil.base64Encode(mac);
        }
    }

  • Nene

    Did you guys resolve this issue? I am having same issue.

  • Himanshu Gupta

    Hi Patrick

    It worked like brilliance… awesome man


  • Himanshu Gupta

    Hi There, Can you help with writing AWS v4 authorisation?

  • I haven’t had a chance to look at AWS v4. If I get some time, I’ll look into it and write another post and link it to this comment / post.