Tuesday, February 11, 2020

Retrieve records from Amazon RDS through Salesforce Apex class

Motivation behind this


While doing a proof of concept on retrieving records from Amazon RDS (Relational Database Service), I faced significant challenges with the authorization mechanism, specifically the AWS Signature Version 4 signing process, which motivated me to write this post.

Most of the examples I have come across cover posting or retrieving files to/from AWS S3, but AWS integration can go much further.

Let's start with a use case.

Use Case


The business has a requirement to view data in Salesforce while the data itself is maintained in a PostgreSQL database. Amazon RDS hosts a REST-based web service that retrieves records from the PostgreSQL table, and a Salesforce Apex class makes a callout to that REST endpoint to fetch the data as a response.

The architecture looks like this:

For this use case, we will concentrate on the Apex part, assuming we already have an endpoint to connect to AWS.

Solution Approach


We will create an Apex class that makes the callout to the endpoint.




Flow Diagram


The flow diagram shows the step-by-step signing process used to perform the callout.



Broadly, the signing process can be divided into the following four steps:

  • Create Canonical Request
  • Create String to Sign
  • Calculate Signature
  • Create Request Header and perform callout
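Since Apex cannot be executed outside a Salesforce org, the first two tasks can be sanity-checked with a short sketch in another language. Here is a minimal Python equivalent of Tasks 1 and 2, using hypothetical host, stage, and date values that mirror the Apex class in this post:

```python
import hashlib

# Hypothetical request values, mirroring the Apex class in this post
host = 'api.amazonaws.com'
stage = 'qa'
service_name = 'retrieveAccount'
amzdate = '20200211T120000Z'   # in Apex: Datetime.now().formatGMT(...)
datestamp = '20200211'
region, service = 'us-east-1', 'execute-api'

# Task 1: canonical request
canonical_uri = '/' + stage + '/' + service_name
canonical_querystring = ''     # GET request with no query parameters
canonical_headers = 'host:' + host + '\n' + 'x-amz-date:' + amzdate + '\n'
signed_headers = 'host;x-amz-date'
payload_hash = hashlib.sha256(b'').hexdigest()   # empty payload for GET

canonical_request = '\n'.join([
    'GET', canonical_uri, canonical_querystring,
    canonical_headers, signed_headers, payload_hash,
])

# Task 2: string to sign
credential_scope = '/'.join([datestamp, region, service, 'aws4_request'])
string_to_sign = '\n'.join([
    'AWS4-HMAC-SHA256', amzdate, credential_scope,
    hashlib.sha256(canonical_request.encode()).hexdigest(),
])
```

The SHA-256 of an empty payload is always the same well-known constant (e3b0c442...), which is handy for spotting canonical-request mistakes in debug logs.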

Code Sample


The guidance below is based on the AWS documentation for the Signature Version 4 Signing Process.


public class MyAWSService{

 //request values
 String region = 'us-east-1';
 String service = 'execute-api';
 String key = 'Provide AWS Key'; //AWS key
 String secret = 'Provide AWS Secret key'; //AWS Secret key
 String host = 'api.amazonaws.com'; 
 String stage = 'qa'; //dev or qa
 String serviceName = 'retrieveAccount'; //this actually fetches the data from PostgreSQL
 
 String method = 'GET';
 String canonical_querystring = '';

 String qaAPIKey = 'Provide API Key';
 
 //date for headers and the credential string
 String amzdate = Datetime.now().formatGMT('yyyyMMdd\'T\'HHmmss\'Z\'');
 String datestamp = Datetime.now().formatGMT('yyyyMMdd');
  
 public void runScript(){
  
  HttpRequest req = new HttpRequest();
  //set the request method (GET, PUT etc.)
  req.setMethod(method);
  
  //create end point URL
  String endPoint = 'https://' + host + '/' + stage + '/' + serviceName;
  system.debug('Endpoint='+endPoint); //'https://api.amazonaws.com/qa/retrieveAccount'

  //assign endpointURL to request
  req.setEndpoint(endPoint);  
  
  //************* TASK 1: CREATE A CANONICAL REQUEST *************
  //Step 1 is to define the verb (GET, POST, etc.)
  //Step 2: Create canonical URI--the part of the URI from domain to query 
  String canonical_uri = '/' + stage + '/' + serviceName;    // '/qa/retrieveAccount'
  
  /* Step 3: Create the canonical query string. In this example (a GET request),
     request parameters would be in the query string. Query string values must
     be URL-encoded, and the parameters must be sorted by name. */
  canonical_querystring = '';
  
  //Step 4: Create the canonical headers and signed headers.
  String canonical_headers = 'host:' + host + '\n' + 'x-amz-date:' + amzdate + '\n';
  System.debug('##canonical_headers:' + canonical_headers);
  
  //Step 5: Create the list of signed headers.
  String signed_headers = 'host;x-amz-date';
  
  //Step 6: Create payload hash (hash of the request body content).
  //For GET requests, the payload is an empty string ('').
  Blob payload = Blob.valueOf('');
  String payload_hash = EncodingUtil.convertToHex(Crypto.generateDigest('SHA-256', payload));
  
  //Step 7: Combine elements to create the canonical request
  String canonical_request = method + '\n' 
      + canonical_uri + '\n'  
      + canonical_querystring + '\n' 
      + canonical_headers + '\n' 
      + signed_headers + '\n' 
      + payload_hash;
        
  System.debug('canonical_request=' + canonical_request);

  //************* TASK 2: CREATE THE STRING TO SIGN*************
  String algorithm = 'AWS4-HMAC-SHA256';
  String credential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request';
  String string_to_sign = algorithm + '\n' +  amzdate + '\n' +  credential_scope + '\n' + 
        EncodingUtil.convertToHex(Crypto.generateDigest('sha256', Blob.valueOf(canonical_request)));
  System.debug('String_to_sign: ' + string_to_sign);

  //************* TASK 3: CALCULATE THE SIGNATURE *************
  //generate signing key
  Blob signingKey = createSigningKey(secret);
  
  //generate signature  
  String signature =  createSignature(string_to_sign, signingKey); 
  
  //************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
  String authorization_header = algorithm + ' ' 
      + 'Credential=' + key + '/' 
      + credential_scope + ', ' 
      +  'SignedHeaders=' + signed_headers + ', ' 
      + 'Signature=' + signature;
       
  req.setHeader('Authorization',authorization_header);
  
  //The request can include any headers,
  req.setHeader('x-api-key', qaAPIKey);
  req.setHeader('x-amz-date', amzdate);
  req.setHeader('Accept', 'application/json');
  
  Http http = new Http();
  HTTPResponse res = http.send(req);
  System.debug('*Resp:' + String.ValueOF(res.getBody()));
  System.debug('RESPONSE STRING: ' + res.toString());
  System.debug('RESPONSE STATUS: ' + res.getStatus());
  System.debug('STATUS_CODE: ' + res.getStatusCode()); 
 
 }

 //key derivation functions
 private Blob createSigningKey(String secretKey){
        Blob dateKey = signString(Blob.valueOf(datestamp),Blob.valueOf('AWS4'+secretKey));
        Blob dateRegionKey = signString(Blob.valueOf(region),dateKey);
        Blob dateRegionServiceKey = signString(Blob.valueOf(service),dateRegionKey);
        return signString(Blob.valueOf('aws4_request'),dateRegionServiceKey);
    }

 private Blob signString(Blob msg, Blob key){
        return Crypto.generateMac('HMACSHA256', msg, key);
    } 
 
 private String createSignature(String stringToSign, Blob signingKey){        
  return EncodingUtil.convertToHex(Crypto.generateMac('HMACSHA256', blob.valueof(stringToSign), signingKey));
    }
}

A few points to note:


  • The service is set to 'execute-api'.
  • The same code can be reused for other HTTP methods such as POST, PATCH, etc.
  • For the GET method, the payload should be an empty string.
  • Query parameter keys must be sorted.
  • The signing algorithm is AWS4-HMAC-SHA256.
  • At minimum, the request headers 'x-api-key', 'x-amz-date', and 'Accept' must be passed.


If any step is not followed exactly, the request will in most cases fail with a "Signature does not match" error.


The AWS guide Troubleshooting AWS Signature Version 4 Errors also helps in resolving such issues.


It took me a few days to implement the authorization mechanism correctly.


Hope it helps!



Further Reading


Saturday, February 1, 2020

Approach: Dynamically insert records based on External Id with maintaining relationship

Motivation behind this


One of my mentees was struggling with the approach and code sample for the use case below, which motivated me to write this post.


Use Case


The business has a requirement to view data in Salesforce, where the data is provided by an external system that acts as the source of truth. The external system hosts a web service endpoint, and a developer can perform a callout to fetch the data in JSON format.

For example, the external system maintains Account and Inventory information, which should be fetched and inserted as records in Salesforce.

The developer is also looking for a configurable data mapping, so that any field or object name can be resolved on the fly, relationships among the objects prepared, and finally the data inserted.


Solution Approach


The developer builds the solution with the help of an Apex class and Custom Metadata Types, as per the following diagram.



For example, Account data is provided by the external system as shown below; the actual payload is JSON, but it is plotted in spreadsheet columns here for better understanding.



Since the external system's data is the source of truth, we need to create an External Id field on the Account object and on the custom Inventory object (Inventory__c).

The Account object will also have a lookup relationship to the Inventory object (relationship name: Inventory__r).

Let us assume an Account record with External Id = 123 and an Inventory record with External Id = 125 have been created earlier.

So, when we insert the child Account B record, it will also maintain the relationships to parent Account A and to Inventory record 125 through their respective external Ids.

To make the mapping configurable, JSON attributes and Salesforce field names should be maintained in Custom Metadata Types with the following information.
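To illustrate the metadata-driven mapping idea outside of Apex, here is a small Python sketch; the attribute and field names are hypothetical, standing in for rows of the Custom Metadata Type:

```python
# Hypothetical mapping rows, as they might be stored in a Custom Metadata Type:
# JSON attribute name -> Salesforce field API name
field_mapping = {
    'accountName': 'Name',
    'externalId': 'External_Id__c',
}

# A record as it might arrive in the external system's JSON response
json_record = {'accountName': 'Test Acct C', 'externalId': '124'}

# Translate JSON attributes to field API names; on the Apex side this is the
# loop that calls sobject.put(fieldApiName, value) for each mapped attribute
sobject_fields = {field_mapping[attr]: value
                  for attr, value in json_record.items()
                  if attr in field_mapping}
```

Because the mapping lives in metadata rather than code, adding a new field is a configuration change, not a deployment.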


Here, the main challenge is creating the object instance dynamically, populating its fields using the FieldAPINames, and creating records while maintaining the relationships.

For the sake of simplicity, the web service callout, parsing records from JSON, and fetching the field mapping from Custom Metadata Types have been omitted.

Only the challenging part, with optimized code, is provided below:

Code Sample




//create an instance of Account object, take this object name from Object field of custom metadata type.
String sObjectName = 'Account';
SObject acct = (SObject)(Type.forName('Schema.'+ sObjectName).newInstance());
//add fields based on FieldAPIName of Custom metadata types and values of JSON
acct.put('Name', 'Test Acct C');
acct.put('External_Id__c' ,'124');

//To relate to parent Account record, take this from Custom metadata type's 'Relationship Object' field 
String ParentObjName = 'Account'; //Relationship Object

//create an instance of related parent account object instance
SObject accRelationship = (SObject)(Type.forName('Schema.'+ ParentObjName).newInstance());
accRelationship.put('External_Id__c','123'); //Relationship Field API Name

//here mention Relationship API Name from Custom metadata type using putSObject method
acct.putSObject('Parent',accRelationship);

//similarly to relate Inventory record
String relatedLookupObjectName = 'Inventory__c';
SObject invRelationship = (SObject)(Type.forName('Schema.'+ relatedLookupObjectName).newInstance());
invRelationship.put('External_Id__c','125');
acct.putSObject('Inventory__r',invRelationship);

//it can be added to a list and insert that
insert acct;




You can see that, to maintain the parent relationship of the Account, an Account object instance is created with only External_Id__c populated, and the putSObject method of SObject is then used to attach it as the related record.

Also, the object instance is created with the Type.forName reflection technique, which is faster than using SObject describe calls.

All of the hard-coded String values can be replaced with dynamic values.

Sometimes code looks simple in hindsight, but it takes time to arrive at a proper approach, which I have tried to portray here. Hope it helps.



Sunday, January 12, 2020

Upload Files to AWS S3 using Lightning Web Components

Motivation behind this


I was searching for a solution to upload files to AWS S3 from Lightning Web Components (LWC). Though a few solutions exist, using either Apex code or JavaScript, I did not find a guided approach that covers setting up the AWS S3 account as well as building a reusable Lightning Web Component that can be leveraged in any project.

Also, the base lightning-file-upload component uploads files into Salesforce, which does not fit this requirement since I want to store files directly in AWS.

This drove me to build a proof of concept and share the knowledge and challenges with community members.

Let's get started.

Use Case


The business has a requirement not to store files and attachments in Salesforce; instead, they have procured an AWS S3 account and want to leverage that storage for files. This helps the business stay within the Salesforce org's file storage limits, since additional storage comes at a cost.

The developer wants to build it with Lightning Web Components so it can be reused anywhere as necessary. The screen will ask the user to choose or drag and drop files, which will then be uploaded to the AWS S3 bucket.

The developer will also store the file information (e.g. name, link to the AWS location) in a custom Attachment object related to the record.

Possible End Results


After building the use case, the functionality works as shown in the following video:



Solution Approach


We will create fileUploadLWC component for this functionality.

Flow Diagram


Creating a flow diagram like this helps to understand the functionality and flow of control, and to finalize the design approach.



Create AWS Account and S3 Bucket


First, create an AWS account and S3 bucket, referring to the documentation Create an Amazon S3 Bucket.

Here are the steps I followed.

Reach the following screen and click Create Bucket.


Define Name and Region

The region is important here. Initially I chose Asia Pacific (Mumbai) instead of the default US East (N. Virginia), which caused challenges during testing (refer to the Solving Pain Points section). Later I changed it back to the default US East.



Configure Options

Leave the defaults as they are and go to the next step.



Set Permissions

I unchecked Block all public access.



Review

We can change the settings if required and finally create the bucket.



Create Access Key and Access Secret

Since we will be connecting through the API from Salesforce, an access key and secret are needed.


After doing this, manually upload a file to verify that everything is okay.


Building Lightning Web Components




For this POC, we will attach file records under the Opportunity object. To store file information, create a custom Attachment object with the following fields:

  • File Name (File_Name__c) Text
  • File URL (FileURL__c) URL
  • Opportunity (Opportunity__c) Lookup(Opportunity)

fileUploadLWC.html will prepare a screen like this:


The HTML code is as follows:

<template>
    <article class="slds-tile">
    <!-- file upload section -->
    <lightning-card variant="Narrow" title="Lightning Web Component File Uploader for AWS" 
            style="width:30rem"    icon-name="custom:custom14"> 
        <div style="margin-left:5%">
            <div>
                <lightning-input label="" name="file to uploder" onchange={handleSelectedFiles} 
                    type="file" multiple></lightning-input>
            </div><br/>            
            <div class="slds-text-body_small">{fileName}
            <template if:true={showSpinner}>
                <lightning-spinner alternative-text="Uploading the file......" size="medium">                        
                    </lightning-spinner>
            </template>
            </div><br/>
            <div>
                <lightning-button class="slds-m-top--medium" label="Store File to AWS" onclick={handleFileUpload} 
                    variant="brand">
                </lightning-button>
            </div>
        </div><br/><br/>
        <!--displaying uploaded files-->
        <template if:true={tableData}>
            <lightning-card title="Following files uploaded:">
                <div style="width: auto;">                    
                    <ul class="slds-m-around_small">
                        <template for:each={tableData} for:item="attachment">
                            <li key={attachment.Id}>
                                {attachment.File_Name__c}, 
                                <lightning-formatted-url value={attachment.FileURL__c} target="_blank">{attachment.FileURL__c}</lightning-formatted-url>
                            </li>
                        </template>
                    </ul>                    
                </div>
            </lightning-card>
        </template>
    </lightning-card>
    </article>
</template>

fileUploadLWC.js

It performs the operations shown in the flow diagram; context-sensitive comments are included inline.


/* eslint-disable no-console */
import { LightningElement, api, track } from 'lwc';
import uploadFileToAWS from '@salesforce/apex/AWSFileUploadController.uploadFileToAWS'; 
import displayUploadedFiles from '@salesforce/apex/AWSFileUploadController.displayUploadedFiles';       
import {ShowToastEvent} from 'lightning/platformShowToastEvent';

export default class fileUploadLWC extends LightningElement {
    @api recordId; //get the recordId for which files will be attached.
    selectedFilesToUpload = []; //store selected files
    @track showSpinner = false; //used for when to show spinner
    @track fileName; //to display the selected file name
    @track tableData; //to display the uploaded file and link to AWS
    file; //holding file instance
    myFile;    
    fileType;//holding file type
    fileReaderObj;
    base64FileData;
    

     // get the file name from the user's selection
     handleSelectedFiles(event) {
        if(event.target.files.length > 0) {
            this.selectedFilesToUpload = event.target.files;
            this.fileName = this.selectedFilesToUpload[0].name;
            this.fileType = this.selectedFilesToUpload[0].type;
            console.log('fileName=' + this.fileName);
            console.log('fileType=' + this.fileType);
        }
    }
    
    //parsing the file and prepare for upload.
    handleFileUpload(){
        if(this.selectedFilesToUpload.length > 0) {
            this.showSpinner = true;
            
            this.file = this.selectedFilesToUpload[0];
            //create an instance of FileReader
            this.fileReaderObj = new FileReader();

            //callback invoked when fileReaderObj.readAsDataURL completes
            this.fileReaderObj.onloadend = (() => {
                //get the uploaded file in base64 format
                let fileContents = this.fileReaderObj.result;
                fileContents = fileContents.substr(fileContents.indexOf(',')+1)
                
                //read the file chunkwise
                let sliceSize = 1024;           
                let byteCharacters = atob(fileContents);
                let bytesLength = byteCharacters.length;
                let slicesCount = Math.ceil(bytesLength / sliceSize);                
                let byteArrays = new Array(slicesCount);
                for (let sliceIndex = 0; sliceIndex < slicesCount; ++sliceIndex) {
                    let begin = sliceIndex * sliceSize;
                    let end = Math.min(begin + sliceSize, bytesLength);                    
                    let bytes = new Array(end - begin);
                    for (let offset = begin, i = 0 ; offset < end; ++i, ++offset) {
                        bytes[i] = byteCharacters[offset].charCodeAt(0);                        
                    }
                    byteArrays[sliceIndex] = new Uint8Array(bytes);                    
                }
                
                //from arraybuffer create a File instance
                this.myFile =  new File(byteArrays, this.fileName, { type: this.fileType });
                
                //callback for final base64 String format
                let reader = new FileReader();
                reader.onloadend = (() => {
                    let base64data = reader.result;
                    this.base64FileData = base64data.substr(base64data.indexOf(',')+1); 
                    this.fileUpload();
                });
                reader.readAsDataURL(this.myFile);                                 
            });
            this.fileReaderObj.readAsDataURL(this.file);            
        }
        else {
            this.fileName = 'Please select a file to upload!';
        }
    }

    //this method calls Apex's controller to upload file in AWS
    fileUpload(){
        
        //implicit call to apex
        uploadFileToAWS({ parentId: this.recordId, 
                        strfileName: this.file.name, 
                        fileType: this.file.type,
                        fileContent: encodeURIComponent(this.base64FileData)})
        .then(result => {
            console.log('Upload result = ' +result);            
            this.fileName = this.fileName + ' - Uploaded Successfully';
            //call to show uploaded files
            this.getUploadedFiles(); 
            this.showSpinner = false;
            // Showing Success message after uploading
            this.dispatchEvent(
                new ShowToastEvent({
                    title: 'Success!!',
                    message: this.file.name + ' - Uploaded Successfully!!!',
                    variant: 'success',
                }),
            );
        })
        .catch(error => {
            // Error to show during upload
            window.console.log(error);
            this.dispatchEvent(
                new ShowToastEvent({
                    title: 'Error in uploading File',
                    message: error.message,
                    variant: 'error',
                }),
            );
            this.showSpinner = false;
        });        
    }

    //retrieve uploaded file information to display to the user
    getUploadedFiles(){
        displayUploadedFiles({parentId: this.recordId})
        .then(data => {
            this.tableData = data;
            console.log('tableData=' + this.tableData);
        })
        .catch(error => {
            this.dispatchEvent(
                new ShowToastEvent({
                    title: 'Error in displaying data!!',
                    message: error.message,
                    variant: 'error',
                }),
            );
        });
    }
}


AWSFileUploadController.cls

The uploadFileToAWS method takes the file information from the JS controller, builds the endpoint, and posts the file as a blob with an HTTP callout.


public with sharing class AWSFileUploadController {
    
    //This method is used to post file to AWS
    @AuraEnabled
    public static boolean uploadFileToAWS(Id parentId,
                                        String strfileName, String fileType,
                                        String fileContent){
        System.debug('parentId=' + parentId);
        System.debug('strfileName=' + strfileName);
        System.debug('fileType=' + fileType);
        HttpRequest req = new HttpRequest();

        Blob base64Content = EncodingUtil.base64Decode(EncodingUtil.urlDecode(fileContent, 'UTF-8'));
        String attachmentBody =  fileContent;           
        String formattedDateString = Datetime.now().formatGMT('EEE, dd MMM yyyy HH:mm:ss z');
        String dateString = Datetime.now().format('ddMMyyyyHHmmss'); //yyyy, not YYYY (week year)
        String filename = dateString + '_' + parentId + '_' + strfileName;
        
        //AWS specific information
        String key = 'Provide AWS key'; //AWS key
        String secret = 'Provide AWS Secret key'; //AWS Secret key
        String bucketname = 'Provide AWS bucket'; //AWS bucket name
        String host = 's3.amazonaws.com:443'; //'s3.amazonaws.com:443'
       
        String method = 'PUT';
        String endPoint = 'https://' + bucketname + '.' + host + '/'+ filename;

        req.setMethod(method);
        req.setEndpoint(endPoint);

        system.debug('Endpoint='+endPoint);
        //create header information
        req.setHeader('Host', bucketname + '.' + host);
        req.setHeader('Access-Control-Allow-Origin', '*');
        req.setHeader('Content-Length', String.valueOf(base64Content.size()));
        req.setHeader('Content-Encoding', 'UTF-8');
        req.setHeader('Content-type', fileType);
        req.setHeader('Connection', 'keep-alive');
        req.setHeader('Date', formattedDateString); 
        req.setHeader('ACL', 'public-read');
        //store file as blob       
        req.setBodyAsBlob(base64Content);
        
        //prepare for signing information
        String stringToSign = 'PUT\n\n' +
        fileType + '\n' +
        formattedDateString + '\n' + '/' + bucketname + '/' + filename;

        Blob mac = Crypto.generateMac('HMACSHA1', blob.valueof(stringToSign),blob.valueof(secret));
        String signedKey  = EncodingUtil.base64Encode(mac);

        //assign Authorization information
        String authHeader = 'AWS' + ' ' + key + ':' + signedKey;                    
        req.setHeader('Authorization',authHeader);

        //finally send information to AWS        
        Http http = new Http();
        HTTPResponse res = http.send(req);

        System.debug('*Resp:' + String.ValueOF(res.getBody()));
        System.debug('RESPONSE STRING: ' + res.toString());
        System.debug('RESPONSE STATUS: ' + res.getStatus());
        System.debug('STATUS_CODE: ' + res.getStatusCode());

        if(res.getStatusCode() == 200){
            insertAttachmentRecord (parentId,strfileName,endPoint);
            return true;
        }
        return false;
    }

    //This method inserts file information to Custom Attachment object
    public static void insertAttachmentRecord (Id parentId, String fileName, String fileURL){
        Attachment__c attachment = new Attachment__c();
        attachment.Opportunity__c = parentId;
        attachment.FileURL__c = fileURL;
        attachment.File_Name__c =  fileName;
        insert attachment;                                           
    }

    //This method retrieves Attachment record based on OpportunityId
    @AuraEnabled
    public static List<Attachment__c> displayUploadedFiles(Id parentId){
        return [SELECT Id, File_Name__c, FileURL__c FROM Attachment__c
                WHERE Opportunity__c =:parentId];
    }
}

After the code is ready, change the meta.xml file to expose the component on the record detail page.

Since we are making callouts, the endpoint URL must be defined in Remote Site Settings.





Solving Pain Points


Here are the issues I faced and their possible solutions:

1. For my AWS account, the bucket's region was other than the default, and a newly chosen region may not be activated within 24 hours. So, if you try to make callouts, you might face the following Temporary Redirect issue with response status code 307.


To solve this, I changed the region to US East.

2. If the bucket's permissions are restricted, then after posting the file from the application, you might face an access denied error when accessing the link from a web browser.

To resolve this, go to the bucket policy and check that the JSON includes the following actions:




Action": [
    "s3:PutObject",
    "s3:PutObjectAcl",
    "s3:GetObject",
    "s3:GetObjectAcl"
 ]

Without s3:GetObjectAcl, accessing the URL from a browser will not be allowed.

3. The file should be read chunk-wise; otherwise there will be errors when uploading the file.
4. Preparing the signing information to generate the signedKey is crucial.

That's all I have faced.
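The chunk-wise read from pain point 3 mirrors what fileUploadLWC.js does with 1024-byte slices. A Python sketch showing that slicing and reassembling preserves the bytes:

```python
import math

data = bytes(range(256)) * 10   # 2560 bytes of sample file content
slice_size = 1024               # same chunk size as the JS controller

# Slice the content exactly as the JS loop does with sliceIndex/begin/end
slices_count = math.ceil(len(data) / slice_size)
byte_arrays = [data[i * slice_size:(i + 1) * slice_size]
               for i in range(slices_count)]

# Joining the slices back recovers the original content byte for byte
reassembled = b''.join(byte_arrays)
```

The last slice is simply shorter than the rest; no padding is needed.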

Finally, on the Opportunity record detail page, add a new tab, expose the fileUploadLWC component, and run the application.

Hope, it helps!

